Facebook has become more than just another social media app. It now employs artificial intelligence to scrutinize posts for suicidal intent.
The AI is fairly smart: whenever it detects a post with suicidal themes, it sends the post to a human moderator for verification. The moderator then acts according to the situation, either sending the user links and resources about mental health or, if there is an immediate threat to the user's life, contacting close associates who are likely to know the person's whereabouts.
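As a rough illustration, here is a minimal sketch of that triage flow in Python. Facebook has not published its implementation, so every name, type, and action below is hypothetical; the point is only the division of labor between the AI's flag and the human's decision.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    NONE = 0        # nothing to do
    CONCERNING = 1  # send mental-health links and resources
    IMMINENT = 2    # reach out to people likely to know the user's whereabouts

@dataclass
class Post:
    author: str
    text: str

def send_resources(author: str) -> None:
    print(f"Sending mental-health resources to {author}")

def notify_close_contacts(author: str) -> None:
    print(f"Contacting close associates of {author}")

def triage(post: Post, ai_flagged: bool, moderator_assessment: Risk) -> Risk:
    """Hypothetical flow: the AI only flags; a human decides what happens."""
    if not ai_flagged:
        return Risk.NONE
    if moderator_assessment is Risk.IMMINENT:
        notify_close_contacts(post.author)
    elif moderator_assessment is Risk.CONCERNING:
        send_resources(post.author)
    return moderator_assessment
```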
The program has been out for some time as part of its testing stage. The tool was initially limited to the US, but it is now launching in other countries as well.
The sad news is that these countries won't include any European nation: the data protection laws there leave Facebook no option but to forgo collecting user data in this way.
The thinking behind this program apparently extends even further: it should serve as a reminder that AI is "helping save people's lives today." This is all according to a Facebook post by the company's own CEO, Mark Zuckerberg.
According to him, there have already been more than 100 cases in which Facebook contacted first responders to ask for their help in assisting a user having suicidal thoughts.
As per Zuckerberg, “If we can use AI to help people be there for their family and friends, that’s an important and positive step forward.”
With such emphasis on AI, one might have expected Zuckerberg to also explain how the AI actually works, or, simply put, how it traces a person who is in immediate danger. There were, however, a few hints at how it is implemented.
The AI is basically trained on earlier data in which similar posts were flagged by human users. It also looks for tell-tale signs in comments, such as "Are you feeling okay?" or "Is there anything I can do to help?", and watches for live video streams drawing an unusually high number of reactions, comments, or reports in a short span of time.
But the AI's work stops here. The heavier lift falls to the human reviewers, who have to go through the flagged posts themselves and decide whether each one is legitimate and requires any action.
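Based on those hints, a toy version of the flagging stage might look like the sketch below. The phrase patterns, thresholds, and function names are invented for illustration; the real classifier is reportedly trained on posts that users previously flagged, rather than on hand-written rules.

```python
import re

# Phrases that concerned friends often leave under worrying posts;
# the real system learns such signals from previously flagged posts.
CONCERN_PATTERNS = [
    re.compile(r"are you (feeling )?ok(ay)?", re.IGNORECASE),
    re.compile(r"is there anything i can do to help", re.IGNORECASE),
]

def comment_signal(comments: list[str]) -> int:
    """Count comments matching any concern pattern."""
    return sum(
        1 for c in comments
        if any(p.search(c) for p in CONCERN_PATTERNS)
    )

def engagement_spike(reactions: int, comments: int, reports: int,
                     window_minutes: float,
                     per_minute_threshold: float = 5.0) -> bool:
    """Flag live streams whose reactions/comments/reports arrive unusually fast."""
    rate = (reactions + comments + reports) / max(window_minutes, 1.0)
    return rate >= per_minute_threshold

def should_flag_for_review(comments: list[str], reactions: int,
                           reports: int, window_minutes: float) -> bool:
    """Send the post to a human reviewer if either signal fires."""
    return (
        comment_signal(comments) >= 2
        or engagement_spike(reactions, len(comments), reports, window_minutes)
    )
```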
Research has shown that artificial intelligence can play an important role in detecting mental health problems. One recent study used machine learning to predict whether an individual would attempt suicide within the following two years, reportedly achieving 80 to 90 percent accuracy.
As with any sort of data collection, though, the program is a cause for concern for some. It can be read as an encroachment on privacy rights, and Facebook is a company that has had working relationships with NSA-like agencies in the past.
Addressing this concern, the company's chief security officer, Alex Stamos, tweeted that "creepy/scary/malicious use of AI will be a risk forever," which is why it is important to weigh "data use versus utility."
For the time being, there is little users can really do about it, short of leaving Facebook. Let's just hope Facebook finds a way to prevent misuse of its artificially intelligent system.