In an interview published earlier this week, Facebook CEO Mark Zuckerberg indicated that his company is scanning people’s Messenger accounts for content that doesn’t abide by its policies. The comment raised questions about how broad that scanning is, and whether the company is looking through everyone’s messages in general.
Turns out, it is.
In a statement to Bloomberg, Facebook said it scans private Messenger conversations for potential abuse, using the same tools it uses to police content on its main site. The automated tools look for content that doesn’t abide by “community standards,” such as child exploitation imagery, viruses and malware, or threats of violence. People can also report posts and messages they feel are inappropriate, at which point the “community operations” team takes a look.
A Facebook spokesperson said: “For example, on Messenger, when you send a photo, our automated systems scan it using photo matching technology. Facebook designed these automated tools so we can rapidly stop abusive behavior on our platform.”
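Facebook hasn’t detailed how its photo matching works, but systems of this kind typically compare a hash of each uploaded image against a database of hashes of known abusive images. The sketch below is a hypothetical illustration of that matching step, using an exact cryptographic hash for simplicity; real systems use perceptual hashes that survive resizing and re-encoding, and the blocklist contents here are placeholders.

```python
import hashlib

# Hypothetical blocklist: hashes of known abusive images.
# Production systems use perceptual hashing (robust to resizing,
# cropping, re-encoding); SHA-256 is used here only to show the
# lookup mechanic in a self-contained way.
BLOCKLIST = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def matches_blocklist(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() in BLOCKLIST

print(matches_blocklist(b"known-bad-image-bytes"))  # True
print(matches_blocklist(b"harmless-photo-bytes"))   # False
```

The appeal of this design is that the service never needs to store the abusive images themselves, only their fingerprints, and a lookup against a hash set is fast enough to run on every upload.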
This seems like an important measure for preventing abuse online, but the reveal comes at a time when Facebook is under intense scrutiny in the wake of news that the data of over 50 million users had been handed to an analytics firm without their consent.
In the meantime, Facebook is scrambling to make its privacy policies clearer. Explaining exactly how and why it scans Messenger seems like a good chance to rectify at least a part of that.