In recent days, Italian and international newspapers have widely reported the story of a man from San Francisco, in the United States, whose Google account was disabled in 2021 because he had taken several photos of his son’s groin for medical reasons. According to Google, the photos constituted child pornography, and for this reason the man was also reported to the police. The story was first told on Sunday in an article published by the New York Times.
The incident says a lot about the limits of the technology Google uses to automatically identify images shared through its services that may be associated with the sexual abuse of minors, as well as about the possible consequences of strict enforcement of internal rules by moderators, who are required to report any behavior considered suspicious in order to prevent possible crimes. Google commented on the story, saying that it complies with US federal law on child pornography.
The facts date back to February 2021 and concern some photos that Mark (who asked to be identified only by his first name for fear of damaging his reputation) and his wife took of their son, who is a few years old. Since the boy was experiencing pain in his penis, which was swollen, Mark told the New York Times that he took several photos of his son’s groin with his smartphone, an Android phone whose operating system is developed by Google. That Friday night, because of restrictions related to the coronavirus pandemic, his wife scheduled an online consultation with a doctor for the following day: the nurse who made the appointment asked her to send some photos in advance so the pediatrician could look at them first. So his wife took a few more, again with Mark’s phone, in which her husband’s hand was visible, pointing to the swelling.
Two days after these photos were taken, Mark received a notification from Google warning him that his account had been disabled due to “dangerous content,” described as “a serious violation of Google’s policies and potentially illegal.” The Gmail account Mark had opened in the mid-2000s and the Google Fi phone service (available only in the US) were also disabled, preventing him from making calls and using the Internet. In addition to more than fifteen years of emails, Mark also lost the photos and videos he had stored in Google’s cloud.
Mark, a programmer who over the course of his career had also worked on developing tools to remove online content that users flag as problematic, asked Google to reinstate his accounts, explaining the situation: the request, however, was denied without explanation, he said. He later learned that the San Francisco police were investigating him because of another video stored on his phone that Google had flagged as potentially dangerous.
Mark is now no longer under investigation, but his Google accounts are still disabled. After the story was published, a company spokesperson reiterated that Google follows US federal law on child pornography and uses various artificial intelligence technologies and systems “to identify and remove them from [its] platforms.” Spokesperson Krista Maldon added that the moderators who review material deemed potentially suspicious are trained by healthcare professionals to recognize irritation, rashes, or other medical conditions in images containing nudity, although the moderators themselves are not medical experts; according to Maldon, no doctors or medical professionals were consulted in Mark’s case.
The New York Times also reported that the same thing that happened to Mark happened, in the same days, to a man in Houston, Texas. According to John Callas, a technologist at the Electronic Frontier Foundation, an organization that defends civil liberties in the digital sphere, there could be “tens, hundreds, thousands” of such cases.
In 2021 alone, Google said it had filed over 600,000 reports of content that its systems deemed child pornography and disabled over 270,000 accounts for violating the relevant rules.
Children’s rights activists argue that the cooperation of the major technology platforms is essential to combat the online distribution of child pornography. Mark’s case, however, shows how this kind of control applies not only to the active sharing of images identified as potential child pornography, but extends to all the content that falls within what Callas described as the “private realm.”
At the same time, according to technical experts, these control and protection systems are error-prone and can have serious and unexpected consequences for people who never intended to do anything harmful.
Daniel Kahn Gillmore, a technologist for the American Civil Liberties Union – one of the most active US organizations for the protection of rights and freedoms – said that companies like Google have access to an enormous amount of people’s personal data and at the same time “don’t have the context anyway” to understand what their lives are actually like. And often, according to Gillmore, the analysis of potentially harmful content is complicated by the limited expertise of moderators, while technology does not seem to be a solution to the problem, but can actually make it worse.
Google’s statement reads:
Child sexual abuse material (CSAM) is abhorrent, and we are committed to preventing its spread on our platforms. We follow US law in defining what constitutes CSAM and use a combination of hash matching technology and artificial intelligence to identify it and remove it from our platforms. In addition, our dedicated child safety team reviews reported content and consults with pediatric experts to help us identify cases in which users may be seeking medical advice.
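“Hash matching,” mentioned in the statement, generally means computing a compact fingerprint of each file and comparing it against a database of fingerprints of already-known abusive images. The sketch below is a deliberately simplified illustration of that idea in Python, with a made-up placeholder hash set: it uses an exact SHA-256 digest, whereas real systems rely on perceptual hashes (such as PhotoDNA-style fingerprints) that also match resized or re-encoded copies, and it is in no way a description of Google’s actual pipeline.

    import hashlib
    from pathlib import Path

    # Placeholder set standing in for a database of fingerprints of known
    # abusive images (the value below is invented for illustration only).
    KNOWN_HASHES = {
        "9f2b5c0e8d41a7c3b6e1f4a2d8c7b5e3a1f0d9c8b7a6e5f4d3c2b1a098765432",
    }

    def file_sha256(path: Path) -> str:
        # Read the file in chunks and return its SHA-256 hex digest.
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def matches_known_content(path: Path) -> bool:
        # Flag the file if its fingerprint appears in the reference set.
        # A cryptographic hash only catches byte-identical copies; real
        # systems use perceptual hashes that survive resizing and re-encoding.
        return file_sha256(path) in KNOWN_HASHES

Newly taken photos, like Mark’s, cannot by definition appear in such a database of known material, which is presumably why the statement also mentions artificial intelligence: classifiers intended to flag previously unseen images.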