Increasing cases of child sexual abuse material being distributed online have prompted calls to crack down on perpetrators by removing the veil of anonymity that users enjoy when signing up for social media accounts.
At an inquiry into the ability of law enforcement agencies to tackle child exploitation, Mark Zirnsak, a senior social justice advocate at the Uniting Church, said it is very difficult for police to track down people who are committing horrific crimes against children.
He noted in particular that of the 21,000 reports of online child exploitation received by the Australian Federal Police (AFP) in 2020, only 191 people were charged, with a total of 1,847 offences.
Zirnsak argued in favour of mandating personal identification requirements on online platforms prior to account creation, which would allow police, if necessary, to quickly determine the identity of an individual reported to have distributed child abuse material.
“If you misuse your account, law enforcement can identify you, and they don’t waste a lot of time doing it,” he said.
Currently, the idea of providing ID to Big Tech falls into two main scenarios: one in which an individual’s name is publicly exposed, and another in which the name remains private but can be accessed by police during an investigation.
Zirnsak accused stubborn advocates of online anonymity of ignoring the health and wellbeing of the exploited children at the root of the problem.
“Unfortunately, the current public debate on this is very disappointing … reading their frequent submissions in this area of online regulation, they do not acknowledge that the abuse of children online is a violation of human rights,” he said.
Groups such as Digital Rights Watch have previously spoken in favour of maintaining online anonymity on the basic principle of freedom of speech.
“Anonymity is absolutely essential for the free and open Internet to work,” Digital Rights Watch (DRW) said.
DRW cited concerns about government overreach, warning that citizens risk being silenced if police are empowered to identify who is posting online.
DRW also said it was concerned about the prospect of handing personal data to Big Tech, particularly as data breaches are not unprecedented; Facebook confirmed its most serious breach in 2018.
At the same inquiry, Google and Facebook outlined how they prioritise the development of mechanisms to prevent the spread of child abuse material.
Antigone Davis, Facebook’s Global Director of Safety, outlined a two-part approach. “First, we build teams of professionals working in this area,” Facebook explained in an earlier submission. “In recent years, more than 35,000 people have been working on safety and security.”
“Second, we invest in cutting-edge technology to detect and remove child abuse material … the Australian Federal Police are reviewing these algorithms and using these technologies, along with many other applications of artificial intelligence, as part of their work to protect children in Australia.”
However, the high-tech approach to removing content, and the report-based mechanisms that underpin the idea of verified-identity sign-up, apply only to material posted in public forums.
Content sent through many of the end-to-end encrypted chat services currently deployed, including Facebook’s existing WhatsApp, remains hidden behind closed doors.
This concern was raised with police and the tech giants by Labor MP Anne Aly, who pointed out that child abuse material remains untraceable if it is filtered through private end-to-end encrypted channels.
“(These) services … people can set up closed groups that can share these images. It is like entering a room, closing the door, locking it behind you and sharing those images with each other,” she said.
“Because they are all part of that group, no one there is going to report it.”