Apple feature designed to protect children by scanning images kicks up controversy

A hot potato: During WWDC 2021 in June, Apple unveiled its upcoming device operating systems. It made a big deal about the expanded privacy features of iOS, iPadOS, and macOS Monterey. What it didn't elaborate on was its expanded protections for children, and for good reason: at face value, Apple's child-protection measures run contrary to its tough stance on user privacy.


In the most recent iOS 15 preview, Apple rolled out some features that have many privacy advocates, including the Electronic Frontier Foundation (EFF), crying "backdoor." The features are part of Apple's effort to crack down on Child Sexual Abuse Material (CSAM).


The first feature uses machine learning to look for potentially sensitive images within the Messages app of children under 12. If inappropriate material is received, the picture is blurred, and a notification tells the child it is okay not to view the photo, along with links to "helpful resources." The child is also informed that if they do open the image, their parents will be notified. The feature works in the other direction, too: if the child attempts to send an explicit photo, they receive a warning that their parents will be notified if they send it.


Apple says that all AI processing is done on the device to protect users' privacy, and nothing is ever uploaded to Apple's servers. The feature will work across all of Apple's device operating systems.
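The flow described above (an on-device model scores an incoming image, the photo is blurred, the child is warned, and parents of younger children may be notified) can be sketched roughly as follows. This is a minimal illustration only, not Apple's implementation: the types, function names, age cutoff, and score threshold are all hypothetical stand-ins.

```swift
import Foundation

// Hypothetical sketch of the described flow. Nothing here reflects Apple's
// actual APIs; all names and thresholds are invented for illustration.

struct ChildAccount {
    let age: Int
    let parentalNotificationsEnabled: Bool
}

enum MessageImageAction {
    case deliverNormally
    case blurAndWarn(notifyParentsIfViewed: Bool)
}

/// Stand-in for an on-device ML model that scores an image from 0.0 to 1.0
/// for sensitive content. A real system would run a trained classifier locally.
func sensitivityScore(for imageData: Data) -> Double {
    // Placeholder: a real model would analyze the pixel data on the device.
    return 0.0
}

func handleIncomingImage(_ imageData: Data, for account: ChildAccount) -> MessageImageAction {
    let score = sensitivityScore(for: imageData)
    let threshold = 0.8 // hypothetical cutoff

    guard score >= threshold else {
        return .deliverNormally
    }

    // Per the described behavior: blur the photo, warn the child, and
    // (for children under 12, per the article) notify parents only if
    // the child chooses to view the image anyway.
    let notify = account.age < 12 && account.parentalNotificationsEnabled
    return .blurAndWarn(notifyParentsIfViewed: notify)
}
```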



(Embedded tweet from Edward Snowden, @Snowden, August 6, 2021: pic.twitter.com/yN9DcTsBNT)

The second feature is called CSAM detection, which checks images uploaded to iCloud Photos against hashes of known CSAM.
