Apple Announces New Machine Learning-led Child Safety Features

Tech giant Apple has announced a range of new machine learning-led safety measures designed to protect children from exposure to sexually explicit material and to detect child sexual abuse material (CSAM).

The first of these is a new communication safety feature in Apple’s Messages app: a warning will pop up when a child in an iCloud Family receives, or attempts to send, sexually explicit photos.

Any such images received by a child will be blurred, accompanied by a message stating that the photo “may be sensitive.” If the child taps “view photo,” a further pop-up will explain that, if they choose to view the image, their iCloud Family parent will receive a notification “to make sure you’re OK.” The pop-up will also contain a link to additional help. A similar flow applies to sexually explicit photos a child attempts to send.

An on-device machine learning system will analyze image attachments to determine whether a photo is sexually explicit. Apple also confirmed that iMessage remains end-to-end encrypted and that it will not have access to any of the messages.

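Apple has not published the classifier or any API for this screening step, so the following is only a minimal Swift sketch of the general on-device pattern, assuming a hypothetical bundled Core ML model named SensitivityClassifier. The point it illustrates is that the analysis runs locally and produces only a yes/no verdict, which is how the feature can coexist with end-to-end encryption.

```swift
import CoreGraphics
import CoreML
import Vision

// A minimal sketch of on-device image screening, not Apple's actual code.
// "SensitivityClassifier" is a hypothetical Core ML model; Apple has not
// published its classifier or an API for this feature.
func isLikelySensitive(_ image: CGImage, completion: @escaping (Bool) -> Void) {
    guard
        let mlModel = try? SensitivityClassifier(configuration: MLModelConfiguration()).model,
        let visionModel = try? VNCoreMLModel(for: mlModel)
    else {
        completion(false) // No model available: treat the image as ordinary.
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Take the top label and require high confidence before flagging,
        // keeping false positives rare.
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier == "explicit" && (top?.confidence ?? 0) > 0.9)
    }

    // The image is processed entirely on the device; only the Bool verdict
    // is surfaced to the UI (e.g., to trigger the blur and warning).
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```
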
The opt-in feature will be rolled out “later this year to accounts set up as families in iCloud for iOS 15, iPadOS 15, and macOS Monterey,” starting in the US.

The next measure enables Apple to detect known CSAM stored in iCloud Photos and report it to the National Center for Missing and Exploited Children (NCMEC). New technology in iOS and iPadOS will perform on-device matching against a database of known CSAM image hashes provided by NCMEC and other child safety organizations.
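
Apple’s accompanying technical summary describes this matching as a perceptual hash (NeuralHash) combined with a private set intersection protocol, so the device never learns the contents of the hash database and Apple learns nothing about non-matching photos. Stripped of that cryptography, the core operation is a membership test of an image-derived hash against a vetted hash set; here is a minimal sketch, using an ordinary SHA-256 digest as a stand-in for the perceptual hash (a real perceptual hash, unlike SHA-256, tolerates resizing and re-encoding of the same photo).

```swift
import CryptoKit
import Foundation

// A minimal sketch of hash-set matching, not Apple's actual system. Apple's
// published design uses a perceptual hash (NeuralHash) plus private set
// intersection; a plain SHA-256 digest is used here only to keep the sketch
// self-contained and runnable.
struct KnownHashDatabase {
    private let knownHashes: Set<Data>

    init(knownHashes: Set<Data>) {
        self.knownHashes = knownHashes
    }

    /// Hashes the photo's bytes and tests membership in the known set.
    func matches(photoData: Data) -> Bool {
        let digest = Data(SHA256.hash(data: photoData))
        return knownHashes.contains(digest)
    }
}

// Hypothetical usage: build the database from the hash list shipped with
// the OS, then test each photo before it is uploaded to iCloud Photos.
// let database = KnownHashDatabase(knownHashes: shippedHashes)
// let flagged = database.matches(photoData: photoBytes)
```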
