Apple is making it easier to protect children from sexually explicit content in Messages by expanding its Communication Safety features in the initial iOS 18.2 beta. A new option not only blurs nude photos and videos to prevent children from seeing them, it also allows them to report the messages to Apple.
The Guardian reports that the Communication Safety feature in iOS doesn’t search for photo matches in databases or report back to Apple or law enforcement. Instead, machine learning analyzes the pictures being sent and received in the Messages app to identify images that might include nudity, blurring those images and requiring the user to take additional steps to view them. No information about these photos is ever sent from the child’s device.
Now, Apple is adding a feature to Communication Safety in iOS 18.2 that allows kids to report any unsolicited nudes they receive. While the feature is initially limited to users in Australia, Apple says it “will be released globally in the future.”
The iPhone automatically detects images and videos containing nudity that children might attempt to receive or send in the iMessage, AirDrop, FaceTime, and Photos apps. The detection happens on the device to protect privacy.
If a nude image is detected, the child must view two intervention screens before they can proceed, and they are offered resources or a way to contact a parent or guardian.
When the warning appears, users will also have the option to report the images and videos to Apple.
The device prepares a report containing the sensitive images or videos, as well as any messages sent immediately before and after the image or video was received or sent. The report includes the contact information for both accounts, and users can fill out a form describing what happened.
The report will be reviewed by Apple, which can take action on an account – such as disabling that user’s ability to send messages over iMessage – and also report the issue to law enforcement.
The feature is optional, and it won’t compromise the security of iMessage’s end-to-end encryption: Apple will not have access to the content of iMessage conversations unless one of the participants chooses to make a report.
Apple’s choice of Australia as the first region to receive the new feature coincides with new regulatory codes coming into force in the country. By the end of this year, tech companies operating cloud and messaging services in Australia will be required to police child abuse and terror content on those services.