Apple’s head of privacy, Erik Neuenschwander, answers questions on new child safety policy

Matthew Panzarino, TechCrunch:

From personal experience, I know that there are people who don’t understand the difference between those first two systems, or assume that there will be some possibility that they may come under scrutiny for innocent pictures of their own children that may trigger some filter. It’s led to confusion in what is already a complex rollout of announcements. These two systems are completely separate, of course, with CSAM detection looking for precise matches with content that is already known to organizations to be abuse imagery. Communication Safety in Messages takes place entirely on the device and reports nothing externally — it’s just there to flag to a child that they are or could be about to be viewing explicit images. This feature is opt-in by the parent and transparent to both parent and child that it is enabled.

Follow the headline link and check out the second image (four iPhone screens). It does an excellent job showing off the Communication Safety mechanism implemented in Messages. The CSAM announcement raises so many issues that I think it’s worth getting a sense of this part of the process, to help distinguish it from the other half, “CSAM detection in iCloud Photos”.

If you read no other part of the interview, do scan for this question and Erik Neuenschwander’s response:

One of the bigger queries about this system is that Apple has said that it will just refuse action if it is asked by a government or other agency to compromise by adding things that are not CSAM to the database to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?

To me, this goes to the heart of a lot of the privacy concerns. There’s a lot here.

The system as designed doesn’t reveal — in the way that people might traditionally think of a match — the result of the match to the device or, even if you consider the vouchers that the device creates, to Apple. Apple is unable to process individual vouchers; instead, all the properties of our system mean that it’s only once an account has accumulated a collection of vouchers associated with illegal, known CSAM images that we are able to learn anything about the user’s account.

The way I read this is that Apple passes the vouchers along to law enforcement. It’s not clear to me what’s in those vouchers, or whether a user is notified when vouchers are sent. This whole thing feels very Orwellian.
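Apple’s technical summary for the iCloud Photos feature describes safety vouchers being protected with a combination of private set intersection and threshold secret sharing, which is what makes individual vouchers meaningless to Apple on their own. Purely as a mental model for that last property, here is a toy Shamir secret-sharing sketch in Swift. Everything in it is illustrative: the field is tiny, the threshold value is made up, and this is nowhere near Apple’s actual construction or parameters. It only shows how a secret can be split so that fewer-than-threshold shares reveal nothing while a full threshold recovers it.

```swift
// Toy Shamir secret sharing over a small prime field.
// Illustrative only -- not cryptographic, and not Apple's construction.

let p = 2_147_483_647  // the Mersenne prime 2^31 - 1; toy-sized field

func mod(_ x: Int) -> Int { ((x % p) + p) % p }

func mulmod(_ a: Int, _ b: Int) -> Int {
    // a and b are already < 2^31, so a * b fits in a 64-bit Int without overflow.
    mod(a * b)
}

// Modular exponentiation, used for the inverse via Fermat's little theorem.
func powmod(_ base: Int, _ exp: Int) -> Int {
    var result = 1
    var b = mod(base)
    var e = exp
    while e > 0 {
        if e & 1 == 1 { result = mulmod(result, b) }
        b = mulmod(b, b)
        e >>= 1
    }
    return result
}

func invmod(_ a: Int) -> Int { powmod(a, p - 2) }

// Split `secret` into `count` shares such that any `threshold` of them
// reconstruct it and any smaller set reveals nothing about it.
func makeShares(secret: Int, threshold: Int, count: Int) -> [(x: Int, y: Int)] {
    // Random polynomial of degree threshold - 1 whose constant term is the secret.
    var coeffs = [mod(secret)]
    for _ in 1..<threshold { coeffs.append(Int.random(in: 1..<p)) }
    return (1...count).map { x -> (x: Int, y: Int) in
        var y = 0
        var xpow = 1
        for c in coeffs {
            y = mod(y + mulmod(c, xpow))
            xpow = mulmod(xpow, x)
        }
        return (x: x, y: y)
    }
}

// Lagrange interpolation at x = 0 recovers the secret from >= threshold shares.
func reconstruct(shares: [(x: Int, y: Int)]) -> Int {
    var secret = 0
    for (i, si) in shares.enumerated() {
        var num = 1
        var den = 1
        for (j, sj) in shares.enumerated() where j != i {
            num = mulmod(num, mod(-sj.x))
            den = mulmod(den, mod(si.x - sj.x))
        }
        secret = mod(secret + mulmod(si.y, mulmod(num, invmod(den))))
    }
    return secret
}

// Toy walk-through. The "account secret" stands in for whatever would unlock
// voucher contents; each matched image contributes one share.
let accountSecret = 123_456_789
let threshold = 10   // made-up value; the real threshold is a parameter Apple controls
let vouchers = makeShares(secret: accountSecret, threshold: threshold, count: 25)

let belowThreshold = Array(vouchers.prefix(threshold - 1))
let atThreshold = Array(vouchers.prefix(threshold))
print(reconstruct(shares: belowThreshold) == accountSecret)  // false: below threshold, nothing learned
print(reconstruct(shares: atThreshold) == accountSecret)     // true: threshold crossed
```

In Apple’s description, crossing the threshold is what allows the matched content to be decrypted and reviewed; below it, the vouchers are supposed to be opaque. That is the property Neuenschwander is pointing to in the answer above.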