SecureMac, Inc.

Checklist 332: Privacy, Security, and WWDC23 Part II

June 15, 2023

Apple researching ways to combat deepfake threats. Help for developers on App Store Privacy Labels. More Vision Pro privacy features.

Apple and the future of AI scams

If you’ve followed this podcast for a while, you know we’ve been thinking a lot about the cybersecurity risks of AI. Checklist 329: Voice Clones and OS Updates included a discussion of how AI voice cloning could bypass voice verification systems at banks. Checklist 327: A.I. Dangers and Working Remote Securely covered the dangers of that same technology being used to perpetrate phone scams.

It seems that we’re not the only ones worried about AI: In a recent Fast Company interview, Apple Senior VP of Software Engineering Craig Federighi said that Cupertino is concerned as well, particularly about the rise of deepfakes.

Deepfakes are AI-generated audio and video that can make it look like anyone is saying or doing anything. As AI tools become more accessible in the years ahead, deepfakes could increasingly be used in so-called social engineering attacks, in which an attacker tricks a victim into handing over valuable data by making them believe they’re communicating with someone they’re not.

The good news is that Apple is already working on ways to protect users from such attacks. According to Federighi:

We want to do everything we can to make sure that we’re flagging [deepfake threats] in the future: Do we think we have a connection to the device of the person you think you’re talking to? These kinds of things. 

What’s in your app?

When developers build a new app, they almost never code it up from scratch. They use other people’s tools, software development kits (SDKs), and APIs to speed their work and improve their app’s performance.

This is a good thing: It enables better apps for end users and faster time-to-market for software development shops. 

But it also creates a fundamental challenge for Apple developers: If you’ve used someone else’s stuff in your app, how can you vouch for their privacy practices when it comes time to submit the App Store Privacy Label report?

Apple is aware of the issue and, according to a report from AppleInsider, has introduced two new tools to address it:

The first tool lets developers create a privacy manifest that details the privacy practices of the third-party SDKs they use. The manifest follows a single, standard format, which makes it easier for developers to determine what should be displayed on [their own] Privacy Label for the App Store.

The second tool is SDK signatures, which developers can use to “confirm that an SDK update or file they are downloading belongs to the same developer as the original SDK.”
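
For developers curious what such a manifest actually looks like, here is a minimal, illustrative sketch of a privacy manifest property list (Apple’s format is a plist file named PrivacyInfo.xcprivacy). The specific declarations shown, a crash-data entry collected for app functionality, are assumptions made for the sake of example rather than a description of any real SDK; Apple’s developer documentation is the authoritative reference for the exact keys and values.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Does the SDK use collected data to track users? (illustrative answer) -->
    <key>NSPrivacyTracking</key>
    <false/>
    <!-- What data does the SDK collect, and why? (illustrative example) -->
    <key>NSPrivacyCollectedDataTypes</key>
    <array>
        <dict>
            <key>NSPrivacyCollectedDataType</key>
            <string>NSPrivacyCollectedDataTypeCrashData</string>
            <key>NSPrivacyCollectedDataTypeLinked</key>
            <false/>
            <key>NSPrivacyCollectedDataTypeTracking</key>
            <false/>
            <key>NSPrivacyCollectedDataTypePurposes</key>
            <array>
                <string>NSPrivacyCollectedDataTypePurposeAppFunctionality</string>
            </array>
        </dict>
    </array>
</dict>
</plist>

The idea is that because every SDK describes its data collection in the same format, those declarations can be rolled up into a single report that the app developer consults when filling out the Privacy Label.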

This change mostly affects developers, but for everyday users it means that App Store Privacy Labels may be a bit more trustworthy in the future.

Vision Pro and non-user privacy

Apple’s new mixed-reality headset, Vision Pro, is the first big new Apple product category in years. And as we learn more about it, we’re finding that it’s very “on-brand” in terms of privacy.

On Checklist 331: Privacy, Security, and WWDC23 Part I, we discussed how Vision Pro protects user privacy. To recap, the new device relies on a nifty form of iris-based biometric authentication called Optic ID; processes eye movement data on-device for better privacy; and doesn’t allow apps to access detailed environmental data about the user’s surroundings.

But it seems that Vision Pro doesn’t only safeguard its users. It also protects the safety and privacy of people around them.

Specifically, the Vision Pro screen on the front of the headset will offer visual cues to let people know when a user is looking at them. In addition, it will make it clear when a user is taking a picture or shooting a video. That’s good privacy news for everyone!
