> In the After First Unlock (AFU) state, user data is decrypted

Note that this is a slight simplification, because I assume the full details are irrelevant to understanding the topic:
There are a few different keys [0] that can be chosen at this level of the encryption pipeline. The default one makes data available after first unlock, as described. But, as the developer, you can choose a key that, for example, makes your app's data unavailable any time the device is locked. Apple uses that one for the user's health data, and maybe other extra-sensitive stuff.

[0]: https://support.apple.com/guide/security/data-protection-cla...
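A rough sketch of how those classes behave (a toy model, not Apple's implementation; the `NSFileProtection*` identifiers are Apple's documented constants, but the states and the availability map here are simplified for illustration):

```python
# Toy model of iOS Data Protection classes: which device states leave a
# file's class key (and therefore the file) readable. Illustration only.

UNLOCKED, LOCKED_AFU, BFU = "unlocked", "locked_afu", "bfu"

ACCESSIBLE_IN = {
    # Keys evicted every time the device locks (used e.g. for Health data).
    "NSFileProtectionComplete": {UNLOCKED},
    # The default: keys stay resident from first unlock until reboot.
    "NSFileProtectionCompleteUntilFirstUserAuthentication": {UNLOCKED, LOCKED_AFU},
    # No protection beyond the hardware UID key.
    "NSFileProtectionNone": {UNLOCKED, LOCKED_AFU, BFU},
}

def accessible(protection_class: str, device_state: str) -> bool:
    """True if data in this class can be decrypted in this device state."""
    return device_state in ACCESSIBLE_IN[protection_class]
```

So in the AFU state the default class is readable but the stricter class is not, and in BFU neither is.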
How useful do you think this is in practice? Wouldn’t it rely on app-level memory scrubbing and page clearing and such as well, if you wanted to truly make sure it’s unavailable? Do Apple offer APIs to assist there?
To me the biggest takeaway is that Apple is sufficiently paranoid to add this feature. Some people (like John Gruber) advocate for activating bio lockout at the border by squeezing the volume and power buttons. I would say if you’re the type of person who would do this, you should go one step further and power off.
Similarly, if you’re in a situation where you cannot guarantee your phone’s security because it’s leaving your possession, and you’re sufficiently worried, again, power off fully.
> I would say if you’re the type of person who would do this, you should go one step further and power off.
I'd travel with a different device, honestly. I can get a new-in-box Android device for under £60 from a shop, travel with that, set it up properly on the other side, and then either leave it behind or wipe it again.

What do you do if you’re at the border and they demand both the physical device and the password?

Let’s assume “get back on the plane and leave” is not a viable option.

GrapheneOS duress password [1] and user profiles [2] are quite solid solutions for this scenario.

[1] https://grapheneos.org/features#duress

[2] https://grapheneos.org/features#improved-user-profiles

From the link:
> GrapheneOS provides users with the ability to set a duress PIN/Password that will irreversibly wipe the device (along with any installed eSIMs) once entered anywhere where the device credentials are requested (on the lockscreen, along with any such prompt in the OS).
In a border interrogation scenario, isn't that just likely to get you arrested for destroying evidence?
That’s a significantly higher bar. It’s not foolproof though.
I believe in most countries, customs can inspect your luggage. They can’t force you to reveal information that they’re not even certain you have.
In your situation, the best idea is to simply have a wiped device. A Chromebook, for example, allows you to log in with whatever credentials you choose, including a near-empty profile.
> I believe in most countries, customs can inspect your luggage. They can’t force you to reveal information that they’re not even certain you have.
this isn't a very useful way to think about it.
they can definitely search your luggage, obviously, but the border guards/immigration officials/random law enforcement people hanging around/etc can also just deny non-citizens entry to a country, usually for any or no reason.
there are documented cases of Australia [0] demanding to search phones of even citizens entering the country, and US CBP explicitly states it may deny entry to non-citizens who don't give them the password; while they can't deny entry to citizens, they state they may seize the device and then do whatever they want with it [1].

0: https://www.theguardian.com/world/2022/jan/18/returning-trav...

1: https://www.cbp.gov/travel/cbp-search-authority/border-searc...

You say no.
Or, with GrapheneOS, you give them the duress password, on the understanding that you will have to set the device up from scratch IF you ever see it again.

Burner phone.

Also, lockdown mode and pair locking your device. Pair locking, IIRC, is how you protect against Cellebrite-type attacks.

Doesn't the volume+power gesture transition into BFU, i.e. isn't it equivalent to power-cycling?
No. This is a myth, and while it does force you to enter your password instead of using biometrics on the next unlock, it is not the same as returning to BFU.
Great writeup, but I wonder why so much emphasis is put on the not-'connected to network' part. It seems like a timed inactivity reboot is a simpler idea than any type of inter-device communication scheme. It's not new either; GrapheneOS has had this for a while now, and the default is 18 hours (you can set it to 10 minutes), which would be a lot more effective as a countermeasure against data exfiltration tools.
This is because earlier reports coming out of law enforcement agencies suggested that the network was involved in making even older devices reboot. This blog post is an effort to debunk that claim.
If you’re targeting these evidence grabbing/device exploiting mobs, generally the phones get locked into a faraday cage to drop the mobile network so that they can’t receive a remote wipe request from iCloud.
Two questions:

1. Surely unconditionally rebooting locked iPhones every 3 days would cause issues in certain legit use cases?
2. If I read the article correctly, it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?
Bonus question: my Android phone would ask for my passcode (can't unlock with fingerprint or face) if it thinks it might be left unattended (a few hours without moving etc.), just like after rebooting. Is it different from "Before First Unlock" state? (I understand Android's "Before First Unlock" state could be fundamentally different from iPhone's to begin with).
> it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?
I think the reason is to make sure everything sensitive in RAM is wiped completely clean. Secrets like the passcode should only ever live in the Secure Enclave (the encryption keys held in RAM are derived from it), but a reboot wipes those derived keys too, plus any other sensitive data that might still be in memory.

As an extra bonus, I suppose iOS does integrity checks on boot too, so this could be a way to trigger those as well. A reboot seems like a "better safe than sorry" approach, which isn't a bad one.
Reboots don't typically wipe RAM, although wiping RAM is relatively easy if you are early enough in the boot process (or late enough in the shutdown process).
With ASLR and tons of activity happening during the boot process, it's almost guaranteed that you'll damage the keys you need. Plus, we don't know how shutdown processes are done. It might be wiping the keys clean before resetting the processor.
I'd expect that the RAM encryption key is regenerated each boot, so the RAM should be effectively wiped when the key from the previous boot is deleted from the memory controller.
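That intuition can be sketched with a toy model: encrypt "RAM" with a per-boot key held by the "memory controller", and observe that once the key changes, the old contents are unrecoverable. This is a stdlib-only stand-in using a SHA-256 keystream; real memory encryption uses dedicated AES hardware.

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Expand a key into a pseudorandom stream (toy stand-in for AES-CTR)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, key: bytes) -> bytes:
    """XOR data with the keystream; applying it twice is the identity."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

boot1_key = os.urandom(32)                  # generated fresh at each boot
ram_image = xor(b"secret class keys", boot1_key)

# Same boot: the controller transparently decrypts.
assert xor(ram_image, boot1_key) == b"secret class keys"

# After a reboot the controller holds a fresh key; the old image is noise.
boot2_key = os.urandom(32)
assert xor(ram_image, boot2_key) != b"secret class keys"
```

Deleting the old key from the controller is thus equivalent to wiping the RAM, without having to zero a single cell.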
The short answer to your last two questions is that “before first unlock” is a different state from requiring the PIN/passcode. On boot, the decryption keys for user profile data are not in memory, and aren’t available until they’re accessed from the security coprocessor via user input. The specifics depend on the device, but for Pixel devices running GrapheneOS you can get the gist of it here: https://grapheneos.org/faq#encryption
The important distinction is that, before you unlock your phone for the first time, there are no processes with access to your data. Afterwards, there are, even if you’re prompted for the full credentials to unlock, so an exploit could still shell the OS and, with privilege escalation, access your data.
Before first unlock, even a full device compromise does nothing, since all the keys are on the <flavor of security chip> and inaccessible without the PIN.
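A minimal sketch of that distinction (a toy model; `pbkdf2_hmac` with a fixed salt stands in for the Secure Enclave entangling the passcode with the hardware UID key, and everything else is made up for illustration):

```python
import hashlib

class ToyPhone:
    """Toy model of BFU vs AFU: user-data keys exist in RAM only after
    the passcode has been entered once, and vanish on reboot."""

    def __init__(self, passcode: str):
        self._salt = b"per-device-salt"   # stand-in for the hardware UID key
        self._verifier = hashlib.sha256(self._derive(passcode)).digest()
        self.ram_key = None               # empty RAM: BFU state

    def _derive(self, passcode: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(), self._salt, 10_000)

    def unlock(self, passcode: str) -> bool:
        key = self._derive(passcode)
        if hashlib.sha256(key).digest() == self._verifier:
            self.ram_key = key            # now AFU until the next reboot
            return True
        return False

    def read_user_data(self) -> str:
        # Lock-screen bypasses don't help in BFU: the key simply isn't there.
        if self.ram_key is None:
            raise PermissionError("BFU: decryption keys not in memory")
        return "decrypted user data"

    def reboot(self):
        self.ram_key = None               # back to BFU
```

Even a full OS compromise of `ToyPhone` in the BFU state finds nothing in `ram_key` to steal, which is the property the inactivity reboot restores.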
> If I read the article correctly, it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?
1. Getting there reliably can be hard (see the age-old discussions about zero-downtime OS updates vs rebooting), even more so if you must assume malware may be present on the system (how can you know that all that’s running is what you want to be running if you cannot trust the OS to tell you what processes are running?)
2. It may be faster to just reboot than to carefully bring back stuff.
> Why can't it just go into this state without rebooting?
Because the state of the phone isn't clean - there is information in RAM, including executing programs that will be sad if the disk volume their open files are stored on goes away.
If your goal is to get to the same secure state the phone is in when it first starts, why not just soft reboot?
> 1. surely unconditionally rebooting locked iPhones every 3 days would cause issues in certain legit use cases?
I wonder if this explains why the older iPhone I keep mounted to my monitor to use as a webcam keeps refusing to be a webcam so often lately and needing me to unlock it with my password...
Not a phone, but at my old apartment I used to have an iPad mounted on the wall. It was a dynamic weather display, Ring doorbell answerer, multimedia control, etc. Would suck if every 3 days I had to enter my passcode again.
I haven’t tested this, but I assume this wouldn’t occur if the device is fully unlocked and powered on. Most kiosk-adjacent deployments are set up so that they never turn the screen off and remain unlocked.
It is very different, as cryptographic systems can only assure a secure state through a known root-of-trust path to the state the device is in.

The big issue with most platforms out there (x86, multi-vendor, IBVs, etc.) is that you can't actually trust what your partners deliver. So the guarantee, or the delta between what's in your TEE/SGX and reality, is a lot messier than when you're Apple and have the SoC, SEP, iBoot stages, and kernel all measured and assured to levels only a vertically integrated manufacturer could know.

Most devices/companies/bundles just assume it kinda sucks and give up (TCG Opal, TPM, BitLocker: looking at you!) and make the most actually-secure methods optional so the bottom line doesn't get hit.
That means (for Android phones) your baseband and application processor, boot rom and boot loader might all be from different vendors with different levels of quality and maturity, and for most product lifecycles and brand reputation/trust/confidence, it mostly just needs to not get breached in the first year it's on the market and look somewhat good on the surface for the remaining 1 to 2 years while it's supported.
Google is of course trying hard to make the ecosystem hardened, secure, and maintainable (it has become feasible to get a lot of patches in without waiting on manufacturers or telcos for extended periods), including some standards for FDE and in-AOSP security options. But in almost all retail cases it is ultimately up to the individual manufacturer of the SoC and of the integrated device to make it actually secure, and most don't, since there is not a lot of ROI for them.

Even Intel's SGX is somewhat of a clown show... Samsung does try to implement their own, for example; I think KNOX is the brand name for both the software side and the hardware side, but I don't remember if that was strictly Exynos-only.

The supply chain for UEFI Secure Boot has similar problems, especially with the PKI and the rather large supply-chain attack surface. But even if that weren't such an issue, we still get "TEST BIOS DO NOT USE" firmware on production mainboards in retail. Security (and cryptography) is hard.
As for what the difference is in BFU/AFU etc. imagine it like: essentially some cryptographic material is no longer available to the live OS. Instead of hoping it gets cleared from all memory, it is a lot safer to assume it might be messed with by an attacker and drop all keys and reboot the device to a known disabled state. That way, without a user present, the SEP will not decrypt anything (and it would take a SEPROM exploit to start breaking in to the thing - nothing the OS could do about it, nor someone attacking the OS).
There is a compartmentalisation where different keys and keybags are dropped when the device is locked, hard locked, or BFU locked; the main difference between these states is how much of the system remains operational. It would suck if your phone stopped working as soon as you locked it (no more notifications, no background tasks like email and messaging, no more music, etc.).

On the other hand, it might be fine if everything that was running at the time of the lock keeps running, but no new crypto is allowed during the locked period. That means everything keeps working, but if an attacker were to try to access the container of an app that isn't open, it wouldn't work: not because of some permission check, but because the keys aren't available and the means to get the keys is cryptographically locked.
That is where the main difference lies with more modern security, keys (or mostly, KEKs - key encryption keys) are a pretty strong guarantee that someone can only perform some action if they have the keys to do it. There are no permissions to bypass, no logic bugs to exploit, no 'service mode' that bypasses security. The bugs that remain would all be HSM-type bugs, but SEP edition (if that makes sense).
Apple has some sort of flowchart to see what possible states a device and the cryptographic systems can be in, and how the assurance for those states work. I don't have it bookmarked but IIRC it was presented at Black Hat a year or so ago, and it is published in the platform security guide.
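The KEK idea from the comment above can be sketched like this (a toy wrap/unwrap using XOR with a hashed key, purely for illustration; real implementations use AES key wrap inside the SEP, and all names here are made up):

```python
import hashlib
import os

def wrap(kek: bytes, key: bytes) -> bytes:
    """Toy key-wrap: XOR the key with a digest of the KEK (not real AES-KW)."""
    pad = hashlib.sha256(kek).digest()[: len(key)]
    return bytes(a ^ b for a, b in zip(key, pad))

unwrap = wrap  # XOR is its own inverse

# Hierarchy: passcode-derived KEK -> class KEK -> per-file key.
passcode_kek = hashlib.pbkdf2_hmac("sha256", b"1234", b"uid-salt", 10_000)
class_kek = os.urandom(32)
file_key = os.urandom(32)

# At rest, only the wrapped forms are stored.
stored_class_kek = wrap(passcode_kek, class_kek)
stored_file_key = wrap(class_kek, file_key)

# With the passcode, the chain unwraps; without it there is no permission
# check to bypass, just ciphertext.
k1 = unwrap(hashlib.pbkdf2_hmac("sha256", b"1234", b"uid-salt", 10_000),
            stored_class_kek)
assert unwrap(k1, stored_file_key) == file_key

# "Locking" means dropping the unwrapped class KEK from memory; the wrapped
# copies alone are useless, so closed app containers stay sealed.
```

This is the point about keys versus permissions: denial of access falls out of the math, not out of access-control logic.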
> In law enforcement scenarios, a lot of the forensically relevant data is available in the AFU state. Law enforcement takes advantage of this and often keeps seized iPhones powered on, but isolated from the Internet, until they can extract data.
In Slovenia, devices have to be turned off by their owner the moment they are seized, prior to putting them into airplane mode.
I don't think people would be fine with being unable to power any electronic device down at need, even if they're not the owner.
It feels like something that needs to be as easy as possible, for safety reasons if not anything else.
Now what I'd like to see is an extension of their protocol that is used to locate iPhones that would also let them accept a "remote wipe" command, even when powered down.
You can definitely block airplane mode from being enabled without a passcode on iOS: I disabled access to Control Center when the iPhone is locked, so thieves won't be able to do so.
Slight mitigation to this is you can add an automation via the Shortcuts app to be triggered when airplane mode is enabled, and set the actions to immediately lock your device and disable airplane mode
Downside is that you need to manually disable the automation if you actually wish to use airplane mode (and also remember to re-enable it when done)
Wouldn't really say Apple is pushing the envelope here; as covered in the previous threads about this topic, a number of Android flavors did this long ago.
The power of defaults is not to be underestimated. Yes, you probably can do it with some Android distribution but the amount of people using that would be microscopic.
> Wouldn't really say Apple is pushing the envelope here
come on dude. they're doing it by default, for > billion people, with their army of lawyers sitting around waiting to defend lawsuits from shitty governments around the world.
If this is such a security benefit why not do it after 24 hours instead? How many people go that long without using their phones?
How many people are using their phones for some other purpose for which they want their phones to never reboot? And what are they actually doing with their phones?
I'm sure this is why but I had the same thought as GP. Under what circumstances would 24 hours be disruptive, but three days would be okay?
If you're using the iPhone as some type of IoT appliance, either time limit would be disruptive. But if you e.g. enable Guided Access, the phone will stay unlocked and so shouldn't reboot.
If you're using the iPhone as a phone, who the heck doesn't touch their phone in 24 hours? Maybe if you're on some phone-free camping trip and you just need the iPhone with you as an emergency backup—but in that case, I don't think Inactivity Reboot would be particularly disruptive.
How though? Users haven't used their phone in a day or more? How would they notice except for having to reenter their passcode which takes two seconds?
They have a long history of encrypting firmware. iBoot only recently started shipping decrypted, with the launch of PCC, and prior to iOS 10 the kernel was encrypted too.
The operating theory is that higher management at Apple sees this as a layer of protection. However, word on the street is that members of actual security teams at Apple want it to be unencrypted for the sake of research/openness.
Great post. They talked about the possibility of iOS 18 wirelessly telling other phones to reboot, but then afaik didn’t address that again. Maybe they did and I missed it?
They conclude that there's no wireless component to the feature.
This feature is not at all related to wireless activity. The law enforcement document's conclusion that the reboot is due to phones wirelessly communicating with each other is implausible. The older iPhones before iOS 18 likely rebooted due to another reason, such as a software bug.
If you think about it, if the attacker is sophisticated enough to break the phone within a 72 hour window, then they are definitely sophisticated enough to use a faraday container. So communication between phones wouldn't help very much.
Moreover, you'd have to have some inhibitory signal to prevent everybody's phones restarting in a crowded environment, but any such signal could be spoofed.
This may explain why, since iOS 18, my device randomly reboots (albeit it only takes 5 seconds max). I am a daily user, so perhaps the reboot I experience is a bug.
That might be what's informally called a "respring", where the SpringBoard process is restarted.
SpringBoard is the process that shows the home screen, and does part of the lifecycle management for regular user apps. (i.e. if you tap an icon, it launches the app, if you swipe it away in the app switcher, it closes the app)
It is restarted to make certain changes take effect, like the system language. In the jailbreaking days, it was also restarted to make certain tweaks take effect. Of course, it can also just crash for some reason (which is likely what is happening to you).
Hi, is there some further info on iOS "internals" like this? I was always interested in how it works, but I found much less information compared to Android (which obviously makes sense, given one is more or less open source), even though these probably don't fall into the secret category.
Because this way, the delay is parameterized within the Secure Enclave firmware by hard-coding it, which is a thing that only Apple can do.
If you were to allow a user to change it, you'd have to safeguard the channel by which the users' desired delay gets pushed into the SE against malicious use, which is inherently hard because that channel must be writable by the user. Therefore it opens up another attack surface by which the inactivity reboot feature itself might be attacked: if the thief could use an AFU exploit to tell the SE to only trigger the reboot after 300 days, the entire feature becomes useless.
It's not impossible to secure this - after all, changing the login credentials is such a critical channel as well - but it increases the cost to implement this feature significantly, and I can totally see the discussions around this feature coming to the conclusion that a sane, unchangeable default would be the better trade-off here.
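The trade-off can be illustrated with a sketch of the timer itself: the window is a constant baked into the "firmware", deliberately with no setter to attack (all names here are hypothetical, not Apple's):

```python
import time

class InactivityRebootTimer:
    """Toy model of a firmware-fixed inactivity reboot window."""

    WINDOW_SECONDS = 72 * 3600  # hard-coded: no API exists to change it

    def __init__(self, now: float):
        self._last_unlock = now

    def note_unlock(self, now: float):
        # The only input the timer accepts: a successful user unlock.
        self._last_unlock = now

    def should_reboot(self, now: float) -> bool:
        return now - self._last_unlock >= self.WINDOW_SECONDS

t0 = time.time()
timer = InactivityRebootTimer(t0)
assert not timer.should_reboot(t0 + 71 * 3600)   # still inside the window
assert timer.should_reboot(t0 + 73 * 3600)       # 72h elapsed: reboot
```

The absence of any `set_window()` method is the point: there is no channel an AFU exploit could use to stretch the deadline.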
Apple's whole thing is offering whatever they think is a good default over configuration. I can't even begin to count all the things I wish were configurable on iOS and macOS, but aren't. Makes for a smooth user experience, sure, but is also frustrating if you're a power user.
Reboot is not enforced by the SEP, though, only requested. It’s a kernel module, which means if a kernel exploit is found, this could be stopped.
However, considering Apple’s excellent track record on these kinds of security measures, I would not at all be surprised to find out that a next-generation iPhone has the SEP force a reboot without the kernel’s involvement.
what this does is reduce the window of time (to three days) between when an iOS device is captured and when a usable* kernel exploit is developed.
* there is almost certainly a known kernel exploit out in the wild, but the agencies that have it generally reserve using them until they really need to - or they’re patched. If you have a captured phone used in a, for example, low stakes insurance fraud case, it’s not at all worth revealing your ownership of a kernel exploit.
Once an exploit is “burned”, they distribute them out to agencies and all affected devices are unlocked at once. This now means that kernel exploits must be deployed within three days, and it’s going to preserve the privacy of a lot of people.
In GrapheneOS, you can set it to as little as 10 minutes, with the default being 18 hours. That would be a lot more effective for this type of data exfiltration scenario.
You clearly haven't tried it or even googled it - because it's impossible to do it unattended. A dialog pops up (and only when unlocked) asking you to confirm the reboot. It's probably because they were worried users might end up in a constant reboot/shutdown cycle, though presumably they could just implement a "if rebooted in the last hour by a script, don't allow it again" rule.
Or to disable it entirely.
Someone could set up an iPad to do something always plugged in; it would be bloody annoying to have it locked cold every three days.
I’m not sure, but I wouldn’t expect the inactivity timeout to trigger if the device was already in an unlocked state (if I understand the feature correctly) so in kiosk mode or with the auto screen lock turned off and an app open I wouldn’t expect it to happen.
Having to put your passcode in every three days is not the end of the world. It would make sense also that if you turned off the passcode entirely it also wouldn’t restart.
> With Screen Time, you can turn on Content & Privacy Restrictions to manage content, apps, and settings on your child's device. You can also restrict explicit content, purchases and downloads, and changes to privacy settings.
Conspiracy theory time! Apple puts this out there to break iPad-based DIY home control panels because they're about to release a product that would compete with them.
> Apple puts this out there to break iPad-based DIY home control panels
If you were using an iPad as a home control panel, you'd probably disable the passcode on it entirely - and I believe that'd disable the inactivity reboot as well.
That's only because the kernel tells userland to reboot. If the kernel is compromised, they can stop it from telling userland to reboot and stop the kernel from panicking.
Kernel exploits would let someone bypass the lockscreen and access all the data they want immediately, unless I'm missing something. Why would you even need to disable the reboot timer in this case?
Hypothetically, I suppose there's value in disabling the timer if you're, for example, waiting for a SEP exploit that only works in the AFU state?
But, I don't know where the idea of disabling a reboot timer came in? I'm only simply saying that now, you have to have a kernel exploit on hand, or expect to have one within three days - a very tall order indeed.
> * there is almost certainly a known kernel exploit out in the wild, but the agencies that have it generally reserve using them until they really need to - or they’re patched.
There's literally emails from police investigators spreading word about the reboots, which state that the device goes from them being able to extract data while in AFU, to them not being able to get anything out of the device in BFU state.
It's a bit pointless, IMHO. All cops will do is make sure they have a search warrant lined up to start AFU extraction right away, or submit warrant requests with urgent/emergency status.
I sort of addressed this in my original comment, but local police likely do not have access to an AFU vuln, and generally get it after it’s been patched. Then they go on an unlocking spree. This prevents that.
> Reboot is not enforced by the SEP, though, only requested. It’s a kernel module, which means if a kernel exploit is found, this could be stopped.
True. I wonder if they've considered the SEP taking a more active role in filesystem decryption. If the kernel had to be reauthenticated periodically (think oauth's refresh token) maybe SEP could stop data exfiltration after the expiry even without a reboot.
Maybe it would be too much of a bottleneck; interesting to think about though.
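One way to picture that refresh-token idea (entirely hypothetical; nothing suggests the SEP works this way today, and all names are made up):

```python
class ToySEP:
    """Toy SEP that grants the kernel a time-limited decryption lease."""

    LEASE_SECONDS = 3600  # how long a grant lasts before reauthentication

    def __init__(self):
        self._lease_expiry = 0.0   # no lease until first authentication

    def reauthenticate(self, now: float, credential_ok: bool):
        # In the real-world analogue, this would require user presence
        # (passcode or biometrics), which an exploit cannot fake.
        if credential_ok:
            self._lease_expiry = now + self.LEASE_SECONDS

    def decrypt(self, now: float, blob: bytes) -> bytes:
        if now >= self._lease_expiry:
            raise PermissionError("lease expired: reauthentication required")
        return blob[::-1]          # stand-in for real decryption

sep = ToySEP()
sep.reauthenticate(now=0.0, credential_ok=True)
assert sep.decrypt(now=10.0, blob=b"sterces") == b"secrets"
try:
    sep.decrypt(now=4000.0, blob=b"sterces")   # past expiry: refused
except PermissionError:
    pass
```

After expiry the SEP simply stops servicing decryption requests, so even a live kernel compromise would be rate-limited by how much it had already decrypted.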
> If the kernel had to be reauthenticated periodically (think oauth's refresh token)
If the kernel is compromised, this is pointless I think. You could just "fake it".
SEP is already very active in filesystem encryption. The real important thing is evicting all sensitive information from memory. Reboot is the simplest and most effective, and the end result is the same.
We actually do know: it cannot, directly*. What it could do is functionally disable RAM, but that would basically cause the phone to hard lock and could even cause data corruption in some limited cases.
This is still being actively researched. I have no evidence, but would not be surprised to find out that a SEP update has been pushed that causes it to pull RAM keys after the kernel panic window has closed.
* This may have been changed since the last major writeup came out for the iPhone 11.
If I were looking for low hanging fruit, I suspect it wouldn’t reboot if you were to replicate the user’s home WiFi environment in the faraday cage, sans internet connection of course. Or repeatedly initializing the camera from the lock screen.
> Turns out, the inactivity reboot triggers exactly after 3 days (72 hours). The iPhone would do so despite being connected to Wi-Fi. This confirms my suspicion that this feature had nothing to do with wireless connectivity.
How do these things work with devices inside a NAT gateway? Most of our devices are inside a LAN. Even if a server gets started, it won't be visible to the outside world, unless we play with the modem settings.
Now, a hacker or state actor who has penetrated a device can upload data from the local device to a C&C server.

But that seems risky, as you need to do it again and again. Or do they just get into your device once and upload everything to the C&C?
>In the After First Unlock (AFU) state, user data is decrypted
Note that this is a slight simplification because, I assume, the reality is irrelevant to understanding the topic:
There are a few different keys [0] that can be chosen at this level of the encryption pipeline. The default one makes data available after first unlock, as described. But, as the developer, you can choose a key that, for example, makes your app's data unavailable any time the device is locked. Apple uses that one for the user's health data, and maybe other extra-sensitive stuff.
[0]: https://support.apple.com/guide/security/data-protection-cla...
How useful do you think this is in practice? Wouldn’t it rely on app-level memory scrubbing and page clearing and such as well, if you wanted to truly make sure it’s unavailable? Do Apple offer APIs to assist there?
To me the biggest takeaway is that Apple is sufficiently paranoid to add this feature. Some people (like John Gruber) advocate for activating bio lockout at the border by squeezing the volume and power buttons. I would say if you’re the type of person who would do this, you should go one step further and power off.
Similarly, if you’re in a situation where you cannot guarantee your phone’s security because it’s leaving your possession, and you’re sufficiently worried, again, power off fully.
> I would say if you’re the type of person who would do this, you should go one step further and power off.
I'd travel with a different device, honestly. I can get a new-in-box android device for under £60 from a shop, travel with that, set it up properly on the other side, and then either leave it behind or wipe it again.
What do you do if you’re at the border and they demand both the physical device and the password?
Let’s assume “get back on the plane and leave” is not a viable option.
GrapheneOS duress password [1] and user profiles [2] are quite solid solutions for this scenario
[1] https://grapheneos.org/features#duress
[2] https://grapheneos.org/features#improved-user-profiles
From the link:
> GrapheneOS provides users with the ability to set a duress PIN/Password that will irreversibly wipe the device (along with any installed eSIMs) once entered anywhere where the device credentials are requested (on the lockscreen, along with any such prompt in the OS).
In a border interrogation scenario, isn't that just likely to get you arrested for destroying evidence?
That’s a significantly higher bar. It’s not foolproof though.
I believe in most countries, customs can inspect your luggage. They can’t force you to reveal information that they’re not even certain you have.
Under your situation, the best idea is to simply have a wiped device. A Chromebook, for example, allows you to login with whatever credentials you choose, including a near empty profile
> I believe in most countries, customs can inspect your luggage. They can’t force you to reveal information that they’re not even certain you have.
this isn't a very useful way to think about it.
they can definitely search your luggage, obviously, but the border guards/immigration officials/random law enforcement people hanging around/etc can also just deny non-citizens entry to a country, usually for any or no reason.
there's documented cases of Australia[0] demanding to search phones of even citizens entering the country, and the US CBP explicitly states they may deny entry for non citizens if you don't give them the password and while they can't deny entry to citizens, they state they may seize the device then do whatever they want to it[1].
0: https://www.theguardian.com/world/2022/jan/18/returning-trav...
1: https://www.cbp.gov/travel/cbp-search-authority/border-searc...
You say no.
Or, with GrapheneOS, you give them the duress password, on the understanding that you will have to set the device up from scratch IF you ever see it again.
Burner phone
Also, lockdown mode and pair locking your device. Pair locking iirc is how you protect against cellubrite type attacks
Doesn't the volume+power gesture transition into BFU, i.e. be equivalent to power-cycling?
No. This is a myth, and while it does force you to enter your password instead of using biometrics on the next unlock, it is not the same as returning to BFU.
Great writeup, but I wonder why so much emphasis is put on not 'connected to network' part. It seems like a timed inactivity reboot is a simpler idea than any type of inter-device communication schemes. It's not new either; Grapheneos had this for a while now and the default is 18 hours (and you can set it to 10 minutes) which would be a lot more effective as a countermeasure against data exfiltration tools.
This is because earlier reports coming out of law enforcement agencies suggested that the network was involved in making even older devices reboot. This blog post is an effort to debunk that claim.
If you’re targeting these evidence grabbing/device exploiting mobs, generally the phones get locked into a faraday cage to drop the mobile network so that they can’t receive a remote wipe request from iCloud.
Two questions:
1. surely unconditionally rebooting locked iPhones every 3 days would cause issues in certain legit use cases?
2. If I read the article correctly, it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?
Bonus question: my Android phone would ask for my passcode (can't unlock with fingerprint or face) if it thinks it might be left unattended (a few hours without moving etc.), just like after rebooting. Is it different from "Before First Unlock" state? (I understand Android's "Before First Unlock" state could be fundamentally different from iPhone's to begin with).
> it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?
I think the reason is to make sure anything left in RAM is wiped completely. Secrets like the passcode live in the Secure Enclave, but the encryption keys derived from them sit in RAM, and a reboot wipes those along with any other sensitive data that might still be in memory.
As an extra bonus, I suppose iOS does integrity checks on boot too, so this could be a way to trigger those as well. A reboot seems like a "better safe than sorry" approach, which isn't a bad one.
Reboots don't typically wipe RAM, although wiping RAM is relatively easy if you do it early enough in the boot process (or late enough in the shutdown process).
With ASLR and tons of activity happening during the boot process, it's almost guaranteed that you'll damage the keys you need. Plus, we don't know how shutdown processes are done. It might be wiping the keys clean before resetting the processor.
I'd expect that the RAM encryption key is regenerated each boot, so the RAM should be effectively wiped when the key from the previous boot is deleted from the memory controller.
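That point can be illustrated with a toy sketch (hypothetical Python; the SHA-256-based keystream is a stand-in for a real memory-encryption cipher, not an actual secure construction): once the per-boot key is gone, the old RAM contents are just noise.

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    # Toy counter-mode keystream built from SHA-256 (illustration only,
    # not a real cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# The memory controller generates a fresh key at every boot.
boot_key = os.urandom(32)
ram_plaintext = b"secrets held in RAM while the phone is AFU"
ram_ciphertext = xor(ram_plaintext, keystream(boot_key, len(ram_plaintext)))

# With the key, the "RAM" reads back normally...
recovered = xor(ram_ciphertext, keystream(boot_key, len(ram_ciphertext)))
assert recovered == ram_plaintext
# ...but after a reboot the old key is deleted from the memory controller,
# and without it the leftover ciphertext is indistinguishable from noise --
# the previous boot's RAM is effectively wiped without touching every cell.
```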
The short answer to your last two questions is that “before first unlock” is a different state from requiring the PIN/passcode. On boot, the decryption keys for user profile data are not in memory, and aren’t available until they’re accessed from the security coprocessor via user input. The specifics depend on the device, but for Pixel devices running GrapheneOS you can get the gist of it here: https://grapheneos.org/faq#encryption
The important distinction is that, before you unlock your phone for the first time, there are no processes with access to your data. Afterwards, there are, even if you’re prompted for the full credentials to unlock, so an exploit could still shell the OS and, with privilege escalation, access your data.
Before first unlock, even a full device compromise does nothing, since all the keys are on the <flavor of security chip> and inaccessible without the PIN.
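The "no usable keys before first unlock" property can be sketched with a toy wrap/unwrap model (everything here is illustrative and not Apple's actual scheme: the PBKDF2 call stands in for a slow, hardware-entangled KDF inside the SEP, and XOR stands in for real key wrapping):

```python
import hashlib
import os

def derive_kek(passcode: str, salt: bytes) -> bytes:
    # Real designs use a slow KDF entangled with a per-device hardware
    # secret; PBKDF2 is just a stand-in here.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

def wrap(kek: bytes, data_key: bytes) -> bytes:
    # Toy "wrap": XOR with a KEK-derived pad. Wrapping and unwrapping are
    # the same operation because XOR is its own inverse.
    pad = hashlib.sha256(kek).digest()
    return bytes(a ^ b for a, b in zip(data_key, pad))

salt = os.urandom(16)
data_key = os.urandom(32)
wrapped = wrap(derive_kek("1234", salt), data_key)

# BFU: only `wrapped` and `salt` exist anywhere the OS can see -- useless
# without the passcode, so even a full OS compromise yields nothing.
# AFU: after one successful unlock, `data_key` is unwrapped and cached in
# memory, which is exactly the state forensic tools target.
unwrapped = wrap(derive_kek("1234", salt), wrapped)
assert unwrapped == data_key
```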
> If I read the article correctly, it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?
1. Getting there reliably can be hard (see the age-old discussions about zero-downtime OS updates vs rebooting), even more so if you must assume malware may be present on the system (how can you know that all that’s running is what you want to be running if you cannot trust the OS to tell you what processes are running?)
2. It may be faster to just reboot than to carefully bring back stuff.
> Why can't it just go into this state without rebooting?
Because the state of the phone isn't clean - there is information in RAM, including executing programs that will be sad if the disk volume their open files are stored on goes away.
If your goal is to get to the same secure state the phone is in when it first starts, why not just soft reboot?
This also clears out deeper OS rootkits if they couldn't achieve reboot persistence, which is not uncommon.
> 1. surely unconditionally rebooting locked iPhones every 3 days would cause issues in certain legit use cases?
I wonder if this explains why the older iPhone I keep mounted to my monitor to use as a webcam keeps refusing to be a webcam so often lately and needing me to unlock it with my password...
I have the same setup and what works for me is putting the phone into Supervised mode using the Apple Configurator.
From there, you can enable single app mode to lock it into the app you're using for the webcam (I use Camo).
What legit use case involves not touching your phone at all for 3 days?
Not a phone, but at my old apartment I used to have an iPad mounted on the wall. It was a dynamic weather display, Ring doorbell answerer, multimedia control, etc. Would suck if every 3 days I had to enter my passcode again.
I haven’t tested this, but I assume this wouldn’t occur if the device is fully unlocked and powered on. Most kiosk-adjacent deployments are set up so that they never turn the screen off and remain unlocked.
Maybe you want people to be able to reach you on a secondary, inbound-only phone number.
I’ve also heard people re-purpose old phones (with their batteries disconnected, hopefully) as tiny home servers or informational displays.
It is very different, as the cryptographic systems can only assure a secure state via a known root-of-trust path to the state the device is in.
The big issue with most platforms out there (x86, multi-vendor, IBVs etc.) is that you can't actually trust what your partners deliver. So the guarantee, or delta, between what's in your TEE/SGX and reality is a lot messier than when you're Apple and have the SoC, SEP, iBoot stages and kernel all measured and assured to a degree only a vertically integrated manufacturer could achieve.
Most devices/companies/bundles just assume it kinda sucks and give up (TCG Opal, TPM, BitLocker: looking at you!) and make most actually-secure methods optional so the bottom line doesn't get hit.
That means (for Android phones) your baseband and application processor, boot rom and boot loader might all be from different vendors with different levels of quality and maturity, and for most product lifecycles and brand reputation/trust/confidence, it mostly just needs to not get breached in the first year it's on the market and look somewhat good on the surface for the remaining 1 to 2 years while it's supported.
Google is of course trying hard to make the ecosystem hardened, secure and maintainable (it has become feasible to land a lot of patches without waiting on manufacturers or telcos for extended periods), including some standards for FDE and in-AOSP security options. But in almost all retail cases it is ultimately up to the individual manufacturer of the SoC and of the integrated device to make it actually secure, and most don't, since there is not a lot of ROI in it for them.

Even Intel's SGX is somewhat of a clown show. Samsung does try to implement their own, for example; I think KNOX is the brand name for both the software side and the hardware side, but I don't remember if that was strictly Exynos-only.

The supply chain for UEFI Secure Boot has similar problems, especially with the PKI and the rather large supply-chain attack surface. But even if that weren't such an issue, we still get "TEST BIOS DO NOT USE" firmware on production mainboards in retail. Security (and cryptography) is hard.
As for the difference between BFU/AFU etc., imagine it like this: essentially, some cryptographic material is no longer available to the live OS. Instead of hoping it gets cleared from all memory, it is a lot safer to assume it might be messed with by an attacker, drop all keys, and reboot the device to a known disabled state. That way, without a user present, the SEP will not decrypt anything (and it would take a SEPROM exploit to start breaking into the thing - nothing the OS could do about it, nor someone attacking the OS).
There is a compartmentalisation where some keys and keybags are dropped when locked, hard locked and BFU locked; the main difference between these states is how much stuff remains operational. It would suck if your phone stopped working as soon as you locked it (no more notifications, background tasks like email and messaging, no more music, etc.).
On the other hand, it might be fine if everything that was running at the time of the lock keeps running, but no new crypto is allowed during the locked period. That means everything keeps working, but if an attacker were to try to access the container of an app that isn't open, it wouldn't work - not because of some permissions check, but because the keys aren't available and the means to get the keys is cryptographically locked.
That is where the main difference lies with more modern security: keys (or mostly KEKs - key encryption keys) are a pretty strong guarantee that someone can only perform an action if they hold the keys for it. There are no permissions to bypass, no logic bugs to exploit, no "service mode" that sidesteps security. The bugs that remain would all be HSM-type bugs, but SEP edition (if that makes sense).
Apple has some sort of flowchart to see what possible states a device and the cryptographic systems can be in, and how the assurance for those states work. I don't have it bookmarked but IIRC it was presented at Black Hat a year or so ago, and it is published in the platform security guide.
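A toy sketch of that compartmentalisation idea (the class names and `ToyKeybag` are made up for illustration; Apple's real keybag design is more elaborate): locking evicts only the most sensitive class key, so already-running apps keep working while new unwraps for that class fail.

```python
import os

class ToyKeybag:
    """Toy model of data-protection classes (not Apple's real design)."""

    def __init__(self):
        # One KEK per protection class.
        self.class_keys = {
            "complete": os.urandom(32),           # dropped whenever locked
            "until_first_unlock": os.urandom(32), # kept while AFU
        }

    def unwrap_file_key(self, protection_class: str) -> bytes:
        # Unwrapping requires the class KEK to be present; the random
        # bytes returned here stand in for the unwrapped per-file key.
        kek = self.class_keys.get(protection_class)
        if kek is None:
            raise PermissionError(f"class key '{protection_class}' unavailable")
        return os.urandom(32)

    def lock(self):
        # Locking evicts only the most sensitive class key.
        self.class_keys.pop("complete", None)

bag = ToyKeybag()
open_file_key = bag.unwrap_file_key("complete")  # unwrapped while unlocked
bag.lock()

# Already-unwrapped keys keep working (open apps stay functional)...
assert len(open_file_key) == 32
# ...but no *new* unwraps for the sensitive class are possible, while the
# "until first unlock" class keeps servicing background tasks.
```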
> In law enforcement scenarios, a lot of the forensically relevant data is available in the AFU state. Law enforcement takes advantage of this and often keeps seized iPhones powered on, but isolated from the Internet, until they can extract data.
In Slovenia, devices have to be turned off the moment they are seized from their owner, rather than merely being put into airplane mode.
Also when thieves or muggers rob someone, the first thing they do is turn on Airplane Mode or force power-off.
WHY the hell don't those actions require a passcode or bio authentication??
I don't think people would be fine with being unable to power any electronic device down at need, even if they're not the owner.
It feels like something that needs to be as easy as possible, for safety reasons if not anything else.
Now what I'd like to see is an extension of their protocol that is used to locate iPhones that would also let them accept a "remote wipe" command, even when powered down.
They could just put it in a foil-lined pocket instead.
You need to be able to forcibly power off the phone when it's frozen.
You can definitely block Airplane Mode without a passcode on iOS. I disabled access to Control Center when the iPhone is locked, so thieves won't be able to toggle it.
This doesn't work if they steal it out your hand while it's unlocked.
Slight mitigation to this is you can add an automation via the Shortcuts app to be triggered when airplane mode is enabled, and set the actions to immediately lock your device and disable airplane mode
Downside is that you need to manually disable the automation if you actually wish to use airplane mode (and also remember to re-enable it when done)
Great writeup! And it's good to see Apple pushing the envelope on device security.
Wouldn't really say Apple is pushing the envelope here; as covered in the previous threads about this topic, a number of Android flavors did this long ago.
The power of defaults is not to be underestimated. Yes, you probably can do it with some Android distribution but the amount of people using that would be microscopic.
> Wouldn't really say Apple is pushing the envelope here
come on dude. they're doing it by default, for > billion people, with their army of lawyers sitting around waiting to defend lawsuits from shitty governments around the world.
If this is such a security benefit why not do it after 24 hours instead? How many people go that long without using their phones?
How many people are using their phones for some other purpose for which they want their phones to never reboot? And what are they actually doing with their phones?
Because it harms the user experience.
I'm sure this is why but I had the same thought as GP. Under what circumstances would 24 hours be disruptive, but three days would be okay?
If you're using the iPhone as some type of IoT appliance, either time limit would be disruptive. But if you e.g. enable Guided Access, the phone will stay unlocked and so shouldn't reboot.
If you're using the iPhone as a phone, who the heck doesn't touch their phone in 24 hours? Maybe if you're on some phone-free camping trip and you just need the iPhone with you as an emergency backup—but in that case, I don't think Inactivity Reboot would be particularly disruptive.
Maybe Apple will lower the window over time?
How though? Users haven't used their phone in a day or more? How would they notice except for having to reenter their passcode which takes two seconds?
Not being able to glance at any push notifications or get incoming caller ID would be pretty disruptive.
Read the introduction.
Does anyone have insight into why Apple encrypts SEP firmware? Clearly it’s not critical to their security model so maybe just for IP protection?
They have a long history of encrypting firmware. iBoot only recently started shipping unencrypted, with the launch of PCC, and prior to iOS 10 the kernel was encrypted too.
The operating theory is that higher management at Apple sees this as a layer of protection. However, word on the street is that members of actual security teams at Apple want it to be unencrypted for the sake of research/openness.
Someone high up is an idiot presumably
Great post. They talked about the possibility of iOS 18 wirelessly telling other phones to reboot, but then afaik didn’t address that again. Maybe they did and I missed it?
They conclude that there's no wireless component to the feature.
This feature is not at all related to wireless activity. The law enforcement document's conclusion that the reboot is due to phones wirelessly communicating with each other is implausible. The older iPhones before iOS 18 likely rebooted due to another reason, such as a software bug.
If you think about it, if the attacker is sophisticated enough to break the phone within a 72 hour window, then they are definitely sophisticated enough to use a faraday container. So communication between phones wouldn't help very much.
Moreover, you'd have to have some inhibitory signal to prevent everybody's phones restarting in a crowded environment, but any such signal could be spoofed.
This may explain why, since iOS 18, my device randomly reboots (albeit it only takes 5 seconds at most). I am a daily user, so perhaps the reboot I experience is a bug.
I always assumed that it was memory reaching capacity or routine cleanup more than a reboot. This often happened to me after intensive use
If it takes only 5 seconds, it doesn’t sound like a reboot. Does it show a black screen and the apple logo during this event?
No Apple logo, just black screen with loading spinner followed by requiring passcode to unlock
That might be what's informally called a "respring", where the SpringBoard process is restarted.
SpringBoard is the process that shows the home screen, and does part of the lifecycle management for regular user apps. (i.e. if you tap an icon, it launches the app, if you swipe it away in the app switcher, it closes the app)
It is restarted to make certain changes take effect, like the system language. In the jailbreaking days, it was also restarted to make certain tweaks take effect. Of course, it can also just crash for some reason (which is likely what is happening to you)
Hi, is there some further info on iOS "internals" like this? I was always interested in how it works, but I found much less information compared to android (which obviously makes sense given one is more or less open-source), even though these probably don't fall in the secret category.
mine used to do that when the battery needed replacement
Yes, lots of complaints on forums about this bug. Saw it happen to my phone today.
I haven't read the whole thing, but from skimming the beginning, this is pretty similar to how AOSP's BFU vs AFU unlock works.
thank you for such a great writeup, this is an excellent breakdown!
My question is: why three days specifically instead of a user-configurable delay?
Because this way, the delay is hard-coded into the Secure Enclave firmware, which is something only Apple can change.
If you were to allow a user to change it, you'd have to safeguard the channel by which the users' desired delay gets pushed into the SE against malicious use, which is inherently hard because that channel must be writable by the user. Therefore it opens up another attack surface by which the inactivity reboot feature itself might be attacked: if the thief could use an AFU exploit to tell the SE to only trigger the reboot after 300 days, the entire feature becomes useless.
It's not impossible to secure this - after all, changing the login credentials is such a critical channel as well - but it increases the cost to implement this feature significantly, and I can totally see the discussions around this feature coming to the conclusion that a sane, unchangeable default would be the better trade-off here.
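That trade-off can be caricatured in a few lines (entirely hypothetical; `ToyInactivityTimer` is not a real API, just a sketch of the reasoning above): the moment the delay becomes writable from outside the enclave, it becomes a target.

```python
class ToyInactivityTimer:
    """Toy sketch: why a hard-coded delay is safer than a writable one."""

    HARD_CODED_HOURS = 72  # baked into the (hypothetical) SEP firmware

    def __init__(self, configurable: bool):
        self.configurable = configurable
        self.delay_hours = self.HARD_CODED_HOURS

    def set_delay(self, hours: int) -> bool:
        # A configurable delay means there is a channel that an AFU
        # exploit could abuse to push the reboot far into the future.
        if not self.configurable:
            return False  # no such channel exists; nothing to attack
        self.delay_hours = hours
        return True

# Hard-coded: an attacker with OS-level control still can't move the timer.
fixed = ToyInactivityTimer(configurable=False)
assert not fixed.set_delay(300 * 24)
assert fixed.delay_hours == 72

# Configurable: the same attacker neuters the feature entirely.
writable = ToyInactivityTimer(configurable=True)
assert writable.set_delay(300 * 24)
```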
Apple's whole thing is offering whatever they think is a good default over configuration. I can't even begin to count all the things I wish were configurable on iOS and macOS, but aren't. Makes for a smooth user experience, sure, but is also frustrating if you're a power user.
More security theatre.
Elaborate.
I suspected this was being managed in the Secure Enclave.
That means it's going to be extremely difficult to disable this even if iOS is fully compromised.
If I’m reading this right:
Reboot is not enforced by the SEP, though, only requested. It’s a kernel module, which means if a kernel exploit is found, this could be stopped.
However, considering Apple’s excellent track record on these kinds of security measures, I would not at all be surprised to find out that a next-generation iPhone will have the SEP force a reboot without the kernel's involvement.
What this does is reduce the window of time (to three days) between when an iOS device is captured and when a usable* kernel exploit is developed.
* there is almost certainly a known kernel exploit out in the wild, but the agencies that have it generally reserve using them until they really need to - or they’re patched. If you have a captured phone used in a, for example, low stakes insurance fraud case, it’s not at all worth revealing your ownership of a kernel exploit.
Once an exploit is “burned”, they distribute them out to agencies and all affected devices are unlocked at once. This now means that kernel exploits must be deployed within three days, and it’s going to preserve the privacy of a lot of people.
Would be nice if Apple would expose an option to set the timer to a shorter window, but still great work.
In GrapheneOS, you can set it to as little as 10 minutes, with the default being 18 hours. That would be a lot more effective for this type of data exfiltration scenario.
You can do this yourself with Shortcuts app.
Create a timed automation that runs a shutdown on the interval you choose, and change the shutdown action to "restart".
You clearly haven't tried it or even googled it - because it's impossible to do it unattended. A dialog pops up (and only when unlocked) asking you to confirm the reboot. It's probably because they were worried users might end up in a constant reboot/shutdown cycle, though presumably they could just implement a "if rebooted in the last hour by a script, don't allow it again" rule.
Or to disable it entirely. Someone could set up an iPad to do something while always plugged in; it would be bloody annoying to have it locked cold every three days.
I’m not sure, but I wouldn’t expect the inactivity timeout to trigger if the device was already in an unlocked state (if I understand the feature correctly), so in kiosk mode, or with auto screen lock turned off and an app open, I wouldn’t expect it to happen.
Maybe you want it locked and only showing notification headers.
Having to put your passcode in every three days is not the end of the world. It would make sense also that if you turned off the passcode entirely it also wouldn’t restart.
I'd rather have a dedicated Kiosk mode that has a profile of allow-listed applications and one or more that are auto-started.
Maybe one or two of these will do what you want?
https://support.apple.com/en-us/105121
> With Screen Time, you can turn on Content & Privacy Restrictions to manage content, apps, and settings on your child's device. You can also restrict explicit content, purchases and downloads, and changes to privacy settings.
https://support.apple.com/en-us/111795
> Guided Access limits your device to a single app and lets you control which features are available.
Or "single-app mode", which is a more tightly focused kiosk mode:
https://support.apple.com/guide/apple-configurator-mac/start...
Conspiracy theory time! Apple puts this out there to break iPad-based DIY home control panels because they're about to release a product that would compete with them.
> Apple puts this out there to break iPad-based DIY home control panels
If you were using an iPad as a home control panel, you'd probably disable the passcode on it entirely - and I believe that'd disable the inactivity reboot as well.
You could also set the auto-lock in display settings to never.
It’s more likely than you think!
> Apple's Next Device Is an AI Wall Tablet for Home Control, Siri and Video Calls
https://news.ycombinator.com/item?id=42119559
via
> Apple's Tim Cook Has Ways to Cope with the Looming Trump Tariffs
https://news.ycombinator.com/item?id=42168808
If the reboot doesn’t happen, the kernel panics; at least that’s what the article says.
That's only because the kernel tells userland to reboot. If the kernel is compromised, they can stop it from telling userland to reboot and prevent the kernel from panicking.
Kernel exploits would let someone bypass the lockscreen and access all the data they want immediately, unless I'm missing something. Why would you even need to disable the reboot timer in this case?
Hypothetically, I suppose there's value in disabling the timer if you're, for example, waiting for a SEP exploit that only works in an AFU state?
But I don't know where the idea of disabling a reboot timer came in. I'm simply saying that now you have to have a kernel exploit on hand, or expect to have one within three days - a very tall order indeed.
> * there is almost certainly a known kernel exploit out in the wild, but the agencies that have it generally reserve using them until they really need to - or they’re patched.
There's literally emails from police investigators spreading word about the reboots, which state that the device goes from them being able to extract data while in AFU, to them not being able to get anything out of the device in BFU state.
It's a bit pointless, IMHO. All cops will do is make sure they have a search warrant lined up to start AFU extraction right away, or submit warrant requests with urgent/emergency status.
I sort of addressed this in my original comment, but local police likely do not have access to an AFU vuln, and generally get it after it’s been patched. Then they go on an unlocking spree. This prevents that.
> Reboot is not enforced by the SEP, though, only requested. It’s a kernel module, which means if a kernel exploit is found, this could be stopped.
True. I wonder if they've considered the SEP taking a more active role in filesystem decryption. If the kernel had to be reauthenticated periodically (think oauth's refresh token) maybe SEP could stop data exfiltration after the expiry even without a reboot.
Maybe it would be too much of a bottleneck; interesting to think about though.
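The refresh-token idea might look something like this toy sketch (assumed design for illustration; `ToySEP`, the lease mechanism, and the timings are all inventions, not anything Apple ships): the SEP stops servicing decryption requests once the kernel's "lease" lapses without renewal.

```python
import time

class ToySEP:
    """Toy sketch of a time-limited key lease (hypothetical design)."""

    LEASE_SECONDS = 0.1  # short for demonstration; imagine hours in practice

    def __init__(self):
        self._expiry = time.monotonic() + self.LEASE_SECONDS

    def renew(self):
        # In the commenter's idea, renewal would require the kernel to
        # re-prove its integrity to the SEP, like refreshing an OAuth token.
        self._expiry = time.monotonic() + self.LEASE_SECONDS

    def decrypt(self, blob: bytes) -> bytes:
        # Without a fresh lease, the SEP refuses to service requests,
        # cutting off data exfiltration even without a reboot.
        if time.monotonic() > self._expiry:
            raise PermissionError("key lease expired; reauthentication required")
        return blob  # stand-in for real decryption

sep = ToySEP()
assert sep.decrypt(b"data") == b"data"  # works while the lease is fresh
time.sleep(0.2)  # the lease lapses with no renewal
```

As the replies note, a compromised kernel could likely fake whatever renewal handshake exists, which is part of why a full reboot is the simpler, stronger answer.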
> If the kernel had to be reauthenticated periodically (think oauth's refresh token)
If the kernel is compromised, this is pointless I think. You could just "fake it".
SEP is already very active in filesystem encryption. The real important thing is evicting all sensitive information from memory. Reboot is the simplest and most effective, and the end result is the same.
> Reboot is not enforced by the SEP, though, only requested
We (the public) do not know if SEP can control nRST of the main cores, but there is no reason to suspect that it cannot.
We actually do know: it cannot, directly*. What it could do is functionally disable RAM, but that would basically cause the phone to hard lock and could even cause data corruption in some limited cases.
This is still being actively researched. I have no evidence, but would not be surprised to find out that a SEP update has been pushed that causes it to pull RAM keys after the kernel panic window has closed.
* This may have been changed since the last major writeup came out for the iPhone 11.
If I were looking for low hanging fruit, I suspect it wouldn’t reboot if you were to replicate the user’s home WiFi environment in the faraday cage, sans internet connection of course. Or repeatedly initializing the camera from the lock screen.
From the article:
> Turns out, the inactivity reboot triggers exactly after 3 days (72 hours). The iPhone would do so despite being connected to Wi-Fi. This confirms my suspicion that this feature had nothing to do with wireless connectivity.
How do these things work with devices inside a NAT gateway? Most of our devices are inside a LAN. Even if a server gets started, it won't be visible to the outside world, unless we play with the modem settings.
Now, a hacker/state who has penetrated a device can upload data from the local device to a C&C (command-and-control) server.
But that seems risky, as you'd need to do it again and again. Or do they just get into your device once and upload everything to the C&C server?
This particular feature doesn’t rely on network connectivity or lack thereof.
Here’s some info about how some spyware works:
https://www.kaspersky.com/blog/commercial-spyware/50813/
What are you even talking about?