
How do iOS devices perform on-device machine learning on photos when most of the full-resolution photos are stored only in iCloud?

When iCloud Photos is enabled with "Optimize iPhone Storage", photos are uploaded to iCloud and only a low-resolution copy is kept on the phone for browsing; the full-resolution photo is downloaded on demand when the user opens it.

The Photos app performs on-device machine learning when the device is idle and charging (typically while you're sleeping) to detect faces in your photo library and build the "People" album. Face detection is just one of the features the app uses ML for.
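For context, this kind of on-device face detection is exposed to developers through the Vision framework. Below is a minimal sketch of my own (an illustration, not Apple's internal Photos pipeline) that runs Vision's face-rectangle detector on an image:

```swift
import UIKit
import Vision

/// A minimal sketch (my own illustration, not Apple's internal Photos pipeline):
/// run Vision's on-device face-rectangle detector on a UIImage and return the results.
func detectFaces(in image: UIImage, completion: @escaping ([VNFaceObservation]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }
    let request = VNDetectFaceRectanglesRequest { request, _ in
        // Each observation carries a normalized bounding box for one detected face.
        completion((request.results as? [VNFaceObservation]) ?? [])
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    // Run off the main thread, the way a background indexing task would.
    DispatchQueue.global(qos: .utility).async {
        do {
            try handler.perform([request])
        } catch {
            completion([]) // detection failed; report no faces
        }
    }
}
```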

What confuses me is that the low-res images just don't seem good enough to run ML on, especially for face detection. So how can it analyze those low-res photos and still give correct predictions?

I have two guesses:

  • Don’t care about low or high res, just analyze whatever is on the device. (I don’t really believe this is the case, as the face detection is too accurate to be the result of ML on roughly-480p thumbnails.)

  • On-demand loading: download the original photo from iCloud while the ML algorithm runs, then delete it from the device when it's done; a sketch of what such a fetch might look like follows this list. (Isn't this a waste of bandwidth for the iCloud servers?)
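If guess #2 is how it works, the mechanism would resemble PhotoKit's iCloud-aware image requests. This is a hedged sketch of my own using the public PHImageManager API, not Apple's actual implementation:

```swift
import Photos
import UIKit

/// A sketch of guess #2 using the public PhotoKit API (my illustration, not Apple's
/// actual Photos pipeline): request the full-resolution image for an asset and allow
/// PhotoKit to download the original from iCloud if only a thumbnail is stored locally.
func requestFullResolutionImage(for asset: PHAsset,
                                completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .highQualityFormat // ask for the best available version
    options.isNetworkAccessAllowed = true     // permit an on-demand iCloud download
    options.progressHandler = { progress, _, _, _ in
        print("iCloud download progress: \(progress)") // only fires if a download is needed
    }
    PHImageManager.default().requestImage(for: asset,
                                          targetSize: PHImageManagerMaximumSize,
                                          contentMode: .default,
                                          options: options) { image, _ in
        completion(image)
    }
}
```

Whether the Photos app actually does something like this during its overnight analysis, and whether it discards the original again afterwards, is exactly the part I'm unsure about.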
