Inno Yolo
  • Home
  • Why Inno Yolo
  • About the Founder
  • Contact
  • Impressum | Imprint
  • Datenschutz | Data Privacy

"I write what I like" - Steve Bantu Biko

3/7/2024

Why The Announcement Around Apple Intelligence has Ruffled Feathers

Image generated using Bing CoPilot Designer 
On 10 June 2024, Apple announced that it is bringing Apple Intelligence directly to our Apple devices. Apple Intelligence is tipped to leverage generative models to bring a smart, context-aware personal assistant to every Apple device. Now, you might be asking at this point - isn’t that a description of Siri? If you use Siri, however, you’ll know that the difference between Siri and Apple Intelligence is probably the difference between a letter of intent in the business world and an actual binding contract (maybe that’s an expression of hope on my part on the latter). Siri has trailed behind other voice assistants like Alexa and the Google Assistant. An attempt to address the shortcomings of Siri might actually be the exact reason why the announcement of Apple Intelligence has ruffled some feathers.
 
Apple has in the past stood as a stellar example that you can respect users’ privacy and at the same time be one of the most successful companies in the world (assessed using a commercial lens). The pressure on Apple has been intense for some time though, most recently seen when Apple lost its spot as the world’s second most valuable company to Nvidia after Nvidia added US $150 billion to its market cap. Nvidia’s gains in the past year have been attributed to the explosion in demand for its artificial intelligence chips resulting from the generative AI boom. Indeed, you’ll find that there’s hardly anywhere you “go” in the digital universe without being offered some new generative AI feature or getting exposed to the outputs of those tools (case in point: the above image).
 
The market has been eagerly waiting for Apple to respond, perhaps more eagerly than Apple users (at least in certain quarters). Indeed, on 13 June, three days after the Apple Intelligence announcement, Apple rose past Nvidia and displaced Microsoft as the most valuable company in the world. The battle of the tech titans continued throughout June with these three companies trading places – most recently, Microsoft, Apple, Nvidia, Google and Amazon occupy the top five spots, in that order.
 
So why has Apple’s launch of Apple Intelligence been so controversial and what do data privacy together with AI Ethics have to do with it?
 
Most of Apple’s products are privacy respecting because Apple deploys its machine learning and generative artificial intelligence models locally on devices, using a federated learning approach combined with trained model sharing. This allows user data to stay under the control of users, rather than being transferred to Apple’s servers for on-server processing, as is commonplace with other technology providers.
 
Brrrr…say what now? 
Ok, let’s take a step back…
 
If you’re a smartphone user, you probably take photos regularly. Irrespective of which smartphone you use, you regularly get specially curated videos showing you special memories and moments, e.g. from a recent holiday or family dinners, complete with background music that suits the video. These videos are compiled by artificial intelligence models that are trained to:
  1. identify, collate and sequence different images in your image gallery based on the probability that they can be associated with a specific event with positive sentimental value. The model analyses people, places, dates, locations, activities, biometrics, facial expressions, natural phenomena and other markers in the images to categorise and connect them;
  2. create a title that most likely speaks to said sentimental event, e.g. birthday celebrations, journeys or special moments with (assumed*) family members; and
  3. add music to the video image collage and present it to you.

*assuming you haven't provided this information in some form or another. 
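The collation step above can be sketched in code. The following is a minimal, illustrative sketch (not Apple’s actual algorithm, and all names and numbers are invented): it groups photos into candidate “event” clusters purely by timestamp proximity, the kind of first-pass signal such a model would combine with locations, faces and other markers.

```python
# Illustrative only: cluster photos into candidate "events" by
# timestamp proximity. A real model would also weigh locations,
# people, activities and sentiment.
from datetime import datetime, timedelta

def group_into_events(timestamps, gap=timedelta(hours=6)):
    """Photos taken within `gap` of the previous photo are assumed
    to belong to the same event. Expects sorted timestamps."""
    events, current = [], [timestamps[0]]
    for t in timestamps[1:]:
        if t - current[-1] <= gap:
            current.append(t)
        else:
            events.append(current)
            current = [t]
    events.append(current)
    return events

photos = [datetime(2024, 6, 1, 10), datetime(2024, 6, 1, 11),
          datetime(2024, 6, 1, 12), datetime(2024, 6, 8, 19)]
print(len(group_into_events(photos)))  # two separate events
```

In this toy example, the three photos from the morning of 1 June fall into one event, while the photo a week later starts a new one.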
 
With other tech giants, the artificial intelligence model is deployed and managed centrally on servers owned and operated by that company. As a result, for your pictures to be used by the model on those servers, your data must be transferred onto those servers for centralised processing. What’s made Apple different so far is that instead of bringing your data to a central point, they’ve deployed their model onto your device, thus removing the need to transfer your data to centralised servers for processing. This eliminates the data privacy and security risks that arise from moving and storing user data on their servers. So, they’ve been able to provide you with the same special moments videos while respecting your privacy and leaving your data less susceptible to breach.
 
How they do this is as follows (we’ll get into the detail and then zoom back out later):
 
Apple designs, trains (centrally) and then deploys machine learning and generative models (read: artificial intelligence models) onto your device (locally), where they leverage your data on-device to generate the AI model outputs you see, e.g. the family moments videos created and presented to you in the gallery app. This happens without Apple moving your data off your device for processing, i.e. local or on-device processing. Key here is that your data remains on your device and therefore stays private. Apple doesn’t take ownership of it, doesn’t use it to create a marketing profile of you, and doesn’t store it for later use.

The locally deployed AI models learn and improve from the outputs they generate, producing model improvement values. Only these improvement values are sent back to Apple, where the values emanating from various devices are aggregated and used to improve the applicable AI model centrally. The improved AI model is then redeployed. This represents a continuous deployment, learning and improvement cycle that allows AI models to learn and improve while user data stays protected.
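That cycle is essentially federated averaging. The sketch below is a hedged illustration of the idea, not Apple’s implementation (all function names and values are invented): each device derives only a weight delta, the “improvement value”, from its private data, and the central server averages those deltas to update the shared model before redeploying it.

```python
# A minimal federated-averaging sketch: user data never leaves the
# device; only weight deltas ("improvement values") are aggregated.
import numpy as np

def local_update(global_weights, local_gradient, lr=0.1):
    """On-device: compute an improvement value (weight delta)
    from private data, without sharing that data."""
    return -lr * local_gradient

def aggregate(deltas):
    """Centrally: average the improvement values from many devices."""
    return np.mean(deltas, axis=0)

# Central model weights before this round
global_weights = np.zeros(3)

# Each device derives a gradient from its own on-device data;
# only the resulting delta is sent back to the server.
device_gradients = [np.array([1.0, 0.0, 2.0]),
                    np.array([3.0, 2.0, 0.0])]
deltas = [local_update(global_weights, g) for g in device_gradients]

# Server-side aggregation, then redeployment of the improved model
global_weights = global_weights + aggregate(deltas)
print(global_weights)  # prints [-0.2 -0.1 -0.1]
```

The design choice worth noticing is what crosses the network: raw photos and gradients over them stay local, and only the averaged delta shapes the shared model.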
 
Another reason why Apple has been a beacon of hope around respecting user privacy relates to the many different functions and features focused on ensuring that you can choose to enhance your privacy and mitigate the risk of being tracked by websites across the web. This ranges from the usual cookie blocking functionality to Apple’s private relay. Just yesterday, I was informed that an app from a certain company with a firm position among the world’s top 10 most valuable companies was trying to collect data from other apps, and I was asked to decide whether I wanted to permit this, which I didn’t and still don’t.
 
Apple users have come to trust that, although Apple may be imperfect, the company strives for a world where users are not the product and privacy is a given and not something we as users have to buy back. This is not the case with many technology giants. In fact, had it not been for legislation like GDPR, the Digital Services Act and now the EU AI Act, we’d probably be ill-informed about the number of vendors our data and behavioural patterns are sent to when we decide to use a website and accept their cookies. The same goes for many apps.
 
Back to Apple Intelligence. What’s got people irate is that a company that has built this level of trust and credibility around privacy and ethics in its handling of user data is now joining the AI rush, which involves using people’s work and data without their express permission and without engaging the topic of compensation.

To be fair, Apple has taken pains to highlight that they aim to respect data privacy regulations and that they’re still aligning to the principles they’re known for. However, it’s worth highlighting that: 
  • There’s now, and there likely will be in the future, a stronger emphasis on off-device processing since large generative models use a lot of processing power and energy to generate their outputs thereby making on-device processing challenging; 
  • To make room for running their generative models on the cloud, Apple has had to provide reassurance that Apple private cloud compute now extends the trusted security and privacy associated with their on-device processing to the cloud; 
  • They’ve had to specifically highlight measures that focus on ensuring that data that gets processed on their servers is “never exposed nor retained”. This clearly shows that they know they’re walking a tightrope and that there are real risks in undertaking this strategy; 
  • Adding to that they have committed to providing access to the code on their servers for inspection by independent experts so their users can be assured that nothing untoward is happening with their data and that they live up to their privacy and security promises; 
  • However, most controversial is the affirmation that Apple recently joined and will continue to be part of the tech movement to scrape and use data from the web to train their models without addressing the controversies around the ethics of using other people’s data without their express permission and without providing compensation for said use. It’s mentioned that web publishers can opt out…but until then? We all know that information takes a while to propagate so the scraping will continue while many lack awareness. This offer is probably not meant to apply retrospectively. Additionally, the ethics of opt-out vs. opt-in are not without controversy and are worthy of discussion; 
  • The other high controversy point has been the introduction of ChatGPT across Apple platforms. Whilst some may look positively upon this, there is some controversy around the OpenAI project, which began as a non-profit, has leveraged other people’s effort and IP without compensation, and is now turning for-profit – still without compensation. Additionally, it’s known that ChatGPT hallucinates, and there have been examples where generative models have been gamed by those wanting to make a point. The level of corporate citizenship and responsibility around deploying ChatGPT without the corresponding user education falls below what we should be able to expect from Apple. There’s also enough there to ask whether there is enough of an alignment in values and principles between Apple and OpenAI for them to work together without posing a challenge to Apple’s reputation for privacy and data protection. 

Alas, at the end of the day, Apple Intelligence has been launched and has received a positive market response. The horse has bolted. Apple has also tried to walk the tightrope between commercial and ethical interests. While in some areas there are real concerns, in other areas they have sound technical plans for continuing to protect their users. Time will tell just how effective said plans will be, but they've gone much further than any other technology giant has. 

That said, these developments are definitely something to watch to see if Apple will be able to scale its artificial intelligence offerings without being seduced by the commercial gains experienced by competitors it once stood on the bastion of privacy to compete against. 

After all, for some of us, the security of our data and the commitment to data privacy has been why we've stayed within the Apple universe despite Siri's not so fabulous answers. Organisations need to respond to market forces, but they need to take care that they don't lose track of what the customers who pay for their products and drive actual return to shareholders expect from them.  

© COPYRIGHT 2025. ALL RIGHTS RESERVED.