
Why Personal AI will change everything again

Hey there! Sorry for going missing in action, but my new book has completely consumed my time.

Anyway, are you all tired of hearing about Apple Intelligence and Microsoft Copilot+ PCs being built into your device's operating system, the related privacy issues, and the delays in their European release?

If you’re one of the few who hasn’t heard about these, I’ll touch on them later. For now, I want to focus on the real innovation behind these announcements: the concept of Personal AI or Private AI.



What Are Personal AIs?

When I talk about Personal AIs, I'm not referring to AI models created from scratch just for you. I mean AI solutions, typically open source (or more accurately, open weights, as Datapizza explains well here), mainly generative ones, that run on local devices under your control to ensure better privacy (provided you're good at managing your device's security).

In simpler terms: Personal AIs let you use an AI model directly on your PC or smartphone (Personal AI) or on your server (Private AI) without needing an internet connection or third-party services.

Few things have highlighted the issue of 'privacy' as much as AI has, especially after the social media frenzy of the past 20 years. Users, battered by scandals, have realized that maybe something needs to change in how their data security is handled. Now, they expect the AI world to make this leap.

We've seen the first steps in this direction for a few years now. Solutions like gpt4all.io or lmstudio.ai let you download a chat application, pick from a vast catalog of models (over 740,000 on huggingface.co), disconnect your Wi-Fi, and start using the AI without any internet connection. If you want to go even further, privatellm.app lets you do the same on your smartphone.
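
If you prefer scripting to clicking, the same thing works from code. Here's a minimal sketch using the gpt4all Python bindings (my own addition; the apps above don't require any code), with an example model file name. After the first download, everything runs locally and offline:

```python
# Minimal local-inference sketch with the gpt4all Python bindings
# (pip install gpt4all). The model file is downloaded once; after that,
# generation runs entirely on your machine, no internet required.
from gpt4all import GPT4All

# Example model name; any GGUF model from the gpt4all catalog works.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "Explain in two sentences what an open-weights model is.",
        max_tokens=200,
    )
    print(reply)
```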


How Is This Different from ChatGPT?

Every time you talk to ChatGPT, Claude, or Gemini, you're using a web interface hosted on a server. You send your prompts and receive responses in a 'black box' manner: you don't know exactly what happens along the way or how your data is processed, despite the reassurances in the terms of service.


From the moment you hit SEND, you lose control and have to trust the system. You don't really know where your data goes or how it's processed and stored.

This happens because Large Language Models (LLMs) are, by definition, very large and designed to run on specialized hardware (GPUs, LPUs, and other processors and architectures) whose operating costs are far beyond the consumer market. Moreover, their resource requirements grow with the number of parameters they contain, demanding significant computational power.


In short: the bigger the model (with some caveats still being studied), the more information it contains and the more complex the 'reasoning' it can perform. But it also requires more resources, so much so that running the largest models for personal use wouldn't make sense.
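
To make 'bigger model, more resources' concrete, here's some back-of-the-envelope arithmetic (my own rough figures; they count only the weights and ignore activations and caches):

```python
# Rule of thumb: weight memory ~= parameters x bytes per parameter.
# Weights only; activations and KV-cache add more on top.
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    # (params_billions * 1e9 params) * bytes / (1e9 bytes per GB)
    return params_billions * bytes_per_param

for name, params in [("3B (SLM)", 3), ("7B", 7), ("70B", 70)]:
    print(f"{name}: ~{weight_memory_gb(params, 2):.0f} GB in fp16, "
          f"~{weight_memory_gb(params, 0.5):.1f} GB at 4-bit")

# A 7B model fits in ~3.5 GB at 4-bit: laptop territory.
# A 70B model wants ~140 GB in fp16: data-center territory.
```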


But Why Do You Use an AI Model?

To simplify, I see only two main reasons:

  1. It can generate text based on a lot of useful information it already knows, adding value to your activities.

  2. It can 'reason' with new data you provide in the prompt (there's a short sketch after the note below).

(Remember, AI ‘reasoning,’ actually called inference, shouldn’t be confused with our concept of reasoning. I’d like to use ‘infer’ every time I write ‘reason,’ but then you’d accuse me of being too technical.)
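
And here's the promised sketch for reason 2: the model infers over data it has never seen in training, simply because you put that data in the prompt. It reuses the local model from the gpt4all snippet above; the expense data is invented:

```python
# Reason 2 in practice: inference over brand-new data supplied in the prompt.
# Reuses the `model` object from the gpt4all sketch above; data is made up.
expenses = """\
2024-05-02  Office chairs   1,240 EUR
2024-05-07  Cloud hosting     380 EUR
2024-05-19  Team dinner       410 EUR
"""

prompt = (
    "Here are this month's expenses:\n"
    f"{expenses}\n"
    "Which single item is the largest, and what is the total?"
)

with model.chat_session():
    print(model.generate(prompt, max_tokens=150))
```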


Without diving too deep into the why, the key point is this: there's an efficiency threshold between model size and reasoning ability that lets smaller models run on local devices while retaining excellent reasoning capabilities, especially over your own data.



Main differences between standard and private AI models

Alongside LLMs there are SLMs (Small Language Models): trained on less information and requiring far fewer hardware resources, yet still offering great analysis capabilities for your requests and your data.

Got it?


Apple & Microsoft’s Move

Apple with its 'Intelligence' and Microsoft with Copilot+ PC are betting on small models to give you ultra-private conversations. They aim to evolve their operating systems, macOS, iOS, and Windows, into 'LLM OSes,' as Andrej Karpathy dubbed them a few months ago. Essentially, they want to integrate generative AI models directly into the device, enabling them to 'reason' within your current work context, providing reliable and useful information or executing operations without your data ending up in 'black boxes' on some cloud.

Integrating these systems also tackles a major issue with AI output quality: the quality of the context, as I mentioned in a previous article here. Every time you start a conversation with a generative AI, you need to explain who you are, what you do, and what exactly you want in order to get quality results.


If an LLM or SLM could track everything you do on your device, access all your data, know in real time what you're doing and what you need, and perform complex tasks with a few clicks instead of elaborate prompts, it would save a lot of time.
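
One way to picture it (a toy illustration of mine, every name hypothetical): the OS keeps a standing profile of you and your current activity and silently prepends it to each prompt, so you never have to re-explain yourself.

```python
# Toy illustration of an 'LLM OS' assembling context for you.
# Everything here is hypothetical; a real OS would pull this from
# local files, settings, and the foreground app.
import datetime

def build_context_prompt(user_request: str) -> str:
    device_context = {
        "user": "Maria, freelance architect",  # invented profile
        "active_app": "email client, drafting a reply to a supplier",
        "local_time": datetime.datetime.now().strftime("%H:%M"),
    }
    context_block = "\n".join(f"{k}: {v}" for k, v in device_context.items())
    return f"[Device context]\n{context_block}\n\n[User request]\n{user_request}"

print(build_context_prompt("Make this reply more formal."))
```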

Moreover, the required computing power shifts to your own device, with no need for a data center of immense capacity (I'll get to AI's energy costs eventually).


And for Businesses?

For businesses, this means two things:

  1. Providing employees with a work environment with Personal AIs that preserves data confidentiality as much as possible.

  2. Evolving the concept into Private AIs: bringing any 'open weights' model into their information system infrastructure, knowing exactly how data will be processed and used.

The challenges will include managing cybersecurity risks, implementation costs, and the extensive operations work needed for maintenance (AIOps).

For instance, a bank could use a private AI server to analyze financial transactions in real time, detecting fraud without sensitive data ever leaving the company infrastructure. Or a company could build an AI assistant for sensitive HR information, ensuring the data stays within the company's perimeter.
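
For the curious, here's a hedged sketch of the Private AI pattern behind that bank example, assuming an Ollama server on a hypothetical internal host. The point is architectural: the prompt and the transaction never leave the company network (a real fraud system would obviously not hinge on a single LLM call).

```python
# Private AI sketch: an open-weights model served by Ollama on an
# internal host (hypothetical URL), so data never leaves the perimeter.
import json
import urllib.request

INTERNAL_LLM = "http://llm.internal.example:11434/api/generate"

transaction = "2024-06-12 03:14, card ending 0431, 4,900 EUR, merchant unknown"

payload = json.dumps({
    "model": "llama3",  # any open-weights model pulled onto the server
    "prompt": "Classify this card transaction as NORMAL or SUSPICIOUS, "
              f"with a one-line reason:\n{transaction}",
    "stream": False,
}).encode()

req = urllib.request.Request(
    INTERNAL_LLM, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```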


So What?

As I’ve been saying in my workshops for months, expect to be ‘surrounded’ by smaller, specialized, and local models that can provide limited AI but with guaranteed data confidentiality.

What's that? It's not new? Sure: ever since your smartphone keyboard started suggesting the next word to type, you've been using an AI model. The difference is that these are far more powerful and versatile 'objects' you can interact with directly.

And as Apple announced, the on-device model itself will recognize when a request needs to be handed off to a more powerful, cloud-hosted model!
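
Reduced to a toy heuristic, that routing could look like this (my own illustration, not Apple's actual logic):

```python
# Toy router: stay on-device when the small local model can cope,
# escalate to a bigger cloud-hosted model otherwise. Thresholds invented.
def route_request(prompt: str, needs_world_knowledge: bool = False) -> str:
    ON_DEVICE_LIMIT = 2_000  # hypothetical prompt size the local SLM handles well
    if needs_world_knowledge or len(prompt) > ON_DEVICE_LIMIT:
        return "cloud"      # hand off to the larger hosted model
    return "on-device"      # stay local, stay private

print(route_request("Summarise my last email"))      # -> on-device
print(route_request("Compare EU AI regulation across all member states",
                    needs_world_knowledge=True))     # -> cloud
```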

While waiting for these solutions from major players to mature, it won’t hurt to start experimenting with some models on your devices and see how the world will change in a few months.

If you’re patient, I’ll discuss this further in my book “Assumere un’AI in Azienda,” coming out in September.


 

📢 If you liked the content, help me by spreading it.

📝 Subscribe to the Glimpse Blog so you don't miss any updates.

📚 Check out 'Glimpse,' my novel about artificial intelligence

🗓️ Contact me if you would like to organize a Workshop on AI or for any ideas.


See you soon!

Maximilian


