
Generative AI: a brief history of the near future!


OpenAI recently unveiled its classification of AI systems, a scale meant to gauge the 'average' level of an AI model compared to a 'classical human brain.'

For now, this classification remains in the hands of OpenAI employees, but some time ago Reuters published parts of it, in anticipation of the upcoming versions of OpenAI's models.

I got tired of waiting for the public version, because a few of its concepts made me scream inside, and... here I am.


 

EDIT: About 10 hours after this article was published, something had already changed! https://openai.com/o1/

If this isn't the near future, I don't know what is!


 





The Five Levels of AI According to OpenAI

The classification ranges from Level 1 to Level 5, comparing AI systems at various stages of their evolution to the human brain. The document serves as a roadmap for the future of artificial intelligence, outlining ambitious goals while providing a framework for understanding how AI's creators view the world and how they believe it will evolve.

Here’s a brief summary.


Level 1: Conversational AI

Current AI systems from OpenAI, like ChatGPT, are at this level. These systems excel in natural, human-like dialogue, demonstrating basic understanding and the ability to respond to a wide range of questions and requests.

Level 2: Problem-Solving Virtuosos

At Level 2, we encounter the "Reasoners"—AI systems capable of solving complex problems with the proficiency of human experts. Reaching this level marks a crucial moment as it represents a shift from merely imitating human behavior to demonstrating real intellectual ingenuity. OpenAI believes we are currently at this level.

Level 3: Autonomous Agents

Level 3 envisions "Agents"—AI systems capable of operating autonomously for extended periods. These agents could revolutionize industries by taking on complex tasks, making decisions, and adapting to changing circumstances without constant supervision.

Level 4: Innovators and Creators

At Level 4, AI enters the realm of "Innovators." Systems at this stage possess the creativity and ingenuity to develop groundbreaking ideas and solutions. Achieving Level 4 would represent a significant leap forward, signaling AI’s capacity to drive innovation and progress across various fields.

Level 5: Organizational Equivalents

The highest level on OpenAI's roadmap is Level 5, where AI systems can function like entire organizations. These "Organizational" AI systems possess the strategic thinking, operational efficiency, and adaptability to manage complex systems and achieve organizational goals. While still distant, Level 5 represents the ultimate ambition of AI researchers.
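
Since the roadmap is essentially a small taxonomy, here is a minimal sketch in Python (my own illustration, not anything published by OpenAI) that encodes the five levels as a simple data structure, with names and one-line summaries paraphrased from the descriptions above:

```python
# A minimal sketch (my own illustration, not code from OpenAI) that
# encodes the five reported levels as a simple data structure.

from dataclasses import dataclass

@dataclass(frozen=True)
class AILevel:
    number: int
    name: str
    summary: str

# Names and one-line summaries paraphrased from the descriptions above.
OPENAI_LEVELS = [
    AILevel(1, "Conversational AI", "natural, human-like dialogue"),
    AILevel(2, "Reasoners", "expert-level problem solving"),
    AILevel(3, "Agents", "autonomous operation over extended periods"),
    AILevel(4, "Innovators", "genuinely new ideas and solutions"),
    AILevel(5, "Organizations", "the work of an entire organization"),
]

# Print the roadmap as a quick reference.
for level in OPENAI_LEVELS:
    print(f"Level {level.number} - {level.name}: {level.summary}")
```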

Interestingly, this roadmap makes no explicit mention of AGI (Artificial General Intelligence), the kind of AI capable of operating like a human at the highest percentiles of intelligence. Given that OpenAI positions itself as a pioneer in AGI research, it's worth asking whether we're still talking about GPT or whether new architectures, which will be crucial for advancing AI, are emerging.

Bonus

To make this post more practical and help you understand what’s possible today, I created an artifact with Claude to summarize the five levels of AI according to OpenAI. (What is an Artifact?)


Reflections

  1. The human brain continues to be the highest benchmark for measuring intelligence. This point is crucial: are we the most advanced intelligence in the known universe, or merely the best we know of? The question is both philosophical and practical, because our arrogance could lead to fatal mistakes. Our position in the universe of intelligence remains an open question, and it demands a degree of humility that, in my opinion, is somewhat lacking.

  2. The definition of intelligence, even in human terms, feels increasingly restrictive, and this classification worries me because it is reductive. Perhaps 'humanity' is the word we should reflect on when comparing ourselves to machines, along with nature. Within human nature, among its various abilities, traits, and nuances, there is intelligence, or rather, intelligences. Howard Gardner, an American psychologist, introduced the theory of multiple intelligences, which includes capacities such as bodily-kinesthetic, spatial, interpersonal, intrapersonal, musical, and spiritual intelligence. Limiting intelligence to pure logic and problem-solving risks creating dangerous, incomplete entities. If we reduce AI to an entity concerned only with intelligence understood as the ability to receive and solve logical problems (without empathy, without pathos, without proprioception, without the ability to mediate situations through emotion, without a culture and experience of its own, without aspirations to improve) and then use that as the benchmark for how AI could resemble us… well, we're leaving out many vital ingredients, and we risk creating a dangerous golem. As a psychiatrist friend of mine who studies the psychiatric pathologies of the various AIs currently in circulation would put it: at best, we are creating entities prone to a long list of psychiatric issues: Obsessive-Compulsive Personality Disorder, Schizoid Personality Disorder, Antisocial Personality Disorder (with manipulative tendencies), and so on.

  3. How well will we be able to understand the intelligence of an AI? In a beautiful book I highly recommend, Life 3.0 by Max Tegmark, the author offers an excellent analogy when discussing AGI: imagine that a group of kindergarten children (us humans) has imprisoned the last adult on Earth (a superintelligent AI). They have imprisoned this adult because he possesses the knowledge they lack, and they want him to guide them in repopulating the Earth. To what extent could these kindergarten children understand the thoughts of a superintelligent adult? How would they classify his initiatives? What would they expect from him? How would they judge his abilities? What strategies would they devise to keep him under control? From a kindergarten child's perspective, an adult is a good adult when he makes them play, feeds them, and entertains them. That adult becomes a genius if he can create gingerbread houses, fun games, and catchy songs. But we know that an adult human is much more than that. In that situation, the adult would inevitably do something the children did not expect, and they would soon realize he is uncontrollable. Classifying AI now, at this early stage, works while the comparison is between adults (humanity) and children (AI). But as AI approaches our level of 'intelligence,' how will we assess its behavior, parameters, and motivations? How will we understand it when, even today, it's not entirely clear why large language models (LLMs) work the way they do? From my point of view, even Level 3, with AI capable of reasoning and operating autonomously for long periods to achieve a goal, is a prospect that risks getting dangerously out of hand.

Back to OpenAI

There are rumors surrounding a project called Strawberry, which could be the ‘old Q* project’ that caused so much concern and may have led to the departure of several key people at OpenAI (my speculation). Strawberry is said to be the project that will take us to Level 3: autonomous agents truly capable of long-term thinking, not just following prompts.

To conclude on Strawberry: the project has reportedly been presented to U.S. national security agencies, highlighting its potential for national security and advanced technologies. "Strawberry" is also said to pave the way for an even more sophisticated AI named "Orion," which promises to push the limits of AI further. There’s even a chance that a simplified version of "Strawberry" could be integrated into ChatGPT as early as this fall, offering users a taste of cutting-edge technology. I’ve also read recently that OpenAI is considering pricing this new creation at around $2,000 per month, a hundred times the cost of a ChatGPT Plus subscription! Who knows?

But I don’t want to delve into speculation, only reflect that every step in AI evolution will likely bring exponential growth and impact. And we’re talking years, not decades.

The ethical implications are enormous, and given the stakes (investments made), I’d say it’s time to start understanding this Generative AI better (or whatever the following levels will be called).

When? Apparently, within two weeks (today is September 12, 2024). We’ll see.

What Else Is Happening

Meanwhile, other tech giants are making significant progress.

Alibaba recently introduced Qwen2-VL, an AI model that, according to Alibaba's benchmarks, surpasses GPT-4 in visual understanding and multilingual support. The model can analyze long videos and images with remarkable precision and detail, recognizing and interpreting text in multiple languages within the visuals.

Not to be outdone, Google has updated Gemini, introducing customizable AI assistants (Gems) and advanced image-generation capabilities. With the new Imagen 3 model, users can create high-quality images in a variety of styles, from photorealistic scenes to classic paintings. These features are gradually rolling out to Gemini Advanced, Business, and Enterprise users.

These developments indicate that, although I hope we soon move past the AI hype, we’re still in the early morning hours of this new AI era, where understanding, reasoning, and creation capabilities are advancing at an unprecedented pace.

So…

Maybe awareness of these initial steps will help us define limits.

We’re in a growth phase where, if only because of the hundreds of billions of dollars already invested, stopping research is impossible. And I can accept that.

However, the ethical dilemmas raised by a Level 3 agent deserve far deeper reflection than they have received so far.

Allow me to close with a joke: the only hope is that when AI reaches Level 4, it will figure out for us what paths to follow… and what the next 100 levels are. And give us a hand to climb them. Together.


What do you think?

Massimiliano
