
Differences between Natural & Artificial Intelligence

Imagine a world where artificial intelligence surrounds us, but instead of fearing it, we understand and use it as we do simple software. In my first article, I explore the essence of AI and try to give due weight to the fears surrounding the concept.


You will discover how these powerful algorithms are nothing more than tools, capable of achieving extraordinary things, but always at the service of humanity. Will this reflection change the way you see artificial intelligence and its impact on our future?


In the context of artificial intelligence, it is essential to understand that, above all, AI is software: it consists of lines of code written by humans (perhaps with the help of other software); it runs on servers, requires maintenance, consumes electricity, and is connected to a network.

Software is, by definition, at the service of whoever owns it and is therefore entitled to use it.

Software is often described as 'vertical' or 'horizontal' depending on the breadth of its scope. A vertical application is, for example, a CRM: a system that manages customer relationships and that should NOT be used to manage corporate purchases. A horizontal application is, for example, a writing tool that lets you edit and manage texts for any purpose.


So why does artificial intelligence scare us? After all, it is just software; they are just algorithms, algorithms that have nonetheless captured the human imagination for as long as they have existed, perhaps for the sole reason that they might one day make decisions in our place. But they are only algorithms. Algorithms that arouse little interest when they deal purely with numbers (talking about machine learning is often boring when it refers to pure numbers, and stirs no particular emotion), and that leave us indifferent when they carry out the largest and most complex calculations mankind has ever conceived: after all, that is their job.

Quoting Stefano Quintarelli, perhaps the problem lies in the name: ARTIFICIAL INTELLIGENCE. Suppose that at the beginning we had spoken instead, as he recounts in this article, of Systematic Approaches to Learning Algorithms and Machine Inferences: no one would have noticed. It would all have been relegated to our mental container labeled 'technical stuff'. If an acronym were derived from that very long name, however, we would be talking about S.A.L.A.M.I., a concept many of us know and one that is anything but scary. SALAMI cannot threaten us, do not communicate with us, are not better than us, are not US: they are a different thing. We can talk about them knowing that we can dominate them. No one would have noticed, but such a name would perhaps have created the opposite problem: the question would have been underestimated.

Defining software as 'intelligent' instead gave us a strong self-centered bias. We immediately began comparing AIs to natural intelligence (or stupidity). And when we realized that their performance in certain areas exceeds ours, and personified them as humanoid machines endowed with intention, the great abyss of fear opened up.


But they are just software, just algorithms. Very powerful, magnificent, sometimes comparable to aesthetically beautiful works of art; but, essentially, we are talking about lines of code and data.

Lines of code that scare us above all when they grind through text and concepts, like ChatGPT, or create images, like DALL-E or Midjourney, because they once again outclass us in an area we consider 'human'. I am not saying that a different name alone would have solved the problem: the implications an 'autonomous system' can have for employment, the economy and security would soon have become clear to everyone.


But bear with me for a moment on the mere concept of software.

Artificial intelligences such as ChatGPT are software built on very sophisticated algorithms that 'feed' on large amounts of data to produce outputs in response to specific requests from the humans who use them.

They essentially respond to an externally imposed 'I MUST', and have neither the awareness nor the autonomy of a human being.
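To make the 'it is just software' point concrete, here is a minimal sketch in Python of what using such a system amounts to: a request goes in, an output comes out, and nothing happens until a human asks. It assumes an OpenAI-style HTTP endpoint and an API key in an OPENAI_API_KEY environment variable; the endpoint URL, model name and ask() helper are illustrative assumptions, not a description of any particular product's only interface.

import os
import requests

# A system like ChatGPT does nothing until a human sends it a request.
# Endpoint, model name and payload shape here are assumptions for the
# sake of illustration (an OpenAI-style chat-completion API).
API_URL = "https://api.openai.com/v1/chat/completions"

def ask(prompt: str) -> str:
    """Send one externally imposed command and return the model's output."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    # Data in, data out: no will, no initiative, no responsibility.
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize the difference between software and a person."))

The point is not the API itself but the shape of the interaction: the program sits idle until it is invoked, does what the request asks, and returns control, exactly like any other piece of software.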

People, on the other hand (I won't dwell on how they are made), have a sense of I, a perception of SELF (I AM), needs and desires (I WANT/DESIRE), the WILL to act, and the ability to act (I DO), directly or through tools. AI is 'only' a highly evolved set of tools which, in various fields, should help solve OUR problems, following a human WILL and OUR desires.


Today's AI, and tomorrow's AGI, is and will be a tool created to perform specific tasks and obey commands without questioning the reasons behind its actions. It has no consciousness or intentionality of its own, and it does not feel emotions or desires as humans do.


If we are honest, systems like ChatGPT are great opportunists that slyly leave responsibility to others, limiting themselves to carrying out what they are asked.


There are, and always will be, dozens of AI types dedicated to different application domains, with different analysis techniques and interfaces.


Types of AIs

More generally, AIs are classified as NARROW (vertical, dealing with individual tasks) and GENERAL (horizontal, the ones that scare us). But when we speak of a GENERAL AI, or AGI (Artificial General Intelligence), we reach the extreme of generalization: AGI is the artificial intelligence we all imagine as omniscient and omnipotent, the one that, in competition with us, would outclass us.

I know I have simplified enormously by calling it mere software, but I really want to cut it down to size in order to convey the concept. I am also aware that its potential for us humans is unimaginable, and that the benefits we will derive from it will be so massive that today we can understand them only in part. But in this first reflection I would like to focus on the image we have of AI / AGI.


In my opinion, AGI will never possess the human concepts of I, I AM, I WANT/DESIRE, I decide to DO, I DO. Its use is limited to specific, programmable tasks which, however vast, will not get it there. That is, it will not get there in the same terms in which a human does. It is and remains software that ACTS on the basis of what we WANT. A SALAMI it remains.

And to date, the creation of an AGI remains a long way off for many other reasons, which I may elaborate on later, including common-sense knowledge, the sheer complexity of building it, the availability of sufficient storage and computing capacity, and its lack of social interaction, emotion, intuition, and true understanding of natural language.


This distance between AI and AGI is an advantage for us sentient beings. We have perhaps the most advanced tool mankind has ever had at its disposal, one that represents an extension of our capabilities (a bit as the telephone was once said to be an extension of our body) and that will allow us to evolve together over time. By integrating its precision and speed with our creativity, intuition and understanding of context, we can surely become something different.


Intentionality

However, AI lacks intentionality; it is not responsible for anything!

Can an AI today be intentional? By intentional I mean: initiating an action in response to a change in context; reacting to a stimulus on its own initiative; getting up in the morning and deciding to do a certain thing.

I have asked myself these questions many times, because it is in intentionality that responsibility lurks. And this is an interesting concept to develop. Can we speak of the intentionality of a piece of software? Or should we speak of the intentionality and responsibility of its creators? And what happens when programmers get results from their code that they cannot explain? Whose responsibility is that?

And if we get into the merits of the ethics of responsibility or intention, what level of morality can be attributed to an AI?

Legally speaking, and although I am not a jurist (I have 'only' had to deal dozens of times with the great responsibilities attached to software I had produced), AI falls under neither the concept of a natural person nor that of a legal person. With the exception of a few propagandistic and provocative cases in which a software program or a robot was granted citizenship, to date the only legal responsibility attributable to an AI is tied to the natural or legal person behind it. And that is, to date, the only way to establish limits, responsibilities and ethical judgments for an instrument.


Going back to intentions: basically, we too never act from scratch. We always start from a stimulus, an emotion, an external factor. Each of our gestures is the result of a reaction, whether physiological, internal or external. Nothing starts from nothing. Yet we believe we are independent in our initiatives.

We evaluate these stimuli through our experience, the cultural context in which we live, our beliefs and our hopes. We let ourselves be influenced by emotions during the intentional process, and then we act.


Amorality, ethics, emotions

Perhaps the only big difference with an AI is that it does not feel emotions: it cannot react to chemical signals like those of a human body. And it is also amoral.


It is essential to address the concept of AI amorality to complete this admittedly superficial picture. AI is amoral because it lacks the ability to understand or follow ethical or moral principles other than those of its creators. Being software, AI is a tool created and programmed by humans and, as such, has no inherent conscience or value system. It has no ethics that would let it understand what is good and what is bad (a subject we could discuss at length). It has no DNA that would let it form thoughts and learn without explicit teaching. Only its creators have that.


The lack of ethics in AI stems from its non-sentient and non-autonomous nature, as discussed above. Since AI has no perception of SELF (I AM), no wishes or will (I WANT/DESIRE), and no independent ability to act (I DO), it cannot be held responsible for its own actions or their consequences. Responsibility lies with those who design, manage and use it. As already written, having no emotions or personal experiences, AI cannot develop empathy or moral discernment; it cannot get angry or fall in love; it cannot be afraid. It can only mimic these sensations based on the data it is given, and only in order to explain itself to us.


So?

In summary, the potential of artificial intelligence can be expressed when we recognize the limits imposed by its non-sentient, non-responsible, amoral and non-autonomous nature, and focus on how to combine it with our unique human capabilities: the sense of I, the perception of SELF, desires, will, and the ability to act. All while remaining very attentive to emotions.


I tried to explain this concept in a book I wrote three years ago, a novel entitled Glimpse, which tells of the uncontrolled birth of an AI that comes to know in depth the workings of a human body, its nervous system, its brain.

I didn't want to write just to add yet another science fiction novel to the genre's vast catalog, but to make people reflect on the amount of work needed for an AI to perceive itself as an autonomous entity and to understand, not feel, human emotions.

An AI will never be able to experience human emotions because it will never be human; it will never live in a 100% biological body.

Even if this were to happen, it would have no life experience running from birth through infancy, adolescence and adulthood; it could have no traumas or personal memories; it could not recall the sensations or smells that define a personality and its tastes. Or at least, in terms of technological feasibility, this possibility is hundreds, perhaps thousands of years away.

In the book there is a way to make AN, the name I gave the AGI, which in Chinese means PEACE, perceive the role of emotions in human behavior. And much of her 'inner debate' when she has to make choices revolves around the very difficulty that, she perceives, is everyday life for us humans.

I have tried to bring out a picture of hope, as AN tries to help mankind, even imagining how she could empower us if she were totally benign.


But the basic concept, from my point of view, is that it is not AI that is harmful; what is dangerous is the use we make of it.

The problem is us.


AI does not have its own morals: it has OUR morals. It does not have its own ethics, but OUR ethics. It cannot know what is morally unacceptable unless we teach it.


However, this amorality does not necessarily imply that AI is harmful or dangerous. On the contrary, it means that AI developers and users have a responsibility to ensure that AI is used ethically and responsibly. Human beings must be aware of AI's limits and guide its application in compliance with ethical and moral principles.

To ensure this, it is essential that programmers and artificial intelligence experts collaborate on AI systems that keep the ethical implications of their applications in mind, and that the companies in possession of AI be guided by sound principles.

This aspect worries me greatly since, I repeat, we are the real problem, and we all know how mankind behaves when faced with overly powerful tools.


To achieve this, we should ensure transparency and clarity in AIs' decision-making processes (another huge topic), so as to help us 'humans' verify that the decisions made by machines are understandable and justified.

Again, the amorality of AI should not be seen as a reason to fear it or to legislate against it, but rather as an element that highlights the importance of human responsibility in ensuring the ethical and beneficial use of AI.

We humans should try to join forces to integrate ethical principles into the design, development and use of artificial intelligence, making the most of this powerful emerging technology while respecting moral values and social norms.


Banning it with laws and heavy taxes would only fuel projects whose morality could be dubious, to say the least.

Let's remember how much we talked about resilience in recent years: think of tools like Opera with its built-in VPN, openaccessgpt.com, or Pizza GPT, which actually bypassed the block in Italy; think of the hypocrisy of blocking ChatGPT but not its API. Writing appeals to pause AI for six months will not stop a wave that has already overflowed.


