
//Conversing with AI: expectations and results

More and more people are using chat AIs like GPT, Bard, and Claude. And more and more of them are telling me how happy they are with the results they are getting.

However, many of them are perplexed, sometimes angry, because they can't get EXACTLY what they want.

And they often take it out on the AI: they mock it, belittle it, and defend human primacy.

Is it a problem of expectations? Of our skills, or of the AI?


What do GAIs need from us?

GAIs (Generative Artificial Intelligence systems) are getting us used to ever simpler interfaces. Now all you need to know is how to write.

But let's remember that they are still software:

  • They therefore need to be known and studied.

  • They need us to understand their limits, because they cannot recognize those limits themselves.

  • They need us to tell them things clearly and precisely, and to spell out the results we expect from them.

  • They basically need GREAT PROMPTS!

At first sight it seems easy: every time we produce something (a text, an image, some code, a report, even an email or a WhatsApp message) we are used to executing the steps in sequence, based on our experience.

But the process by which our intuition becomes action is usually mental, not spoken: our mind works out the concepts, then our body does what the mind tells it to do.


But, if you think about it, we never think aloud about the individual steps.

When we get a message saying

“Hi, I'm Riccardo, can you send me the presentation on product XY that you made last week to the Marketing team?”,

we know exactly how to click REPLY and attach the requested document, perhaps adding a note to thank him for the request (something we only remember to do as we compose the email). All automatically, without thinking too much.


But with a GAI we have to think about all the actions to be performed, describe them in detail, anticipate everything that needs to happen first, and then explain it to the machine. Something like:

“Retrieve the presentation made on Wednesday 7 July at 14.45, save it as a PDF, send the PDF as an e-mail attachment, then, in an informal tone, tell Riccardo that you are happy he asked for the presentation, greet him, and suggest meeting sometime to talk about it.”
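
To make this concrete, here is a minimal sketch of what spelling the steps out looks like when you address a GAI programmatically. It assumes the OpenAI Python SDK with an API key set in the environment; the model name is illustrative, and of course the model can only draft the e-mail text here, it cannot actually retrieve files or send mail.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Every step we would perform "automatically" has to be spelled out.
    prompt = (
        "Draft an informal e-mail to Riccardo. "
        "Tell him you are happy he asked for the presentation on product XY, "
        "mention that it is attached as a PDF, thank him, greet him, "
        "and suggest meeting sometime to talk about it."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are an office assistant that drafts e-mails."},
            {"role": "user", "content": prompt},
        ],
    )

    print(response.choices[0].message.content)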

Imagine this issue scaled up to more complex problems: are you able to detail every logical step that needs to be taken?


Explaining exactly what you want, and how you want it done, is a skill few people have: it requires clarity of vision, written communication skills, and the ability to break down complex concepts.

And if the GAI is of American origin, it is better to learn how to ask it questions in English to get even better results.


So, what is the key skill for using GAIs? Knowing how to communicate in writing, and knowing how to evaluate the results critically.


GIGO (Garbage In → Garbage Out)

An old adage in computer science, Garbage In → Garbage Out (GIGO), holds that if you feed poor-quality data into a system, you will get poor-quality results. This applies to AI in particular.
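
A toy sketch in Python (with made-up numbers) shows the adage in miniature: the computation itself is perfectly correct, yet garbage input still yields a garbage answer.

    # GIGO in miniature: the formula is right, but the input is dirty.
    readings = [21.5, 22.0, -999.0, 21.8]  # -999.0 is a sensor error code, not a temperature

    average = sum(readings) / len(readings)
    print(f"Average temperature: {average:.1f} °C")  # about -233.4 °C: garbage out

    # Cleaning the input first gives a sensible result.
    valid = [r for r in readings if r > -100]
    print(f"Average temperature: {sum(valid) / len(valid):.1f} °C")  # 21.8 °C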

This is a big problem, and it is mostly addressed during the model training phases (albeit within the limits of the ethics, culture, and biases of those who do the training). But it also applies to prompts.

If your question is vague, confusing, or poorly worded, you'll get answers that may seem good, but have the same flaws as the original question.

You can see this for yourself by asking a GAI about topics you know very well, topics on which you are competent. It is always important to balance expectations with an awareness of the system's limits.

For example, I have a lot of trouble explaining to MidJourney, a GAI specialized in generating images, how I want an image to be generated. Because I'm not an expert on the subject, I can't describe the pictures well. That frustrates me a lot, takes a lot of time, and forces me to study and try again and again, because I expect MidJourney to be able to do more. But the problem is me, as you can see from these generated images. I just wanted a flow of garbage data arriving from the left, passing through an AI, and always exiting on the right as garbage: I'd say I couldn't pull it off!


MidJourney prompt: “Starting from left a floating Flow of garbage directed to an AI robot that tries to collect and sort all this garbage without success and outputs garbage itself. --ar 16:6 --s 250”

MidJourney prompt: “a flow of garbage starting from left side of the image in a funnel connected to a perplexed AI Head that outputs even worst garbage. --ar 16:6”

In essence: the more we know (and the more we are conditioned by what we have learned), the more we realize how much we don't know.

If we know nothing, we tinker a bit, expect to be experts, and assume the GAIs will solve all our problems for us. And when they don't, we blame them.


The Dunning-Kruger effect

This is a very complex philosophical and psychological concept, and I certainly won't be the first to explain it, since Socrates already spoke about it with his “I know that I do not know”.

But I think it is essential to share it because once understood it can help us to obtain different results (in many fields of life).

Trying to be as simple as possible:

The Dunning-Kruger effect is a psychological concept that describes a cognitive bias. According to this theory, people with limited skills or knowledge in a certain field tend to overestimate their abilities, while highly competent people tend to underestimate theirs (although not always).

Psychologists David Dunning and Justin Kruger, of Cornell University, developed this theory in 1999. According to them, the lack of awareness of one's own incompetence stems from the fact that the skills needed to be good in a given field are often the same skills needed to recognize competence.

In other words, those who are not competent in a field do not even have the necessary skills to recognize their own incompetence. Conversely, highly competent people tend to assume, erroneously, that the skills they possess are common to everyone.

The result of this phenomenon is a variety of problems, including lack of preparation, overconfidence, and the presumption of competence where none exists.


Do AIs suffer from it too?

Sure!

We should also talk about the fact that social media and a bit of the whole web suffer from it, but here we will focus only on AI.

AIs are not self-aware: they know only what they have been fed as training data. They are not curious and, to date, are not even able to 'decide to study on their own' as any human would.

They do not have the ability to recognize the limits of their own competence, nor to understand how little they know beyond the data with which they have been trained.

Furthermore, just as a person with a low level of proficiency might overestimate their own abilities, an AI system could "believe" it is capable of performing tasks for which it has not been adequately trained, producing unreliable or even dangerous results. (Here I cannot fail to mention my two posts on AI hallucinations.)

If we then assume that they know 'everything there is to know', you can see how easy it is to get poor, misleading, or dangerous results.

In short, a kind of crossover Dunning-Kruger effect.

Some of us may overestimate the capabilities of machines, believing that AI can do more than is currently possible, while others may underestimate them, not realizing how advanced AI can be and how effectively it can be used. All of this mixed with our presumption, or not, of competence.

In both cases, the keys to overcoming this type of bias are education and awareness.

Just as awareness of our ignorance can help us become wiser and less conditioned, awareness of the limits and potential of AI can help us use it more effectively and responsibly.


So, what?

Taking at face value every stream of garbage that an AI provides in response to our poorly posed question will only flood us with errors and biases, both human and synthetic.

If we completely delegate every answer to the AIs without bothering to check and deepen it, the chances that we will become even lazier and more incompetent are evident.

But if we make the effort, and have the humility, to keep improving and to learn how to use them, exploiting their results to improve ours by adding that 'touch of humanity', the effects and benefits will be exponential.

On the other hand, underestimating them too much because they don't give us the desired results risks making us miss immense opportunities.


So, next time you use an artificial intelligence, remember that you are the human, the one who knows, the one who decides. Take responsibility for your words and thoughts. AI is just a tool.


Also remember that when you use an AI you are learning. You are learning about technology, but also about the issues you are asking about, and especially about yourself: how you think, how you express yourself, how you define problems and how you interpret answers. You are developing new skills, new ways of thinking and new perspectives.



 

As always, I invite you to reflect, to ask questions, to spread ideas. And if you haven't subscribed to the blog, do it now to stay updated.

And by the way, do you like these posts? Let me know in the comments or contact me!

See you next time!


