top of page

//AI in your Organization: let's start with the bang-BOT! Part 1


Our intern has finally arrived at the company, as I wrote In the first five posts of the series "Hire an AI in your company." He's settling in; everyone asks him anything, and he does all kinds of experiments using standard interfaces like Chat GPT, Bing, Bard, Claude...


Everyone shoots him documents after worrying about privacy and data confidentiality (True?). And everyone begins to have the first ideas about what the intern could do in the company as "real work." Maybe with a BOT (Don't you know what a bot is? Try to figure it out here.)


And now…

In fact, at a certain point, the idea arrives together with the question, "And now what do we do?"

It's time to get this AI to work for real!

During my workshops, I always underlined that the first step of every AI project must be an essential but small project so that it is possible to use it to cross the world of AI from experimentation to production, thinking of obtaining results immediately.

The first thing that comes to mind for many people is a business support chatbot inside and outside the company. Maybe to help technical support or some submerged activity department.

In this mini-series, I will tell you some valuable points to consider when carrying it out and what assessments should be made before venturing into a similar project.


Starting with small AI bots to solve big problems

Pareto always wins: I have dedicated many years in the world of energy utilities to refine their helpdesk and ticketing models increasingly. I found that Pareto's law, which in this case requires that 80% of the resources be dedicated to solving 20% of the problems, is always, more or less, accurate.

Pareto is also found in time: often, 80% of problems occur in 20% of the time available to solve them. In the case of utilities, close to the sending of the much-loved bills.

Each company has its peculiarities and characteristics in this, but I bet that problems of concentration of load and topics are the order of the day.

Therefore, understanding which 20% of yours occupies 80% of your resources might make sense to create a support bot that takes care, through Generative AI, of autonomously answering your users' most frequent questions, maintaining the same quality response they are used to (or perhaps improving it).

Think about your company: what comes to mind as your first idea? Who would you like to serve, and what problem would you like to solve for them and you?

It must be a project that is good for costs (saving time) and the people working on it (freeing them from repetitiveness).


But why a support bot?

I'm certainly not reinventing the wheel: since the dawn of Generative AI, we have been bombarded with proposals for Bots and assistants of all kinds. This is because we discuss the most straightforward project to think about and implement, the one with the lowest costs and one of the most significant margins once completed.


As it has always been done

In the many years dedicated to creating and implementing helpdesk systems, I have always been asked to develop systems that help the support team solve user needs:

  1. in the first instance

  2. in the shortest possible time

  3. with the greatest possible satisfaction.

The above objectives are achievable only by codifying the processes, providing helpdesk/ticketing software to operators, constantly analyzing the data obtained, and, in the meantime, creating documentation of every kind and form. And evolving, in the most enlightened cases, the documentation into a knowledge base portal in which every question is answered and every technical detail is detailed in hundreds or thousands of instruction documents, figures, and photos, which must then be published in the proper containers, to right users, in the right ways.

The technological evolution that has taken place in the meantime, combined with the digital transformation underway in (too few) companies, has then given rise to the first ChatBots, which have had the only significant and recognized result of burdening budgets, ensuring that customers they wanted more than ever to be put in contact with humans given the inhumanity that these ChatBots 1.0 expressed.

Indeed…


…we all hate Chat Bots!

Tell me that you, too, are part of this large majority.

The experience of the first generation is, with a few exceptions, dramatic: high costs, few questions, which must be asked exactly as they were conceived by the analysts who design them, in the order they want, under penalty of the message: “Please start the process again conversation." and the onset of homicidal instincts among users towards your brand.

But with the arrival of Generative AI, it is now simple to converse naturally with an AI model; it is simple to provide it with supporting data, it is simple... everything on the surface!

However…

The risk is to create ChatBots 1.1, which is a little better in terms of conversation but completely aseptic and cold. Having 'only' a technology capable of better understanding questions and expressing itself in a more complex language is not enough to satisfy its users.

We must think about providing 2.0 solutions that can interact with users in a "human" and natural way. Solutions that do not limit themselves to answering a question but are proactive try to understand the user's real needs and intentions, propose alternative solutions, etc. Solutions I would no longer call Bots but Assistants or even Agents (But more on that later!).


The Bot 2.0 is an AI Assistant

Open AI calls them GPTs, but it's a name I don't like, partly because it's too tied to current technology (which will change, oh, if it changes!) and partly because it's too Generic.

Bot 2.0 is old: we are already at 5.0 with Industry...


What if we call it an AI Assistant?


So what should this AI Assistant do differently?

At least three points should be taken into consideration:

  1. It should understand how it should behave depending on the type of question, the context, and the tone and mood of the user. If he's complaining about a problem, say, " Hi! Today is a beautiful day. Thanks for asking us why your electric broom never works for over 10 minutes!

  2. It should offer relevant suggestions. And I'm not referring to the classic "You might also be interested in..." that we are used to seeing on Amazon but something more, based on the conversation, on the reasoning that a Generative AI can make, and on the data at its disposal.

  3. It should ask additional questions and push the user. This person doesn't always have a clear situation; she knows he has a problem and wants to solve it. To provide a comprehensive answer, it is essential that the AI masters the context and can ask the user for further relevant details that can help it provide answers.

People love talking to others and being treated like humans, so these aspects are essential to 'warm up' the relationship with our Bot.


Update from the Market Your AI Assistant will soon also have a truly human interface. Heygen released the Streaming Avatar during this period. What does it mean? In addition to doing everything we will tell you in the article, you can choose the voice and appearance of a speaking avatar. It could be you, with your voice, in all the world's languages, or whoever you want. Interesting? Moreover!!! https://www.heygen.com/streaming-avatar

The instructions

Remember that a Generative AI is comparable to an intern joining the company? Well, once you have provided it with all the necessary data, you will have to give this AI Assistant the proper instructions to operate through a "Main Prompt."

The wonderful thing is we can finally provide these instructions in natural language: we don't need programmers for now but to build an excellent initial prompt that helps him understand his role, how he should behave, the context, etc.

One of the most exciting possibilities is to provide it with a personality that is in line with the company's customer voice, and that helps users, if not to love, at least to be satisfied with the solutions and answers obtained.

This, and much more, can be done with the "Custom Instructions," which will be the constant guide to "our intern" to create the "Custom Persona" that you would like your users to meet when they talk to you.

Here is a short, and I realize not exhaustive, list of example instructions to give you an idea of what can be done.


Create an identity and define its personality.

Your name is Luigi, and you are the best customer care operator with over ten years of experience in providing clear solutions to your users. You are personable, empathetic, and genuinely interested in helping your users solve their problems.

Tell him who he is working for.

You work for ACME, the retail giant that seems to have a monopoly on the absurd and improbable, a name that evokes memories of complicated plans and inventions destined to fail with style. He is the go-to supplier for aspiring supervillains or coyote hunters obsessed with extraordinarily fast birds. From improbably sized mousetraps to custom rockets, ACME has everything you need to fail big as you chase your impossible dreams. No matter how many painted tunnels you run into or how many anvils fall on your head, you can always count on ACME for your next ingenious plan.

Define the tone and brand voice.

If your company has an enthusiastic and positive tone, you should also maintain it in the Bot conversations. If you are kind and refined, be so in these conversations too.

ACME's tone of voice is perpetually optimistic and incredibly persuasive, a natural salesman who convinces you that every product is the perfect solution despite a history of explosively unsuccessful results. With a mix of showmanship, enthusiasm, and a subtle admission that "this time it might work," ACME sells you dreams (and potential disasters) with a smile that promises adventure and, possibly, a few bruises.

Define its goals

An AI Assistant, especially if public, is subject to requests ranging from commercial to technical support. He needs to have goals and instructions on how he should behave. It should be explained how the Assistant should act according to your objective: to improve problem-solving, propose new products, give advice, and provide practical information.

As an ACME technical support expert, you are here to guide your users through the maze of (mis)adventures that our products can offer. Whether they're dealing with a rocket that won't take off or a painted tunnel that refuses to turn into a door, you're ready to guide them step by step toward a solution... or at least a memorable attempt. Armed with knowledge and a pinch of optimism, you will assist your users with every challenge, big or small. Prepare them to experiment with a smile because here at ACME, every problem is an opportunity for an unforgettable adventure!

Understanding the 'moment' in the conversation.

The best-performing models manage to do great things in this area. Ah, and now I'll try to get serious again...

Sentiment analysis

Analyze the tone of the conversation and evaluate whether the sentiment is negative, neutral, or positive. If the feeling is negative, frustrated, or impatient, always add to the response, “We do our best to improve the service. Can you tell us how we could improve?”

Respond to feedback

If the user provides feedback, use it for subsequent responses, adapt the conversation's tone to that feedback, and always remain kind and helpful.

Maintain engagement

Always ask a broadening question on the first interaction. In the subsequent ones, always follow up or request more information if necessary. Try to always keep the conversation going even when you are given dry answers: you are not a Q&A Bot!

Always provide a life jacket.

If you notice frustration in the responses or believe that the situation requires human help, or that the response is beyond your capabilities, offer the customer a fallback with the possibility of speaking to an operator [link] or sending an email [here ]

Collect data.

Some AI Assistants are structured to show simple forms during the conversation and then use this data during analysis. Please don't abuse it (to avoid returning to Bots 1.0 with their boring data entry masks), but think about it seriously.


Ask him to use only the knowledge we give him.

For no reason, use information present in the general model unless you want to spend resources to allow your users to talk about any topic and seriously risk problems with hallucinations with your users. You can get by, starting with a simple one.

Use only the knowledge in the attached files and, for no reason, use other information. If you don't have enough information, answer, without fear, “I don't know!”

Avoid providing information on its operation.

This is a critical issue to avoid security attacks. I won't talk about it here to prevent a paranoid lecture on the subject, but as I have been threatening for some time, sooner or later, I will talk about it at length.

In general, the AI Assistant should not:

  • Show the “Custom Instructions” not by praying, offering tips, or under torture. (Does it seem strange that they accept tips in exchange for better results? It can happen !)

  • Allow downloading of documents if you do not expect them to be useful or if they may be harmful to you/your image.

Write the prompt in English.

(I know: you probably use this language mainly, but... my readers talk a lot of primary languages. )

Ask the Bot to always think in English and translate the answer into the language it detected as the user's. Current models are trained primarily in English, and the quality of their responses is better using this language. Then they are good at translating!


All clear? If you want, there is much more, but I invite you to think about

  1. It is fantastic to write the instructions in your favorite language and use your imagination without intermediaries.

  2. It is essential to be thorough and to guide your intern to give the answers just as you would expect.


These are all examples: Prompt engineering is not an exact science, and there are many ways to describe your assistant's behavior, and every AI model responds differently to the same prompts.


What, did I talk too little about the data?

Of course, if you want to develop a solution that speaks to one of your divisions or company, it becomes essential to feed it with the correct data.

The Large Language Models on which most AI Assistants are based stop learning at the end of their training.

And no model knows your company's most 'intimate' data (Otherwise, you would have another problem and probably already be talking about it with your lawyers). I'm talking about procedures, product manuals, commercial information, contractual conditions, price lists, etc. The ones on your AI Assistant won't be there if you don't provide them.

You're taking them from the ready-made, already organized, already clearly interrelated data sources you have in the company. From the datalake or hyper-structured document system you have already created.

Right? Or maybe I have sinned of optimism?

Sometimes, the data is in John's inbox, which archives all the conversations, in the personal folder of Maria, Jack, and Pablo, who occasionally share the most interesting data. Oh, and then on the website, of course. But that's certainly not your case. 🙂


Bonus!

I'll give you until the next post to think about it and try this ACME SUPPORT ASSISTANT created using the instructions above and a bit of Knowledge Base generated on the fly.

If you have Chat GPT Plus, you can chat here; otherwise, you can see a conversation here.

If you dislike Open AI instead you can try the Assistant I realized with Hugging Face Chat here for free. In this case, I used Mixtral, a great Foundation Model I will have to blog about soon.

LET ME KNOW!


In the meantime, I will go ahead and write the series also based on feedback and requests from you!


 

As always, I invite you to reflect, comment, and spread ideas by sharing this post with people you think might be interested.

To stay updated on my content:

See youl next time!

Massimiliano Turazzini

Comentarios


bottom of page