
LLaMA has arrived in AI!



Today I'd like to begin my reflections with a spy story, or a hacking story, or a story whose origins are still unclear but whose effects are explosive. I know it's a bit dated by now, having dragged on for three months, which in these super-fast times feels like three years, but its consequences will echo for a long time.


Prologue

In this post we get a bit technical. But it won't hurt.


The prevailing sentiment about AI in early 2023 went something like this:

  • OpenAI: "Beautiful. But here's another company that wants to become Big Brother."

  • Microsoft: "It didn't succeed with Cortana, so it takes the easy win and buys the big toy (OpenAI)."

  • Google: "We'll see when Google Bard arrives; everyone knows it will only take them a moment to win back the market!"

  • Meta: "Meta? What are they even doing?"

  • And the rest? "There are a few nerdy projects we'll keep an eye on, but... this time it's Google and Microsoft who will play the game."


Instead...


Instead, in very rapid succession, as seems to be fashionable these days, a few interesting things happened.



1

In their urgency to launch their own AI, Meta released the fruit of their effort and investment, LLaMA (Large Language Model Meta AI), on February 24.

A little timidly, I would say: in a post on their blog they announced a Transformer model (yes, like GPT; both rely on a Google invention) but, not being ready to launch it to the general public, they would make it available under a non-commercial research license (forgive me, insiders, for the simplification) to selected researchers, because:


We believe that the entire AI community — academic researchers, civil society, policymakers, and industry — must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular. We look forward to seeing what the community can learn — and eventually build — using LLaMA.

Meta slowly started collecting registrations from those interested in being the first to test this wonder (with the not negligible goal of understanding how and why it works), which, they promised, outperforms GPT-3 on most benchmarks.


2

On March 3, 2023, as The Verge reported, the entire model was 'cracked', i.e. downloaded and made available online via a torrent posted by a user of 4chan, a 'BBS'-style site hosting a lot of questionable content.

Less than a week after publication...


3

Not so bad, one might think: taking only the raw model, without its fine-tuning instructions, without RLHF (Reinforcement Learning from Human Feedback, i.e. the adjustments made to a model based on rounds of human feedback), is like hauling home ten big tree trunks and planning to build all the furniture in the house without a minimum of carpentry experience. And Meta rightly began sending warnings to anyone who used it because, even for public use, the licensing terms it had envisioned were decidedly different.

And Meta has probably played this well: think of how much OpenAI has emphasized the importance of running its model on mega-servers with highly advanced and very expensive High Performance Computing (HPC) boards, costing tens of thousands of euros each. OpenAI, it is estimated, uses at least 30,000 of them.

An important point, to keep in mind for later, is that Meta never denied the theft and maintained its policy of openness about the possibility of using the model, but on its own terms. Anyone using it otherwise would be considered 'off-licence'.


4

We humans are wonderful at doing precisely the things that are forbidden to us. Imagine thousands of researchers and developers who have worked on similar models for years, able for the first time to get their hands on a model built by a group like Meta, with capabilities in the league of GPT.


Tens of thousands of people mobilized. And don't think of the dark web: universities were the first. Within a month, the web filled with instructions on how to run the model and how to fine-tune it (using AI itself), quality improvements, and a few thousand new versions, including Alpaca and Vicuna.
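How can a fine-tune be cheap enough for a university lab? Much of this community work relies on techniques like LoRA, which freezes the original weights and trains only small low-rank adapter matrices on top. A minimal sketch using the Hugging Face peft library (the checkpoint name and hyperparameters are illustrative, not the actual Alpaca or Vicuna recipe):

```python
# Minimal LoRA sketch: freeze the base model, train only small adapters.
# The checkpoint name and hyperparameters below are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # any LLaMA-compatible checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

lora = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only those few adapter weights receive gradients, a single consumer GPU and a few hours of training are enough, which is exactly what made this flood of variants possible.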


Then someone started running the biggest model on the 'few' GPUs available in university environments, and others published the story of how a good programmer, a powerful enough computer (a MacBook Pro M2) and a free evening were enough to have one's own personal ChatGPT.
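That "free evening" story typically revolves around llama.cpp, which runs a compressed copy of the model entirely on an ordinary CPU. A minimal sketch with its llama-cpp-python bindings (the model file name is illustrative; any quantized LLaMA-family checkpoint will do):

```python
# Minimal local-inference sketch using llama-cpp-python.
# The model file name is illustrative, not a specific release.
from llama_cpp import Llama

llm = Llama(model_path="./llama-7b-q4.gguf", n_ctx=2048)

out = llm(
    "Q: In one sentence, what is a large language model?\nA:",
    max_tokens=64,
    stop=["Q:"],  # stop before the model invents the next question
)
print(out["choices"][0]["text"].strip())
```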

Have you ever imagined programmers being happy to work with a llama?


Hundreds of examples of how to run these fully trained AIs on a mini-PC or even a smartphone are now available online, ready to behave however we decide and to ingest whatever content we choose.
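The enabling trick is quantization: storing each weight in 4 bits instead of 16 shrinks the model roughly fourfold. A quick back-of-the-envelope calculation shows why that is the difference between a data-center GPU and a phone:

```python
# Rough memory footprint of a 7-billion-parameter model at different precisions.
params = 7e9
for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    gib = params * bits / 8 / 2**30
    print(f"{name}: ~{gib:.1f} GiB")
# fp16: ~13.0 GiB (needs a serious GPU); int4: ~3.3 GiB (fits in laptop or phone RAM)
```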


No more ethical or moral filters, no more bias control, an end to worrying about discriminatory reasoning: free synthetic thinking at last!



5


Leaks, thefts... but our story does not end here.


In April, on an unspecified day, an internal Google document ended up on Discord and was then picked up by the SemiAnalysis blog. It leaked the opinion of a Google researcher who had written a few interesting things to his colleagues.

Basically, according to this anonymous author, neither Google nor OpenAI will be able to win the AI race, because the open-source community is overtaking both of them, having already solved problems that for the big players are still completely open and unsolved.

According to him, open-source models are faster, more customizable, confidential by design and more capable, in terms of value for money, than those of the two competitors.

In the long run, he argues, the best models will be those that can be iterated on quickly, meaning those that can be re-trained without biblical costs and timescales. (Have you ever wondered why GPT's knowledge stops at September 2021?)

Also consider that these models are gigantic black boxes, and connect this to the controversy over the copyrighted content they were trained on. If a copyright holder were to win a lawsuit requiring an LLM (Large Language Model) to remove their content, its maker would be forced, barring techniques not yet on the market, to redo the training of the model from scratch, at unthinkable cost.


Many smaller models, on the other hand, would be far more manageable and could each be re-trained for a few hundred dollars.


Do you understand?

One of the questions I asked myself most often, and discussed with insiders, during 2022 was:


“Can you imagine the degree of responsibility a company like OpenAI has? How powerful has Sam Altman, its CEO, become thanks to ChatGPT?


What responsibility do AI companies bear when their systems can shape the way we reason, decide what it is politically correct for an AI to say, propose ways of learning, give their own version of history while always being able to hide behind the excuse of hallucinations (do you know what those are? If you want me to write about them, say so in the comments please), make us settle for their answers and their framing of issues, and manipulate any content without our knowledge, restraining themselves only for ethical reasons?”


Well, today each of us could technically hold some of these powers: we could shape our own AI without moral or ethical filters, make it available to everyone and... ah, no! The license forbids it!


So the world can be divided into three factions:


1) those who will ignore this possibility;

2) those who will try to seize it honestly, respecting the rules;

3) those who will simply take advantage of it.


Let's talk about the second group for now: using LLaMA, or one of its variants with Andean camelid names (sooner or later I'll have to write a post on the naming imagination of computer scientists when they are not driven by marketing), on one's own technological infrastructure would mean having private access to an AI that can be trained in little time and with little money (a few hundred dollars, as we saw above).



Useless?


Two simple use cases to give you some ideas.


Personal side:

Feed it all of our lifelong folders, all our emails, chat backups, digital notes, internet browsing history and bookmarks, the music we listen to, voice recordings, photos and videos (multimodal versions, able to work on audio and images as well, are already online), and anything else digital we can imagine. Let's exclude passwords, but anything that is digitally 'ours' could be given to a private AI model, which at that point would know far more about us than we know ourselves.
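In practice, 'giving' your files to a private model usually doesn't mean retraining it; it means retrieval: indexing your documents as embeddings and pasting the most relevant ones into the prompt at question time. A minimal sketch, assuming the sentence-transformers library (the documents are invented, and the retrieved snippets would go to whatever local model you run):

```python
# Minimal retrieval sketch over personal documents (documents invented).
# The returned snippets would be pasted into your private model's prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "2021-03-14 email: dentist appointment moved to Friday at 15:00.",
    "Note to self: the router admin page is at 192.168.1.1.",
    "2022 diary: started guitar lessons in June.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, top_k: int = 2) -> str:
    """Return the snippets most relevant to the question."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    best = np.argsort(doc_vecs @ q)[::-1][:top_k]  # cosine similarity (vectors normalized)
    return "\n".join(docs[i] for i in best)

print(retrieve("When is my dentist appointment?"))
```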


What could we do with it?


Much or little, that's up to you and a bit of imagination, or you can ask ChatGPT directly for a list. Here are some ideas, shared via the new 'Share Conversation' feature: I simply created a prompt containing the last 2-3 paragraphs of this post and let GPT continue.


Corporate side

Here a universe opens up where, without any inconvenient borderline activity, the next industrial revolution can be unleashed. I will discuss this in more detail in a dedicated post. But consider this scenario: a company that links this AI to ALL of its data: emails, documents, management databases, CRM, personnel data, and so on.


If it also took care to compartmentalize access to that data, an entity would emerge within the company representing all the knowledge the company has accumulated in its history, at least as far as digital information is concerned.
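Compartmentalizing access could ride on the same retrieval idea sketched earlier: tag every document with the roles allowed to read it, and filter before anything reaches the model. A minimal sketch (the roles and documents are invented):

```python
# Minimal sketch of access-controlled retrieval: the model can only ever
# see documents the asking user's role is allowed to read.
# Roles and documents are invented for illustration.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    allowed_roles: set

corpus = [
    Doc("Q3 sales figures by region...", {"sales", "board"}),
    Doc("Engineering salary bands...", {"hr", "board"}),
    Doc("Public press-release draft...", {"sales", "hr", "board"}),
]

def visible_corpus(role: str) -> list:
    """Only documents this role may read become candidates for retrieval."""
    return [d.text for d in corpus if role in d.allowed_roles]

print(visible_corpus("hr"))  # the model never sees the sales figures
```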


Here are some impacts, just to stimulate the less imaginative, without going straight to the technical details.

  1. 🔮 Many colleagues recognized within the company as its 'historical memory' would suddenly become obsolete: everyone would just ask the camelid.

  2. 💼 There would be no more excuses to claim 'I didn't know': just ask the AI with the Andean camelid name for any information accessible under your security profile.

  3. 🔁 Taking responsibility, a practice already disappearing in many companies, would cease altogether, and the phrase most used by certain people, 'HE said it' (referring to the supreme boss), would remain unchanged: it would simply now refer to the AI.


So...

Playing with the llamas, MidJourney and Firefly really amused me in this article. Forgive me.

In this exciting semester in which generative AI has become the trending topic, what is being said is:

  1. That the road for OpenAI/Microsoft and Google to conquer dominance over AI is still all uphill.

  2. That Meta is now in the delicate position of figuring out what to do with its licensing model. And it seems that whatever choice it makes will be wrong.

  3. That the open-source community is ahead, has more powerful tools, and reproduces a lot, very quickly: on huggingface.co you can already find over 1,800 camelid-variant models. Will the best product manage to establish itself as the standard this time?

  4. Will companies now approaching AI really want to ignore this new opportunity?

I am curious to understand your opinion!


OOPS...


...I hadn't even finished reviewing this article and something had already changed, again.


There is talk of 'Falcons' on the horizon, which have swooped into the LLM community, solving some of the llamas' problems and improving on them further.

An update soon; first, I have to study...









