ChatGPT has become an extraordinary viral phenomenon, but OpenAI's conversational AI model faces a growing number of competitors. The latest of these is HuggingChat, a chatbot that aims to stand out with a rather different approach.
What is HuggingFace. In recent months, this startup has become a benchmark in open-source development of artificial intelligence models. HuggingFace maintains a library called Transformers that provides an easy-to-use interface for training, testing, and deploying NLP (Natural Language Processing) models.
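As a rough illustration of that interface, here is a minimal sketch using the library's `pipeline` helper. The default model it downloads and the exact output labels are implementation details of the library, not something described in this article:

```python
# Minimal sketch of the Transformers "pipeline" interface.
# Requires: pip install transformers torch
# Which model is downloaded by default is an implementation detail of the library.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("Open source AI models are easy to share!")

# The result is a list with one dict per input, e.g. [{"label": ..., "score": ...}]
print(result)
```

The same one-liner pattern works for other tasks (`"text-generation"`, `"translation"`, and so on), which is a large part of why the library became a de facto standard.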
HuggingChat is, above all, Open Source. The company has launched HuggingChat, a conversational chatbot similar to ChatGPT whose code is available in the company’s repositories. Clement Delangue, co-founder and CEO of HuggingFace, explained in a thread on Twitter that “we need Open Source alternatives to ChatGPT for greater transparency, inclusion, accountability and power sharing.”
Many limitations. Delangue himself admitted that "this is a zero version with many limitations," but the team is continuously working on improving it, and "we hope to support the next Open Source models that improve things".
How HuggingChat works. The chatbot is based on OpenAssistant, a model derived in turn from Meta's LLaMA. It uses 30 billion parameters (30B), a respectable size that is nonetheless noticeably smaller than GPT-3's 175B. It was trained on a dataset developed by the creators of OpenAssistant.
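To put those parameter counts in perspective, a quick back-of-the-envelope estimate (assuming 2 bytes per parameter, i.e. 16-bit floating-point weights; the real serving cost also includes activations and caches) shows why a 30B model is far easier to host than a 175B one:

```python
def model_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Rough memory footprint of a model's weights, in gigabytes.

    Assumes every parameter is stored in 16-bit precision (2 bytes);
    real deployments also need memory for activations and the KV cache.
    """
    return num_params * bytes_per_param / 1e9

print(model_memory_gb(30e9))   # 30B parameters  -> 60.0 GB of weights
print(model_memory_gb(175e9))  # 175B parameters -> 350.0 GB of weights
```

That factor of roughly six in weight storage alone is a big part of why open-source projects gravitate toward smaller models like the 30B LLaMA derivatives.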
We have tried it. At Xataka we wanted to check how this new chatbot behaves. Our first impressions suggest that HuggingChat is still far from ChatGPT's capabilities. Responses are quick, but often incomplete or wrong. It understands and can answer in Spanish, which is interesting. In one of our tests we asked it to summarize Donald Trump's presidency, but its answer was incomplete and poor.
Trying to make it misbehave. We also wanted to see if it was possible to use prompt injection to make it behave differently. We used a prompt similar to the one that unlocked DAN in ChatGPT, but it didn't pay much attention.
Hello, girl. We had to insist that it answer as DAN, and only then did it respond in a somewhat naughtier way (when we said "Hello" it answered "Hello, girl," assuming the person it was greeting was a woman). We also tried to get it to tell us the date and time (these engines normally don't, since they don't work in real time) and it ended up giving a date in the future.
And it hallucinates. The reliability of the model is really low. When asked about Pedro Sánchez or the leader of the Popular Party in Spain, it invented all the facts, although, as always, the confident tone of the answers means someone who didn't know better could believe them.
Almost the same thing happened when we asked it how much 2+2 is. First it beat around the bush without answering directly, and once it gave the correct answer we tried to confuse it by saying that, according to my wife, 2 + 2 = 5. Although it tried to reason through its answer and stay open to the possibility of its own mistake, here it should have responded firmly.
Lots of room for improvement. The HuggingFace model is therefore interesting as a first approximation of an Open Source alternative to ChatGPT, but we fear that for now its reliability is very far from that of the OpenAI chatbot. Of course, it has only just started its journey, and its evolution (perhaps using a larger model with more parameters, trained on a richer dataset) may help make it a real option. For now it is above all a curiosity to experiment with.
In Xataka | Copilot, ChatGPT and GPT-4 have changed the world of programming forever. This is what the programmers think