Some thoughts on AI

AI is the talk of the town these days. Though the hype around OpenAI’s ChatGPT has calmed a bit, there is still a general expectation that AI will revolutionize society, take jobs, and solve many of our challenges. I must admit that even I was caught up in the fervor: I felt that ChatGPT and large language models were a big step towards general artificial intelligence and the automation of many of our mundane tasks, as well as potential tools for scientific breakthroughs. After a couple of years, my enthusiasm is more tempered.

AI is a catch-all term for many different techniques and approaches to building a machine that can reason in some form or another. While current Large Language Models (LLMs) provide a new and novel way of interacting with extremely large bodies of written material, they remain a specific set of techniques and mathematical tools. My conclusion today (August 7, 2024) is that LLMs are helpful, but also fundamentally limited. My feeling is that LLMs are basically a new and more advanced form of search. They allow you to interact with very large sets of information and receive responses that have been digested and reformulated in a way that sounds human. But though the responses sound human, they are just reformulations of information already contained within the large dataset. LLMs have not clearly shown an ability to reason or to provide material that does not already exist. Yes, LLMs take bits of information at multiple layers and stick them together to create something that appears new and novel, but it is unclear whether it is indeed anything new and novel.

To be completely fair, a large proportion of human work in business, administration and even the arts consists of taking various bits and pieces of information, various sources of inspiration, and creating derivative works that seem new. Examples of this in the music industry and in writing are numerous and frequent. In that sense, AI and LLMs will have a large role to play, as they can do tasks and work that humans are doing today. However, just because LLMs can do this type of rearrangement of information does not mean they can reason outside the dataset itself.

Chess is a game with a clear winner and loser and a rating scale that indicates the strength of players, ranging from about 400 Elo for a beginner to around 2800 Elo for the very best players in the world. A fascinating study showed how an AI trained on lower-level chess games (1000 Elo) can play chess at a higher level than its training set (1500 Elo). On the surface this may indicate that an AI can move above its training set, but upon closer examination this is not correct. Players rated around 1000 Elo often make individual moves worthy of a 1500 Elo player; they simply do not play at that level consistently. The AI is basically able to grab the best moves from these low-level players and recompose the data to play consistently at 1500. Interesting, but not true intelligence. When the AI is trained on a dataset of 1500 Elo games, where the stronger players are more consistent in their play, the AI cannot really exceed the 1500 Elo level.
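The intuition behind this aggregation effect can be sketched with a toy simulation. This is not the method used in the study, just an illustration under invented assumptions: suppose a weak player finds the objectively best move only 40% of the time, choosing randomly otherwise, and a model trained on many weak games effectively picks the move that weak players most often agree on.

```python
import random
from collections import Counter

random.seed(42)

N_MOVES = 10   # candidate moves in a position (invented for illustration)
P_BEST = 0.4   # chance a weak player finds the best move (invented)

def weak_player_move():
    """A 1000-Elo-style player: sometimes brilliant, often not."""
    if random.random() < P_BEST:
        return 0  # move 0 is defined here as the objectively best move
    return random.randint(1, N_MOVES - 1)

def aggregated_move(n_players=101):
    """Mimic a model trained on many weak games: pick the modal move."""
    votes = Counter(weak_player_move() for _ in range(n_players))
    return votes.most_common(1)[0][0]

trials = 2000
individual_acc = sum(weak_player_move() == 0 for _ in range(trials)) / trials
aggregate_acc = sum(aggregated_move() == 0 for _ in range(trials)) / trials

print(f"single weak player finds best move: {individual_acc:.0%}")
print(f"aggregated policy finds best move:  {aggregate_acc:.0%}")
```

The aggregate nearly always finds the best move even though no individual player does, which is why the model looks stronger than its training data without reasoning beyond it.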

Beyond the technical limitations of LLMs and AI, there are a number of social considerations to think about when talking about AI and its potential adoption. AI is creating interesting artwork, but does it matter? Humans ultimately want a few basic things: they want community, they want to create something, and they want to be recognized by their peers. AI has a role to play in society and can in many cases act in helpful ways – everything from online psychologists, tutors and personal assistants – but I am skeptical it can replace our core human needs.

Take the example of art. Humans love to create things; we see this from a young age in drawings and paintings, all the way to full-grown adults who have attained enough material wealth to retire but continue to work and create out of pleasure, egotistical need, or both. AI can create infinite quantities of art, film, video games and more, but is it worth anything? If we look specifically at the art world for clues, it is clear that humans place value on authentic artwork that has social value. We are willing to spend millions of dollars on a piece by a famous artist that is unique and recognized by society as valuable and a status symbol. Instead of buying the expensive artwork, you could buy a reproduction that is completely indistinguishable from the original, with the exception of its known authenticity. Even setting aside million-dollar works, many people will not pay for non-unique works of art because they recognize that something available in near-free, infinite quantity has no value. This is broadly true of AI-generated work: it is infinitely available and therefore of little value.

What is true of the value of AI art is also likely true of much of the material AI can and will produce. There is no point in generating mountains of AI marketing material, white papers or other items if there is a set limit on consumption. In the past, the cost of creating mediocre marketing material was the cost of labour. Today, anyone can create unlimited marketing material at next to no cost. Therefore, the quality of marketing material will come to matter much more than the quantity. There are only so many hours in the day, and there is now far more material than we can ever consume. Humans will turn to higher-quality material produced by reputable brands. More than ever, brand matters. It is critical not to dilute a brand through AI shortcuts, or else you end up killing the golden goose. Having actual expertise and being able to effectively communicate it to other humans through services and products will become the defining element of success. This is not a fundamental change – expertise has always been highly valued – but the landscape in which we operate has changed, with more noise that experts will need to wade through to have their message heard.

At my company Nimonik, we have built a Regulatory and Industry Standards AI Chatbot that allows you to interact with laws, regulations, standards and other compliance information. We built it with a team of two people, open source technology, and infrastructure from Amazon AWS and a few other sources. It is an in-house product and does not rely on commercial tools like OpenAI’s ChatGPT. Tests of our chatbot over the last few months with industry experts show that it is quite good and helpful, though not perfect. The point of the story is that it does not take that many resources to build a robust, genuinely helpful chatbot. As many AI investors are currently finding out, there are minimal ‘moats’ around these businesses, and their ability to stay afloat will come under tremendous pressure from competition and self-built tools.
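The core pattern behind such a chatbot – retrieve the most relevant documents, then hand them to a language model as context – is simple enough to sketch in a few lines. This is a generic illustration, not Nimonik’s actual architecture; the document names and snippets below are invented, and the retrieval here is a bare-bones TF-IDF scorer standing in for a real search index.

```python
import math
import re
from collections import Counter

# Toy corpus standing in for regulatory texts (snippets are invented).
DOCS = {
    "osha-1910.147": "control of hazardous energy lockout tagout procedures",
    "epa-cwa-402":   "national pollutant discharge elimination system permits",
    "iso-45001":     "occupational health and safety management system requirements",
}

def tokenize(text):
    return re.findall(r"[a-z0-9.]+", text.lower())

def tf_idf_vectors(docs):
    """Build simple TF-IDF weight vectors for each document."""
    df = Counter()
    for text in docs.values():
        df.update(set(tokenize(text)))
    n = len(docs)
    return {
        doc_id: {t: c * math.log(n / df[t])
                 for t, c in Counter(tokenize(text)).items()}
        for doc_id, text in docs.items()
    }

def retrieve(query, vectors):
    """Return the document whose terms best match the query."""
    q = set(tokenize(query))
    return max(vectors, key=lambda d: sum(
        w for t, w in vectors[d].items() if t in q))

vectors = tf_idf_vectors(DOCS)
hit = retrieve("what are the lockout tagout requirements", vectors)
print(hit)  # the retrieved passage would then be passed to an LLM as context
```

A production system would use a proper vector database and embeddings rather than raw TF-IDF, but the point stands: the retrieval-plus-generation skeleton is mostly open technology, which is why the moat is thin.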

One task AI is genuinely good at is interpolating within known data sets. This was done for Google’s project around the game of Go, and then for the work to map the structure of proteins with AlphaFold. The AI models and software they built have accomplished work that would have taken humans decades. This is very helpful. It is not, however, quite as revolutionary as we might think. Google has basically built a giant calculator for protein structures. It is an important milestone that will accelerate medical research and discovery, and we should celebrate this accomplishment, but the tool remains a human-driven tool.

Last, but not least, I recently had the chance to test Tesla’s famous Full Self Driving (FSD). I have owned a Tesla for 3 years and love it, but we did not purchase Full Self Driving due to its cost and our rather infrequent use of the car. Tesla offered us a 1-month trial of this technology, which Elon Musk keeps claiming is near perfect and able to drive us anywhere with minimal human intervention. I was not impressed. On the highway the car did fine, but as soon as we were in town or faced a slightly abnormal configuration, the FSD stumbled. This is despite the fact that there are a ton of Teslas in Québec (maybe about 300,000), so Tesla should have a very good dataset of our roads and infrastructure. From everything I saw, Tesla’s FSD is not capable of extrapolating outside of its knowledge base and really struggles with surprises. This is a fundamental problem that no one seems to have solved. The only true self-driving experience is Waymo, and they have solved the problem not through complex AI, but through good old-fashioned, super-detailed mapping of roads and infrastructure. It seems that even Elon has not outsmarted a more primitive and basic approach to self-driving.

AI remains interesting and promising, but my main conclusion is that we are still far off from general artificial intelligence, and it remains debatable whether it is even possible. For now, we continue to make substantial progress in computer science, mathematics and computational power. This has given us ChatGPT and other helpful products. We have stuck a label of AI on these tools and companies, but in reality they seem to be very much a continuation of the previous work that created all the technology and software we use today. Progress will continue to be made in an incremental and iterative way, the way it has always been done. Do not hope for miracles, but do expect more progress on many fronts.

Published on August 7, 2024