The Dark Side of LLMs

Diego Pacheco
11 min readJul 27, 2024

--

AI is the biggest hype right now. It is not as new as people think, starting in the 1950s. Big advances happened recently since 2017 with the transformers architecture, which is the heart of all generative AI. There is a great potential for big disruption because of AI. We are seeing great improvements but we are not even close to AGI. AI can be practical and real, add value to the business and improve our lives. AI is narrow at the moment and has many challenges. Pearhaps one of the industries that has the potential to be highly disrupted by AI is actually the technology industries and engineers. Large Language Models (LLM) can do amazing things. From generating text, generating creative images and videos(with lots of problems), and even generating code. It’s absolutely normal to be concerned, but the more you understand what’s actually going on, the less you need to be worried. If you are an expert you will be fine. We saw a great leap and boost of evolution, it does not mean we will keep s seeing such growth year after year. That does not mean we can completely ignore AI and pretend is not happening. But there is a big difference between learning AI vs AI will take all our jobs in two years. Just before I start talking about the problems and the challenges, let me be clear, I think AI can be good, and we can use it to build better products and tools; however, it is not a panacea, and it’s not the solution for all problems.

Hype

The thing about hype is that often, value comes later, after lots of inflated expectations. During the cycle, a lot of crazy things happen. That’s where usually things go bad. I really hope we can pass this cycle of hype as soon as possible so we can actually be productive and make some good use of AI.

People believe in different things. Some people think code is bad, and the best thing is to just buy all the products of the universe, and staying away from code is the best thing a company can do. I personally believe in the opposite, that code actually is code, and you want to have code. One bad thing about the AI hype is that people think engineering will be dead and no one will be coding in two years, which is a very wrong idea, and I really doubt it will happen that fast. To some degree is almost like the AI hype creates a Fear Understancy Doubt (FUD) that:

  • You don’t need engineers, everything will be completely automated
  • Stop learning engineering just focus on AI
  • No One will be coding, so don’t build anything

The last one, for sure, could be the worst. Because like I said before, if the culture you have is anti-coding and anti-building, this is for sure gasoline into the fire.

More focus on bad products; Code is not the problem.

Now, I would like to say that not all products are good. People romanticize products and think they are perfect and can fix all the problems of the universe. But the realization is that even when you buy a product, you always need to do some sort of integration; however, because you don’t have the code, that can be quite hard and lead to poor user experience. We need to avoid the trap that AI will boost products in a way that we don’t need to do anything and just take a vacation and let the robots work for us; we are not quite there.

AI is not magic and cannot fix all the problems companies have, especially because some of these problems are quite unique. Also, think about this: Who is best at being creative, AI or humans? Who do you want in control: AI or Humans? So I believe it depends. For sure, humans are more creative than others. AI, for sure, is much better with numbers, and as humans, we can make mistakes, but hey, AI can make mistakes too, actually a lot of them.

Code is not the bottleneck

I need to say this, code is not the bottleneck. Typing is pretty easy; people can type pretty fast nowadays. A lot of people can type much faster than chatGPT can generate answers. The issue is understanding and figuring out what to do. Remember, when you use LLMs, you need to tell the LLM what to do. What if you are not doing a good job telling the LLM what you want? Now think about this, we have been doing engineering for more than 50+ years, and we still have issues discovering requirements, understanding the users, and figuring out what works and what does not work. So the limitations are still the same, don’t think you will say half of a word and AI will be able to generate 100% exactly on what you are thinking.

Discovering is the bottleneck; discovery is a journey. Discovery is about back and forth, about experiments, it’s about hard work, tons of interactions and thinking, experiments, mistakes, and bets. It’s not an answer that is already done or a problem that is already solved, and we just need the LLM to do it. IF that were the case, engineering would be dead a long time ago. If you actually pay attention to what’s going on, AI is working more as a search engine and auto-complete than Skynet and the revolution of the machines.

The reality is that people, especially management, were always obsessed with engineering productivity. I understand engineers are expensive; thanks to AI now, engineers are not the most expensive thing anymore :-). But seriously, IF you are obsessed with productivity, you probably take the AI hype the wrong way. Again, the question is not doing it fast, do it right, do it in a way that fixes the problems of the users and generates revenue for the company.

Fooled by AI

We also see all sorts of scams, like Rabbit, and people tricking LLMs, like the guy who bought a car for 1 USD. It does not stop there; Amazon is also dropping Go stores because it has an army of people in India watching videos and doing most of the work manually. Where there is hype, there is investment, and where there is money, there are scams. Let’s not forget Gemini telling a kid that C++ is too dangerous for him. LLMs are misleading customers, and the list goes on and on, and I'm pretty sure we will see plenty more. Let’s not forget the FAKE DEMOS like Devin and Sora. Hype is a way to sell and is an effective marketing tool. Remeber reality and hype are different things.

AI Tools Gold Rush

To some degree AI is like the internet early days. There is a huge gold rush, companies are trying to get into this wave and surf it. Not all products are good, and not all companies are serious. When the gold rush happens, companies have the advantage of selling mining tools. AI, we are seeing a crazy amount of tooling popping up. Some are good and useful, but not all.

I actually think copilots are cool and useful. There are lots of copilots out there right now, to quote a few: Github Copilot, Supermaven, AmazonQ, and many others. IMHO, Copilots are here to stay. Github copilot is good, however slow. There are security implications with copilots, but with enough care and due diligence, we can definitely use them safely.

Hallucinations

One thing LLM do is to hallucinate. They will provide an answer that will look right on the surface but might be completely wrong. Countless times I asked LLMs to generate Zig code, and I got Rust or C++ code back. I have seen many times copilots generating code that does not compile, is full of bugs, or is just wrong. So this are much better auto-complete tools that we used to have in our editors and IDEs but like I said cannot just get it right all times. IMHO, they are getting it right a lot and improving every day, but they are far from being perfect. For instance: AI Legal Research products hallucinate 17–33% of the time.

Insecure code

LLMs also generate code that is not secure, that has vulnerabilities. So you can't trust 100% on the code that is being generated. For a senior is completely fine because a senior engineer can make sense of things and know what todo or even ignore whats wrong and improve it. However, if you think about junior engineers, this can be very tricky because they are at the beginning of their careers and might not know the difference between right and wrong. So you can’t just give a copilot to them and not look to the code.

Copy Paste

For me, one of the worst things is that LLMs are not really well integrated into IDEs. And you need to copy and past most of the code, sure you can use auto complete. But usually is not as good and the chat. Now the problem with copy and past is that usually, people dont think. For decades I’m fighting the copy and paste culture, where engineers need to understand what they are doing. They can’t just copy and paste and not understand. Because this will create a bad code base full of anti-patterns and technicall debt.

IF you using a copilot to speed up things, great. IF you you are not understand the code that is being generated that is a recipe for failure. We need to be understanding at all times.

Less Refactoring, more anti-patterns, faster

Here is my biggest fear. IF you are in a culture of deliver not matter the consequences. AI can again put a lot of gasoline on the fire. Because if people are not paying attention to details, we will be putting poison on the system just much faster. Meaning that we can introduce anti-patterns and technical debts very, very fast. Don’t want to take my word? check out this research.

Slow DevEx

Here is something for us to think about. We code faster with LLMs and Copilots. But then we go to prod and if things dont work we will have more errors (faster). If we have more bugs and we need more time doing troubleshooting, are we going faster or slower? Perhaps the real problem is how we measure people and again obsession with productivity is not good. Don’t get me wrong, is always good to be able to deliver more, and there is not wrong with that. But if we want to speed up, we need to speed up learning and understanding first. Otherwise I wonder if we are not just making all slower. For two reasons, for the one I mentioned, the second is because we are waiting for LLM to answer :-)

LLM Architecture

Let’s address big problems now.

Training Cost

Training cost is huge. It cost millions of dollars and months. Not all companies will be able to run pre-training for LLMs. Because is very intensive process, cost a lot of money and takes time. Big tech companies are doing such training like 1–2x per year.

Data

This is a big problem. Some data is hard to have in high volumes. Synthetical data generation can help but is limited to what we know, if there is a pattern we dont know it can't help. Usually big tech companies use wikipedia and other big corpora of data also combined with books and papers but code is also being used to train LLMs. Github has a big advantage in that sense for ms copilot. However data is a important and limiting factor. We are running out of data.

Data has another problem, a lot of data out there is problematic. LLM needs data to be trained. For isntance: 51.24% of the samples from the 112,000 C programs contain vulnerabilities. Now thing about this, you will pre-train or fine-tune a model in your company, you will feed with you code. IF you code is well written great, but what if, you code is poorly written, full of anti-patterns and technical debt, what do you think you will be “teaching” the LLM model? The LLM model will replicate the anti-patterns because LLMs cannot really reason.

The problem with Fine Tunning

So if pre-trainign is too expensive, and data is limited how can we overcome this problems. There are a couple of routes like RaG or fine tunning. The problem with fine-tunning is that some papers already claiming that fine tunning makes the model forget original training and that makes pereformance drop considerably. So there are limits for fine-tuning.

Transformer's complexity and inefficiency

Gen ai uses a lot of power. Plus cost a lot of money and takes a long time. Clearly things are not scalable in the way they are and there are lots of ineficiencies that problems that need to be overcome. Transformers architecture is pretty complex and hard to understand.

Making Sense of AI

IMHO you need to be careful in putting AI in front of the end user for now. Enginering is a veryu safe bet to use AI, because if a angineer can review, it’s internal and avoids creating problems for the customers. AI outside of egineering needs to be evaluated with lots of caution and even concern for security, privacy and even expectations.

Google is adding AI pretty much in all products but I would argue in a pretty controlled way. Having a LLM chat bot in front of the user is where things can go wrong. Sure there are techniques like proxy, adding guarails and even sanitizing user requests, even sending to another LLM to check or sumarize the user request before coing to the core LLM.

We still can use AI and drive interesting benefits for the users. But keep in mind gen ai does not apply to all problems. Might be pretty good as internal search system and finding information (again search).

The Road Ahead

AI will disrupt a lot of industries including the technology industry but we dont need to worry about losing our jobs in the years, will take a long time. Since I started in the technology industry, I have heard people saying that engineering will be done and coding will be done. I remenber +20 years ago a teacher of mine telling me that he was always hearing that coding would end. Perhaps that will never end.

We still have a long road ahead with AI. Things could happen fast or take another 50 years for a significant improvement. Clearly the current archiutectures are not fast enought and not optimized enought. Everything is so resource intensive and very very complex. However the good news is that more and more we are seeing APIs. Hugging face is doing a great job in democratizing ai. LLMs tend to become comonitites and the real thing it would be able data, having the data and knowing how to use it, again the bottleneck is not productivity.

Hopefully, at some point, the architecture will get better more efficient, and less resource-intensive. Untl them it’s good to learn and keep exploring but with both feets on eath, grounded on reality and common sense.

Cheers,

Diego Pacheco

Originally published at http://diego-pacheco.blogspot.com on July 27, 2024.

--

--

Diego Pacheco

Brazilian, Software Architect, SWE(Java, Scala, Rust, Go) SOA & DevOps expert, Author. Working with EKS/K8S. diegopacheco.github.io (Opinions on my own)