AI. It is the next big tech hype. AI stands for Artificial Intelligence, which these days mostly comes in the form of Large Language Models (LLMs). The most popular one seems to be ChatGPT from OpenAI, although many others are already widely used as well.
As with a lot of previous tech hypes, a lot of people have been jumping on the hype train. And I can’t really blame them either. It sounds cool, no, it is cool to play with such tech. Technology does a lot of cool things and can make our lives a lot easier. I cannot deny that.
But as with a lot of previous tech hypes, we don’t really stand still to think about why we would want to use the tech, or why we would not want to use it. Blockchain is a really cool technology that has its uses, but when it became a hype a lot of people and companies started using it for use cases where it really wasn’t necessary. Despite all the criticism of the energy usage of cryptocurrency, new currencies are still being launched on a regular basis. And while the concept behind cryptocurrency is really good, the downsides are ignored. The same is now happening with AI.
The downsides we like to ignore
There has already been a lot of criticism of using AI/LLMs. I probably won’t be sharing anything new. But I’d like to summarize some of the reasons why we should really be careful when using AI.
Copyright
The major players in the AI world right now have trained their models on any data they could find. This includes copyrighted material. There is a good chance that they have been trained on your blog, my blog, newspaper material, even full e-books. When using an image generation AI, there’s a good chance it was trained on material by artists, designers, and other creators who have not been paid for the usage of that material, nor have they given permission for their material to be used. And to make it even worse, nobody is attributed. Which I understand, because when you combine a lot of sources to generate something new, it’s hard to attribute the original sources. But they are taking a lot, and then earning money from it.
Misinformation
Because the big players scraped basically all information from the Internet and other sources, they’ve also been scraping pure nonsense. When the input is inaccurate, the output will be as well. There have been tons of examples on social media and in articles about inaccuracies, and even when you confront the AI with its incorrectness, it will come up with more nonsense.
In a world where misinformation is already a big issue, where science is no longer seen as one of the most accurate sources of information and people instead rely on social media or “what their gut tells them”, we really don’t need another source of misinformation.
Destroying our world
I am sorry to say this, but AI is destroying our world. Datacenters for AI are using an incredible amount of power and, with that, are both contributing to an energy crisis and causing a lot of emissions. And there is more, because it’s not just the power. It’s also the resources needed to build all the servers that run the AI, the fuel needed to transport the fuel for the backup generators to the datacenters, the amount of water used by datacenters, and the list goes on. Watch this talk from CCC for a bit more context.
There have been claims that AI would solve the climate crisis, but one can wonder if this is true. Besides, we’re a bit late now. Perhaps if we had made this energy investment some decades ago, it might have been a wise one. We’re in the middle of a climate crisis and right now we should really focus on lowering emissions. There is not really room for such an investment at this point.
If we’re even working on reviving old nuclear power plants simply to power AI datacenters, something is terribly wrong. Especially when it’s a power plant that almost caused a major nuclear disaster. And while nuclear power is often seen as a clean power solution, that view is highly contested because of the uranium that has to be mined in a polluting process and because nuclear waste is a big problem as well. Not to mention the consequences in case of a disaster.
Abuse
One thing I had not seen until I started digging into this is the abuse of cheap labor. This does make me wonder: how many other big players in AI do this? It is hard to find out, of course, but it is something we should at least weigh in the decision of whether to use AI.
So should we stay away from AI?
It is easy to just say yes to that question, but that would be stupid, because AI does have advantages. There is certainly nuance in this decision.
AI can do things much faster
A good example of AI doing things a lot faster is the way it has been used in medicine. A trial in Utrecht, for instance, found that medical personnel spend a lot less time on analysis, find their work more enjoyable, and that costs go down as well. And it gets even better, because there are AI tools that can even help predict who will develop cancer. These specialized tools, trained for exactly this purpose, can only be seen as an asset to the field of medicine.
Productivity increases
Several people I’ve talked to who use AI in their work as software developers have mentioned the productivity boost they get from their respective tools. Although most do not think you should let AI generate code (as there have been a lot of issues with AI-generated code so far), it can be used for other things such as debugging, summarizing documentation, or even writing documentation. The time you have to invest in checking the output of an AI is a lot less than the time it would take to do the work yourself.
I do want to add a bit of nuance to this, however. The constant focus on productivity is, in my very humble opinion, one of the biggest downsides of (late-stage) capitalism, where “good things take time” becomes less important. From a company’s point of view this makes sense: if you can lower the cost of work by paying a relatively small amount of money for tools that increase productivity, the financial situation of the company improves. However, these costs never include the cost to the environment. If more companies made more ethical decisions about the resources they use, the decision process would look very different.
Less repetition and fewer human errors
This goes for any form of automation, of course: when you automate repetitive tasks, your work gets more enjoyable. Also, potentially related to this, you’re less prone to making errors. AI can do that automation. I recently saw a news item about how a medical insurance company in The Netherlands uses AI to “pre-process” international claims. International claims don’t follow their normal standards, so they used to have to process all those claims manually. Now, those claims go through an AI that analyses each claim and tries to identify all the important data, so that the people handling the claims only have to check whether the AI identified it correctly. After that, they can focus on handling the claim. This reduces a lot of human errors and a lot of repetitive, boring work.
So now what?
Of course everyone has to come to their own conclusion on what to do with this. And there are a lot more resources out there with information to consider. I have come to some conclusions for myself. Let me share those for your consideration.
Specialized AI over generic AI
AI can be really useful. Think of the medical examples I mentioned above. Because of their specialization, those systems require far less energy for training and running. The main problem with generic AI is that, because it is unclear what it should do, it is trained to do and understand everything. A lot of that training effort might never actually be used.
While the climate crisis is for me the most important reason to be critical of AI, I can also not ignore the copyright issues and the problems with misinformation. AI may seem smart, but it is actually quite dumb. It will not realize when it says something stupid. So with a high investment in energy (and other pollution), you get relatively unreliable results. And that’s without even mentioning potential bias: the model needs to be trained, but someone decides which data it is trained on.
Experimentation is worth it
Just like with any other technology, it is worth experimenting with AI. Last year I experimented (and yes, that included ChatGPT) to get an idea of how AI could help. And while I found it helpful, I came to the conclusion that most of what it did for me at that point was make my life slightly easier. It did add value, but not enough to justify the problems I described above.
What I have not experimented with, but where I do see potential, is small, energy-efficient models that you can run locally. That again comes back to my previous point, though: those can be trained for a specific purpose only.
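To make that a bit more concrete, here is a minimal sketch of what running a small model locally could look like, using the Hugging Face transformers library and the tiny distilgpt2 model purely as an example; both are illustrative choices of mine, not something I am prescribing.

```python
# Minimal sketch: running a small text-generation model locally.
# Uses the Hugging Face "transformers" library and the small
# distilgpt2 model purely as an example. No external API calls,
# no datacenter involved; everything runs on your own machine.
from transformers import pipeline

# Load a small model once; it is downloaded and cached locally.
generator = pipeline("text-generation", model="distilgpt2")

# Generate a short completion for a prompt.
result = generator(
    "A specialized model can be useful for",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

A model this small runs comfortably on an ordinary laptop, which is exactly the kind of footprint I have in mind here.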
Is there another option?
One of the most important questions everyone considering AI should ask themselves is: Are there no other options?
One thing I’ve seen happen a lot lately is the “everyone is jumping on the AI bandwagon, so we must too!” attitude, without thinking about the “why?” of implementing AI. A lot of the applications of AI that I have seen were there to cover up other flaws in software. Instead of using AI to find data in your spaghetti database, you could also implement better search functionality using tools that use less energy and that have been made specifically for search. ElasticSearch, Algolia and MeiliSearch come to mind, for instance. Some of those have been implementing some AI as well, but again: that is very specific and specialized.
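To illustrate what I mean, here is a minimal sketch of plain full-text search, assuming a MeiliSearch instance running locally on its default port and its official Python client; the index name, documents and API key are made up for illustration.

```python
# Minimal sketch: plain full-text search instead of asking an LLM to
# "find" data. Assumes a MeiliSearch instance on the default local
# address and the official "meilisearch" Python client; the index
# name, documents and API key below are made up for illustration.
import meilisearch

client = meilisearch.Client("http://127.0.0.1:7700", "masterKey")
index = client.index("articles")

# Add a few documents to search through (normally: your real data).
# Indexing in MeiliSearch is asynchronous, so in practice you would
# wait for the indexing task to finish before searching.
index.add_documents([
    {"id": 1, "title": "Why specialized AI beats generic AI"},
    {"id": 2, "title": "Lowering the energy footprint of software"},
])

# A plain, typo-tolerant search query: fast, cheap to run, and built
# specifically for this purpose.
results = index.search("specialized ai")
for hit in results["hits"]:
    print(hit["title"])
```

The point is not this specific tool, but that a dedicated search engine solves the “find my data” problem with a fraction of the resources a generic LLM needs.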
I’ve also heard people say “task X is hard to accomplish right now and AI can help”. In some situations, sure, AI is a good solution. But in a lot of situations, it might be your own application that just needs improving. Talk to your UX-er and/or designer and see how you can improve your software.
An important factor in your considerations on whether or not to use AI should be: what is the impact on the rest of the world? You’ve now read some of the downsides; factor those into your decision on whether you need AI.
Long story short: Always be critical and never ever implement AI just because everyone is doing it.
Concluding
Please, do not start using AI for everything you need. Be very critical in every situation where the idea of implementing AI comes up. If you do use it, keep in mind not just the cost to your company, but also the cost to the rest of the world. When using AI, focus on specialized and optimized models, preferably ones that can run locally on energy-efficient hardware. And always, but really… always be critical of AI input and output.