It arrived suddenly and promises to stay, but for now artificial intelligence is little more than an information aggregator, capable of relating patterns to formulate an answer to a problem the user specifies. Yes, it is a powerful and dangerous tool, but we give it too much credit.
It is easy to be dazzled by such a grand term: “artificial intelligence” – whether through cinema and fiction or simple ignorance. However, it is also easy to demystify what looks like a labyrinth. Put crudely, what happened is that we used all the information available on the Internet to teach a machine, which uses a linguistic model resembling a human’s and is capable of relating this information and presenting it in an acceptable manner. So far so good. The problem arises when the answer does not exist in the set of information used to teach that machine. The quality of an artificial intelligence model is limited by the quantity and quality of the information used to feed it. Therefore, we cannot rely on it to solve every problem.
This technology is still very young, yet since the introduction of ChatGPT, the rate at which we are bombarded with news in this area has increased exponentially. Companies in the field were forced to launch their own solutions at the risk of being left behind; others acquired large collections of information to feed their models. The race is heating up, and we are speeding towards a goal we have not yet managed to define. We try to stay close to the pack for fear that a breakaway will turn into a decisive win. This is the engine fueling the hype around artificial intelligence: the fear of losing out. In the trenches, a different reality can be observed; the goal is not as promising as one might think.
Anyone who believes that a programmer’s job is to program is wrong. Programming is part of it, but the job is much more than that: it involves planning, discussing, deciding, reviewing, revisiting decisions because the requirements have changed, and understanding those requirements, often gathered from a client who does not know what he wants. There are many tasks that an artificial intelligence tool will not do. Applying the same principle to other areas, it is clear that the human still has a role to play in so-called white-collar occupations.
It is clear that companies want to adopt this new technology, but it is also clear that they cannot afford to rely on an external tool without proof of profitability. At the moment, efforts are focused on finding a compromise: how do we introduce this tool without interrupting the delivery of value? And who should have access to it? Do we hand it only to the most experienced and waste its potential, or do we give it to everyone, bypassing the learning curve and risking solutions built without judgment and riddled with errors?
This is the companies’ dilemma, and it comes down to trust in the tools that emerge. For these tools to be more effective, they must be given a great deal of information, and even then we may get a wrong answer, delivered with all the confidence of a machine. Do we really want to hand over the secret of Pastéis de Belém to a machine? If we entrust it with this information, it will be able to reproduce the recipe, but will everyone using the tool then share that same knowledge? Doubt alone is enough to hinder progress; the risk seems too high.
Caution is called for: enthusiasm is high and fear compels us to explore this new paradigm, but time is needed. Just as we learned to use the Internet, we will learn to use artificial intelligence, and just as we analyze the results of a search, we will need to analyze the answers of an artificial intelligence. We will be faster at programming or designing, but the experience needed to distrust and to correct is still required. So calm down: we will not be replaced, for now.