Is ‘Artificial Intelligence’ Past Its Prime?
Why I believe the time has come to stop calling it AI
In recent years, “Artificial Intelligence” (AI) has become a misleading, dubious, outdated, and divisive term — one that appeals more to marketing experts, doom-mongers, and unicorn billionaires than to actual computer scientists.
Beyond the roots of AI
I have been writing a lot about Artificial Intelligence, and it’s clear to me how this is a rather divisive topic.
While some stand by the term and fiercely deny that Artificial Intelligence is mere snake-oil propaganda, those better informed about these matters have long been trying to find a better name.
I have heard people speak of “Artificial Sapience,” “Synthetic Neural Networks,” “Cognitive Algorithms,” “Machine Intelligence,” “Algorithmic Intelligence,” and more. The list is long, and it grows longer by the day.
However, most of these alternatives share a common problem: they are rooted in the same primordial linguistic and epistemological fallacy, coined back in the mid-1950s.
Evgeny Morozov, in a rather insightful article, notes how the term AI “belongs to the same scrapheap of history that includes ‘iron curtain,’ ‘domino theory,’ and ‘Sputnik moment’” and concludes:
In reality, what we call “artificial intelligence” today is neither artificial nor intelligent.
I fully agree with Morozov that today’s AI is more dependent on the human artifex than ever before. Turning to the second term, “intelligence,” he explains that modern AI’s strength lies in pattern-matching.
I also subscribe to the idea that intelligence is more than a guessing game: even though algorithm-based neural networks can emulate the work done by certain parts of the human brain, what they cannot do is function systemically, as a whole, the way the brain does.
The Future of AI
In May, I attended the third and final event of the series “Past, Present and Future of AI” at the Champalimaud Foundation.
Hosted by neuroscientists Sabine Renninger and Scott Rennie, the session centered on the future of AI, and the lively discussion compared the nature of human and animal intelligence with “Artificial Intelligence.”
For me, the key takeaway was that it’s up to us humans to decide what this technology will become. We alone have the power to choose whether we surrender it to the dark side or use it for the common good.
While AI is being used to detect and target dangerous asteroids, it is also being used “to define targets in the ongoing destruction of Gaza” — Scott Rennie.
About the main topic of “Shaping Tomorrow’s Intelligence,” Luís Correia, a professor at the Department of Informatics of Faculdade de Ciências of Universidade de Lisboa, noted how humans “need to choose seriously, and not just for fun, what they want from AI.”
On AI versus natural intelligence, Correia noted that while the former was developed only recently and remains limited to symbolic reasoning, the latter took billions of years of evolution, and the use of sensorimotor capabilities, to get where it is today. “All forms of natural intelligence are embodied, while AI is not,” he added.
In my opinion, Correia shared one of the best definitions regarding this technology:
“AI is a tool that empowers humans to solve big problems faster.”
Rebranding AI
I came out of that conference with more questions than answers, as it should be. For days, I kept thinking about what I had heard regarding the limitations of what we call AI.
I couldn’t go on using the term “Artificial Intelligence.” So I put my human brain to work and came up with a couple of alternatives.
“Machine Trained Inference” seemed like a good candidate, as it encapsulated two core aspects of AI: “Machine Learning” and “Inference,” the ability of AI to make predictions or classifications based on learned information.
However, even if MTI could work as a more accurate and comprehensive term, it still failed to cover every angle. Perhaps the easiest way out of the maze would be to abandon the generic term AI altogether and refer to each specific area independently.
Personally, I keep coming back to the final words shared by Professor Correia at the Roots of AI conference:

“We have to look at AI as a set of possibilities of new forms of embodiment.”
For now, I’ll call it ALF (Algorithmic Learning Frame) after my favorite TV character from the ’80s. And who knows, maybe one day we’ll call it “Algorithmic Learning Form.”
Thanks for the gift of your precious time. If you’d like to read more on the topic, please consider picking up a new thread:
Why I Believe AI Is the Biggest Lie Ever and We’re Buying It
During a gold rush, sell shovels.
AI-Generated Writing is Fool’s Gold
I’ve looked the ‘AI serpent’ in the eye but didn’t bite the apple
Plagiarism Machine, perhaps?