Naming things is a fundamental task of human intelligence. Naming new phenomena, for “marketing” purposes, matters a great deal to successful entrepreneurs. “Hype” is not uncommon. Accuracy is rarer.
By contrast, when scientists name new discoveries, they tend to take a stricter approach (at least they once did, “back in the day”). Accuracy mattered more to the scientist than it typically matters to a commercial seller. For people who hope to think clearly about reality, accuracy continues to matter a great deal.
With this in mind, we might name “AI” more accurately, or, if it is too late to rename it, at least think about it more accurately. For calling the invention (if that is what it is) “artificial intelligence” (“AI”), or “artificial general intelligence” (“AGI”), propagates a deeply damaging BIG LIE, one now widely foisted upon an unsuspecting world, largely unnoticed.
What Intelligence Is
The word “intelligence” means something important, and it is worth understanding. Intelligence is not easy to define, even scientifically. When scientists attempted to devise gauges and tests to measure human intelligence accurately and truly, they did their best, but bias, as things turned out, got in the way, and materially so.
“Intelligence Quotient” (“IQ”) tests turned out to be famously biased in favor of individuals from particular segments of the larger society who happened to share a very particular cultural background. As a scientific measure, therefore, “IQ” ended up leaving quite a lot to be desired. (The literature is vast.)
In our current age, no one has scrupled very much at calling the algorithmic, probabilistic, and combinatorial operations of a computer program designed to mimic human language (also known as a “large language model,” or “LLM,” such as OpenAI’s “ChatGPT” or the wayward “Grok,” to name two) by the moniker “artificial intelligence.” But in fact, huge problems attend the designation, and they contribute materially to a good deal of dishonesty, chicanery and deceit in the “marketplace of ideas.”
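The probabilistic, combinatorial operation described here can be illustrated with a toy sketch. The word counts and the `next_word` function below are invented purely for illustration; a real LLM is vastly larger and more sophisticated, but the principle is the same in kind: pick a likely next word from observed statistics, not from understanding.

```python
import random

# Toy "language model": for each word, the counts of words observed
# to follow it in some training text (these counts are invented).
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 2},
    "sat": {"down": 4},
}

def next_word(word, rng=random):
    """Pick a next word in proportion to its observed frequency."""
    counts = bigram_counts.get(word)
    if not counts:
        return None  # the "model" has seen nothing after this word
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

print(next_word("the"))  # "cat" three times out of four, "dog" once
```

Nothing in this process involves meaning, judgment, or awareness; it is frequency and chance, run at electronic speed.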
The misnomer (“artificial intelligence”) falsely encourages confidence in the reliability of “AI” and its results, as well as in the propriety of its applications. The uncritical use of “AI” outputs often leads, deliberately or not, to “sloppy thinking,” with highly damaging consequences for our civic life and civil society.
Costs of Unintended Consequences
For example, the uncontrolled, unchallenged, unregulated and gratuitous plagiarism (and outright theft) of “intellectual property” contributed mightily to the demise of “free press” organs, and the service they provided.
Once abundant, prosperous and widely available, daily newspapers – edited by professionals trained in journalistic arts and ethics involving truth, honesty and accuracy – had contributed widely and significantly to the shared sense of objective reality needed if democratic forms of government are to function effectively in promoting the common good.
Ever since the destruction of the economic model that made daily newspapers viable enterprises, no functional replacement has emerged. In their place, a “wild west” of “uncurated content” pollutes cyberspace (and the society at large) with an apparent intent to attract “eyeballs” and keep them “engaged.”
As the practitioners and owners of “social media enterprises” have discovered, rage and nonsense attract and hold audiences more profitably (to media owners) than truth and accurate reporting of facts and reality.
An “upgrade” this was not.
For another example, broad and deep violations of personal privacy enabled electoral manipulations that were formerly impossible at such scale. By targeting individual voters through intrusive profiling, “bad faith actors” deceived them. This secretive practice corrupted honest public discourse, replacing it with manipulations tailored to each voter’s personal and private biases, manipulations that passed into the public domain without voters’ notice.
Along with the corrupting infusion of unregulated and secret “big money” into our elections – which a partisan and corrupted majority of the Supreme Court engineered in its 2010 ruling in “Citizens United” – these sinister and secret manipulations of the public discourse corrupted the electoral process, leading to a national government today of unprecedented corruption in its open bribery, extortion, and pecuniary malfeasance.
“AI” may not have caused the destruction of American democracy (so far), but it has contributed its part.
In the absence of a robust legal framework to prevent such abuses of technology, violations of privacy and thefts of intellectual property such as these likely affected the outcome of at least two U.S. presidential elections (2016 and 2024).
Moreover, the lack of transparency in all of this manipulation and sophisticated chicanery enabled grave electoral interference not only by domestic “bad faith actors,” but also by international adversaries and hostile foreign powers, who especially favored one particular candidate over any other.
Those who think such “opportunities” to skew elections improved our civic life and civil society should spend a day carefully reading a newspaper, if they can find one.
What Does “AI” Do?
None of the myriad evils associated with the unregulated use and abuse of this technology can or should be considered the “fault” of “AI.” The thing is a mere tool.
People commit murder, not guns. But the failure to regulate tools, the failure to restrict their availability to irresponsible or criminal elements in the society, the failure to identify, legislate against, and punish criminal abuse of powerful technologies, all such social and governmental failures surely do qualify as culpable.
If a murderer used a hammer to slay his victim, the hammer bears no blame. But a law that effectively exempted (somehow) any hammer from being considered a potential murder weapon, or the lack of any law enabling the prosecution of “murder by hammer,” would certainly be unwise.
If a cheat used a computer to commit fraud, the computer bears no blame. But a law that effectively exempts any computer from being an instrument of fraud, or the lack of any law enabling the prosecution of gross “electoral fraud by computer,” would certainly be unwise.
“AI” is no more and no less than a mechanical process, engineered by human minds, a process employed by humans using machines, often in order to save time and money. Before “adding machines” were invented, large staffs of needy clerks worked at the tedious job of adding numbers and checking the accuracy of gargantuan sums.
The abacus was an ancient device designed to assist such tasks, using strings of beads. No one ever thought an abacus was intelligent. The people inventing it and the people using it surely might be intelligent or lazy or honest or deceitful, but the thing itself was a device, not a spirit, not a mind, not an entity with anything like “agency.”
And you can bet that ancient fraudsters used the abacus to defraud their fellows.
Many decades ago, when computers were new, a seventh-grade teacher explained the principle behind the devices elegantly enough: a computer knows only two things, “on” and “off,” he said, but the trick of computers is the binary number system and the combined power of electronic speed.
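That seventh-grade explanation can be made concrete in a few lines. In the sketch below (the particular bit pattern is chosen arbitrarily for illustration), “on” and “off” become 1 and 0, and binary place value plus repetition does the rest:

```python
# A computer knows only "on" and "off": call them 1 and 0.
# The trick is place value: each new bit doubles the weight of the last.
bits = [1, 1, 0, 1]          # the pattern on-on-off-on, most significant first

value = 0
for b in bits:
    value = value * 2 + b    # shift everything left, then add the new bit

print(value)                 # 13, because 8 + 4 + 0 + 1 = 13
```

Strings of such switches, flipped billions of times per second, are all the “thinking” any computer has ever done.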
In all the years and decades since computers were invented, people sensibly understood that the “outputs” of computer programs depended materially on the quality and honesty of the “inputs.” A handy acronym summarized the principle: “GIGO,” for “garbage in, garbage out.”
Building an effective legal framework for “AI” would not aim to regulate the technology so much as it would regulate the people who design, produce, use and should take responsibility for the use of the technology. What we lack with “AI” today is what we need most (and miss most) in our society at this moment: legal accountability.
Tools, Not Agents
At this late stage of the “Information Age” (the “AI revolution”), the thinking world, made up of humans, really ought to talk and think more precisely about what “AI” can and cannot do. Accuracy compels us to recognize the technology as a mechanical thing, nothing at all like a living, breathing life-form. It is not a plant or an animal, to say nothing of a conscious, sentient human, fully or partially aware of self and surroundings, endowed with agency, and therefore accountable for actions.
“AI” and the devices performing its operations entered the world as tools. They are mechanical things, without feeling or “sense” (in any of the word’s several meanings). “AI” devices may record sights and sounds, they may transmit them, they may process the data representing them, but they are pieces of equipment. They are gear.
It is mere sloppy shorthand to say (or to think) that “AI” devices ever see or hear anything. Whatever it is they do (which, by the way, we barely understand), it remains qualitatively different from genuine human thought. Likely, this will remain true, unless we humans degrade ourselves to the state of mere mechanical devices.
“AI” is a tool, as a hammer or an adding machine is a tool. We have no need to fear a hammer or an adding machine. But we have every reason to fear a murderer who uses a hammer as a weapon, or a cheat who uses an adding machine (or computers) as critical tools in dishonest schemes, in deceitful “confidence games.”
We ought to have a set of laws about the uses of “AI,” defining what is permissible and impermissible. And laws need enforcing, if they are to be effective.
Regulating Power Tools
Properly speaking, “AI” is a “power tool.” Chain saws, table saws, electric drills, nail guns, and laser surveying instruments are all power tools that enormously enhance human productivity. They transcend human limitations of muscular strength and endurance.
Power equipment does the same. Diesel-powered excavators, bulldozers, tractors, front-end loaders and cranes move materials on a massive scale that daunts the capacities of manpower or horsepower. All this power equipment can benefit mankind.
But operating power tools and power equipment requires attention to safety and competent expertise. With great power comes great capacity to inflict serious injury and damage. Injury or damage may be inadvertent and unintentional or malicious and purposeful. Malign actors operate everywhere, sadly enough.
It is foolish to ignore the risks associated with powerful tools.
And so, we do not. Manufacturing and maintenance of tools and equipment must meet standards. Competent operators must meet standards. Substandard equipment or substandard operation of equipment routinely remains subject to legal liability for the consequences of negligence or malfeasance. Such standards must be enforced.
All of this is common practice, and common sense.
But somehow, when it came to the power-enhancing invention of digital processing, the whole society slept through the part where legal frameworks and safety regulation normally would find their place.
Normally, it would occur to the minds of practical, everyday humans, working and striving to thrive in a too often hazardous world, that new devices need new regulations and sound laws governing their licit and illicit use. But this did not happen for “AI.” Why?
Needing Adult Supervision
What is still more alarming, we do not expect our electric drills and saws to solve the design problems affecting our building projects. Diesel-powered excavators and bulldozers hugely facilitate the civil work needed to build strong foundations, but we do not delegate the task of designing and planning structural foundations to mere machines.
It requires human minds, trained and informed, educated and experienced, to integrate all of the aspects of good design – not merely material factors, but moral, cultural, practical, humanistic aspects of building congenial spaces for humans to occupy, where they can gather and thrive. A flourishing human society expresses its values in everything the community undertakes for the common good.
“AI” can help with complicated calculations. It can “brainstorm” alternative approaches. Sophisticated programs can “build” virtual representations for architects and engineers to review, looking within the program at different angles and perspectives that may not be apparent from two-dimensional drawings. “AI” can be an extremely helpful tool, a critical component in contributing to good designs.
But “AI” has no human judgment. It has no ethics. It has no morality. It has no real experience or expertise beyond whatever data may be fed into it by humans. “AI” is an instrument of powerful mathematics, not a substitute for thinking.
It takes thinking and judgment, trial and error, learning and continuous improvement, to implement solid design practice. We humans are more than abstract ideas conceived as “populations” or “interest groups” or “identities.”
The English word “idiot” derives from the ancient Greek idiōtēs, which means a “private individual,” as opposed to politēs, a “citizen.” “AI” does not even qualify as a genuine human “idiot.” Real people have real needs. Functional governments serve them, using and regulating tools for the common good.



A.I. and what I want to read....
I still make an attempt to trust the Associated Press, (even though Google is going to try and edit what I write).... it's the big factor that is still evident in newspapers.
When I scroll through "news" headlines on-line, mostly what I get is what you're calling "an attempt to attract eyeballs, and keep them engaged". That's exactly right.
It's like going through the old conventional checkout lines at the supermarket.... impatiently waiting your turn and reading the headlines of the tabloids.... knowing full well there's no "legitimate" story behind the 72-point-font headline on the "Enquirer" beside you.
But it's so tempting.... Just by looking, I lose more brain cells than if I had sucked down 8 shots of tequila.