Many tell us what to think. I ask my readers to be skeptical. Question me and others.


The Ministry of Truth is in the making

Image by Gerd Altmann from Pixabay

On October 30, 2023, President Biden issued an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” I read it. Please click on the link and at least skim it.

Principal objections

The very purpose of the order is to “advance and govern the development and use of AI.” This raises fundamental questions about the purpose of government, not in general, but in the United States specifically. The way the president outlines the administration’s role follows the European tradition rooted in the Enlightenment and still practiced in continental Europe; France is a good example.

The Enlightenment’s concept was of wise rulers governing their subjects. The first Americans went further; they wanted to be free to pursue happiness. The government should not be telling them what happiness is. They did not want politicians to “advance and govern the development and use of” whatever adventures they found worth pursuing.

The first Americans did not expect whatever they invented to be safe and secure. They knew that life was a hazardous venture and that there was no reward without risk. They expected the government to protect them from villains, not from risk itself.

They would disagree with President Biden, who justifies his executive order by claiming that “the Federal Government should lead the way to global societal, economic, and technological progress.” In their view, the government should follow the progress because the strength of the United States is not in what the government does but what Americans do on their own.

Can AI be dangerous?

Yes.

I am old enough to remember similar warnings half a century ago about the coming massive computerization and the spread of new electronics. Many things changed; the world did not collapse. Despite all the complaining, we are better off.

One can look back to prehistoric times, when the first people tamed horses. By the logic President Biden presents, that was a terrible invention because a bad character could rush to a remote place, rob the locals, and escape unpunished. Discovering iron was equally bad because, at first, it was used to make swords.

By the same thinking, during the industrial era, new machines that took jobs from people were an evil invention. In recent years, Uber destroyed the meticulously maintained taxi system. President Biden might have events like that in mind when trying to preempt the potential adverse effects of AI implementation.

Can the government deliver what President Biden is promising?

Unlikely.

We do not know how AI will affect us in the next few years. We can only speculate about the lasting effects that the younger among us will face two or three decades down the road. Living through the era of computerization and studying it taught me that implementing new technologies is usually slower than predicted, and the lasting effects could be profound but not exactly as expected.

We need to be curious, cautious, and humble in approaching AI; by “we,” I mean our personal use, business improvement, and public policy. Some of us may envision the possible lasting effects better than others, but today, it is anybody’s guess who knows best. We simply have no way to predict the future.

After the release of ChatGPT, controversial lawsuits from creators popped up. Critics see those suits as a misunderstanding of the very concept of AI, which operates within the sphere of common knowledge but does so more efficiently than humans can. One can see AI as a tool that the most industrious among us will use to access and process today’s chaotic avalanche of information more efficiently.

As a writer, I have no objection to AI companies incorporating my work into their databases as long as they spell my name correctly. But not all creators share my approach. Until these fundamental discrepancies and misunderstandings are resolved, any government interference in advancing artificial intelligence will hinder progress.

How does President Biden want to make AI safer and more trustworthy?

President Biden does not worry about the unknown. He wants to act fast. In detailed directives to all major federal agencies, he ordered the creation of chief artificial intelligence officer positions. One may wonder what qualifications candidates should have and where to find them.

So far, the experts are in the technical fields, where the development occurs. Some of them have warned the public about potential social implications. Still, very few people in the social sciences understand AI technology well enough to advise us. We have AI specialists who are just that, not experts in the social sciences, yet some of them speak with the authority of philosophers. In this early stage of implementation, we have a lot of speculation about possible implications but very little solid data.

This means we can wish President Biden good luck in finding competent people for those new positions. Knowing how things usually go, these will likely be politically guided nominations. The final outcome will be a bunch of bureaucrats with the impressive title of chief artificial intelligence officer, whose only qualification will be having that title. These will be the people shaping the future of AI.

AI is not our biggest problem nor a solution to our problems

In the text of his executive order, President Biden points out how AI should fit into his strategy for dealing with our troubles. This gives the document an optimistic twist because, without stating it, the president implies that with the government managing the progress of AI, we have a better shot at overcoming our problems. It is exactly the opposite.

I sighed, “Oh no,” when reading: “From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life.” It implies that without government oversight, AI could be harmful. I found examples proving otherwise while collecting data for my writings about AI and health care.

In an earlier article, I wrote about an AI application that could offer early detection of potentially deadly infections. Hospitals are not interested because, under our health care policy, they are financially motivated not to keep people healthy but to provide as many expensive treatments as possible.

If AI could reason like a person (so far, it cannot), it might soon point out that our illogical health care policy is the problem, and unless we fix that, not only will AI not help us; even God cannot.

Applying cold logic, AI might conclude that we have a political stalemate and so many unresolved issues not because of a lack of good regulations but as a result of too many bad ones. Hence, a peculiar thought comes to mind: the bureaucrats are eager to get their hands on AI implementation to muzzle its potentially big mouth before it speaks.

Earlier this year, the White House Office of Science and Technology Policy asked for public opinions about intended AI regulation. In my submitted article, “Can AI be playing Socrates?”, I argued that unbiased AI could help us overcome our ideological divides. I also warned that the establishment would have a stake in curbing AI’s ability to prove some people wrong. The president’s executive order shows no trace of my hopes or worries.

The government will kill AI

I am among the few who use AI daily, mostly in my writing. It is a wonderful tool, but I see its limitations. By keeping my old-fashioned human intelligence up to date, I have nothing to worry about. I hear the same from other early adopters. For the rest (the majority, I would guess), as people say, fear doubles all.

In his executive order, President Biden acted to please that majority. He did not advise his fellow citizens to advance their knowledge to prepare for the new challenges that the implementation of AI will bring. He promised what no one can deliver: to protect them from their own ignorance. The government will learn so they do not need to.

Let us speculate about what would have happened in the 1980s if Steve Jobs and Bill Gates had warned the government about the changes that the massive use of personal computers could bring. Acting on the warning, the government might have hired renowned experts from IBM and Xerox and killed Apple and Microsoft before they gained any momentum.

If Peter Thiel had cautioned the government about the changes in the payment system that PayPal would bring, experts from the banking industry might have convinced the government to kill PayPal, too. If the government had had any say in launching Google, Amazon, Facebook, Twitter, or the smartphone, it would likely have found many safety and security objections impeding or outright killing these new ideas.

It is not by accident that so many life-changing ventures have started in the U.S. Entrepreneurs here have much more freedom than elsewhere. With initiatives like this executive order, President Biden is trying to restrict it.

It smells like collusion

So why do the leaders of major AI-developing companies ask for government involvement? If I were one of them, I would do the same. Over the past few decades, regulations have encroached deeply on business operations. As AI will affect many aspects of our lives, some rules are unavoidable. Given the unpredictability of politics, companies risk major losses if, for strange reasons, some potentially valuable products or services are banned or restricted.

The negative aspect of that approach is that after getting legislative guidance, ventures reorient their objectives. Without regulations, they would try to maximize profit by offering a viable product or service that the market needs. Some would try to cheat, but their competitors would warn the customers.

With regulations intended to protect the public, businesses aim instead to maximize profit without breaking the rules. That can mean offering inferior products or services. Cheaters, meanwhile, will find ways to circumvent the law or pay lobbyists to insert loopholes into even more restrictive regulations. It is called collusion between business and government.

Capitalism has a bad reputation primarily because of those backroom deals between business and government. Regardless of how many regulations the government issues, it never seems enough to protect the public from the greed of dishonest capitalists.

1984 is closer than ever

AI is the first major human invention that might revolutionize how we reason. If, from its inception, the government works hand in hand with the technology’s designers, we face the risk that they will determine between themselves what the truth is. The business world has the means; the government has the power. Acting in concert, they can arrange a system that tells us only what they want us to know and blocks everything else. It would be exactly the Ministry of Truth from George Orwell’s “1984.”

AI has the potential to expand our freedom of expression, and many industry leaders promise to work toward that, yet at the same time they turn to the government for regulation. Elon Musk is a good example. His voice carries more weight than others’ because he owns a major media company.

In America and worldwide, the unhampered advance of AI is the most beneficial path for society. In the U.S., we have a well-established tradition of courts determining whether new developments are dangerous in light of fundamental constitutional values. It protects us against unexpected or unknown evils as they appear. Any well-intended preventive legislation or executive order brings us closer than ever to the totalitarian system of “1984.”
