Pausing AI research to evaluate its value for humanity

Recently, several hundred top technology executives, scientists, and thought leaders signed an open letter calling for a pause in Artificial Intelligence (AI) research of at least six months to ensure it serves humanity. The letter stated that AI can pose “profound risks to society and humanity.” It also stated that “planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Signatories included well-known tech figures such as Elon Musk and Steve Wozniak.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” the letter stated. It also noted that AI systems are now becoming competitive with humans at general tasks.

The letter pointedly asks: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” 

The letter said that the answers to these questions must not be delegated to “unelected tech leaders.” It said that more powerful AI should be developed only if “we are confident that their effects will be positive and their risks will be manageable.”

It is fortunate that this call came when it did. While some of us may be amused and amazed at what AI can do, it has also made certain jobs and professions obsolete. The mantra used to justify this is that AI increases our productivity, but in reality it can also mean that the human is no longer needed in the loop.

That is the question. If the human in the loop is the weak link, what will AI do about it? One not-so-distant example is the Boeing 737 MAX crashes, which happened when the human pilots could not override the computer software that was flying the plane. Humans, of course, have a primal instinct for self-preservation. Although we have also seen instances where pilots deliberately crashed their airplanes (as during 9/11), the question is whether an AI has that same sense of wanting to preserve the weakest link in the chain.

Science fiction films are full of such scenarios. Many people remember The Matrix and The Terminator as examples of AI gone berserk. Before ChatGPT, we did not really think of these movie scenarios as realistic, even with driverless cars already on the road. But the way ChatGPT has taken over the world with almost human-like answers is showing some of the possibilities, as well as the potential disasters. Its adoption, from zero to 100 million users in a matter of months, boggles the mind and is every marketer’s envy.

Our culture is filled with sayings like being our “brother’s keeper” and “do unto others as you would have them do unto you.” Our laws and religions are designed to keep us from harming and killing each other. Consultants make a lot of money developing team dynamics. Words like “sympathy” and “empathy” are part of our vocabulary. This is because for humanity to survive, we need to take care of and care for each other.

AI may seem human, but it is not. It decides based on the logic it derives from its training models. Unless we develop AI as we do children, with boundaries and an orientation to work for the benefit of humans, it will simply view humans as a source of inputs and a destination for its outputs. Unless the protection and welfare of humans is part of its DNA (figuratively speaking), making a logical decision to eliminate or marginalize us will be of no consequence to it.

Doctors have a code of “first, do no harm.” Perhaps it is time for AI to adopt a similar code, lest we end up with real-life versions of The Terminator or The Matrix in our midst.