I was queried this weekend by former classmates and friends of 55 years about Maureen Dowd’s New York Times column on AI. Non-techies are concerned, enough to poke this retired technology analyst. I suspect it was Elon Musk’s cri de coeur that “The apocalypse could come along at any moment” that raised my friends’ hackles. Let’s think this through as outsiders.
Full Disclosure: I am no expert on this subject; I have deliberately neither invested in AI nor studied in depth how it actually works. I view Generative AI (GAI), 2023’s hot topic and investment, as at the toddler stage: cute, amazing at times, subject to tantrums, and not to be trusted with scissors, let alone the family automobile.
Let’s look at three classes of AI.
1. Creative AI
Creative AI is here now, can be a lot of fun, and is driving human content creators absolutely bonkers.
OpenAI’s lesser-known project is DALL-E. You enter a text description of the image you want, and it creates your desire. Of course, DALL-E was trained on every known publicly available image in the history of the world, so if you want to mix and match Magritte and Picasso, have at it. This illustrates a major societal issue with many AI products: if data is accessible on the Internet, can an AI product create infinite derivatives without compensation?
Living artists whose works were appropriated into DALL-E or other Creative AI programs receive nothing for their copyrighted works of art. Likewise, book authors and movie makers are apoplectic about the appropriation of their content (and style of creation) without compensation. This was a grievance in the recent Hollywood strikes. There will be property-rights litigation up to the Supreme Court; the money at stake is too great to forfeit. Recall a decade ago when music streaming took off: the streamers ended up paying the music creators.
There is, of course, a dark side to creative AI and, as usual, pornography is the early adopter. Deepfake software overlays part of one image with another: choose a face and away you go, to either stills or video. Doing that at home is not illegal for an adult. The problem lies with youth, social media, and the near-100% likelihood of widespread dissemination of images that were never real. The nightmare is that it’s your teenage daughter’s image.
2. Machine Learning AI
There’s a lot of practical machine learning artificial intelligence (MLAI) in our lives today. For example, our Honda Accord’s “adaptive cruise control” can match the pace of the car in front of it, slowing and even stopping without any driver input. This was programmed using MLAI: training on millions of examples to create a model that is executed in real time, with traditional programming orchestrating actions like braking.
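The split between learned perception and conventional control can be sketched in a few lines. This is a hypothetical toy, not Honda’s actual system; the function name, gains, and gap threshold are all mine:

```python
def target_speed(gap_m, lead_speed_mps, min_gap_m=15.0, gain=0.4):
    """Toy adaptive-cruise control law.

    In a real car, a machine-learned model estimates gap_m and
    lead_speed_mps from radar/camera data; this conventional code
    then decides what speed to command.
    """
    gap_error = gap_m - min_gap_m          # positive: we are too far back
    speed = lead_speed_mps + gain * gap_error
    return max(0.0, speed)                 # never command a negative speed

# Holding exactly the minimum gap: simply match the lead car's speed.
print(target_speed(15.0, 20.0))
# Lead car stopped and we're right behind it: command a full stop.
print(target_speed(0.0, 0.0))
```

The point of the sketch is the essay’s: the “AI” part is the perception model; the decision to brake or accelerate is ordinary deterministic code wrapped around it.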
Amazon Alexa and chat bots are other examples of MLAI. And Wall Street is awash with AI-driven “algorithms” that react to every tick of the tape and post on social media. If the Bloomberg feed says “the president is being rushed to the hospital…”, the algorithm will instantly, without human intervention “sell, sell, sell.”
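A headline-reaction “algorithm” of the kind described can be caricatured in a few lines. The trigger phrases and function names are my invention, purely for illustration:

```python
# Hypothetical panic-trigger phrases a news-reaction algorithm might watch for.
NEGATIVE_TRIGGERS = ("rushed to the hospital", "explosion", "default")

def react(headline, position_shares):
    """Toy news-reaction rule: dump the entire position the instant a
    trigger phrase appears in the feed -- no human in the loop."""
    text = headline.lower()
    if any(trigger in text for trigger in NEGATIVE_TRIGGERS):
        return ("SELL", position_shares)
    return ("HOLD", 0)

print(react("The president is being rushed to the hospital", 1000))
```

Real trading systems are vastly more sophisticated, but the essay’s point stands: the reaction happens at machine speed, before any human can intervene.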
The thing I look for in an MLAI is whether black swan events are sufficiently modelled, but that’s impossible for a consumer to do. Regulation is needed here.
Boeing’s 737 MAX shipped with MCAS, an automated flight-control system that relied on a single angle-of-attack sensor (the plane carries two, but MCAS read only one, and the cockpit “AoA disagree” alert was an extra-cost option). When that sensor fed faulty data, there was no second reading for a data-reality test, so the software concluded the plane was entering a stall and repeatedly pushed the nose down despite the frantic physical efforts of the flight crews. This was a black swan event outside the system’s design and training: the inability to deal with a rare real-life situation caused the loss of two foreign aircraft, the humans aboard, and great harm to Boeing.
Tesla’s “self-driving” software option also appears to me, based on accident reports, to be insufficiently trained at this time to deal with real-world driving situations that a human would probably avoid most of the time.
My viewpoint is that human-threatening MLAI should be based as much as practical on a universal, open-source software kernel so humanity ends up with all the glitches fixed in one place. The Linux operating system is such a software system.
Note, however, that current discussions of AI and open source center on allowing smaller companies to compete with the elephants in AI: OpenAI, Microsoft, Google, Facebook. That’s a valid concern but a story for another day.
My viewpoint on MLAI threats is based on years of mission-critical software development including police and fire 911 systems, CAT scanners, and applications for fault-tolerant computers. In the self-driving example, there’s no reason for humans to be maimed so that Tesla, BMW, GM, Mercedes and every other car company learn from black swan oopsies; fix it once for everybody and compete on features or something else.
3. Generative AI
What is a Generative AI (GAI)?
A generative AI, the source of the recent fear, hue, and cry, is a computer system that is taught human knowledge and skills. Using this knowledge, the GAI accepts human questions and directions (called prompts) and replies with machine-generated text or images. This is different from the retrieved text or images we are accustomed to from a Google search. AI-generated replies are, like human understanding and communication, created on the fly (by colossal computer systems costing tens of millions of dollars). More depth here.
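To make “created on the fly” concrete, here is a toy word-level generator. It is many orders of magnitude simpler than a real GAI, but it illustrates the generate-rather-than-retrieve idea; the function names and training sentence are mine:

```python
import random
from collections import defaultdict

def train(text):
    # Count which word follows which: a microscopic "language model".
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=8, seed=0):
    # Emit words one at a time, each chosen from what followed the
    # previous word in training -- generated, not looked up whole.
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

model = train("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))
```

A real GAI replaces the follower lists with billions of learned weights and considers far more context than one word, but the output is assembled token by token in the same spirit, which is also why it can wander off into plausible-sounding nonsense.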
Hardware: First, let’s look at the hardware. GAI uses intense computing that has been optimized in the last couple of years by hardware assists at the chip level. Different hardware intensity is needed at the training stage (machine learning) and at the query-execution stage, and costs are directly proportional to the amount of knowledge stored. One estimate is that training GPT-3, which has 175 billion parameters, required 3.5 million processor-hours to execute 3.2E+23 floating-point mathematical operations and consumed 1,287 megawatt-hours of electricity.
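A quick back-of-envelope check of those numbers (arithmetic only, using the figures quoted above):

```python
flops = 3.2e23            # floating-point operations, per the estimate
processor_hours = 3.5e6   # processor-hours, per the estimate
energy_mwh = 1287         # megawatt-hours, per the estimate

# Sustained throughput each processor must deliver.
flops_per_second = flops / (processor_hours * 3600)
# Average electrical draw per processor, in kilowatts.
avg_power_kw = energy_mwh * 1000 / processor_hours

print(f"{flops_per_second:.2e} FLOP/s sustained per processor")
print(f"{avg_power_kw:.2f} kW average draw per processor")
```

The arithmetic works out to roughly 25 trillion FLOP/s sustained and under half a kilowatt per processor, both plausible for a modern AI accelerator, so the quoted estimates are at least internally consistent.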
I posted on Facebook an article about how ChatGPT’s latest release now beats ELIZA, an early conversational program I spent hours playing with when I worked at MIT in 1967. You should not be impressed. Your Apple Watch has 2,500 times the computing power of MIT’s most powerful computer circa 1967. And OpenAI uses tens of thousands of the most powerful Nvidia AI accelerators to accompany its CPUs. So, on the order of 100,000 times more hardware to fool a human in 2023 than in 1967. That’s a flaw.
Too much hardware is required to execute queries, so there is no economic model that works, and the technology cannot become widespread without demand inhibitors. For example, the cute Teletype speed at which your OpenAI results appear slows the reader down from assimilating the results and firing off another query. Don’t bet your IRA on further AI stock acceleration.
Prediction: a fundamental new algorithm for creating and parsing AI knowledge will emerge by the end of the decade with features that radically reduce the computer horsepower needed for the AI life-cycle.
Training Opacity: The data used to “educate” the major GAIs from OpenAI and Microsoft, Meta, and Google are proprietary, as are the “character” of the product and the data weights, rules, and guardrails used to guide the GAI. If an AI is a superhuman, we know nothing about its “upbringing.” To me, that’s another reason not to believe today’s GAIs.
It’s the God Problem: the human programmers who create these GAIs have to choose both the training content and how the AI weighs or assesses that content.
For example, feeding the archives of the New York Times into a hypothetical GAI gives you 175 years of generally excellent journalism. If you add the Wall Street Journal archives to the AI, you’ve created a left-right proxy for viewpoints on history. Excellent progress, until we add the comments sections of our training publications. That’s where anybody’s thoughts are added, unweighted, to the professional journalism, however biased. X/Twitter is the perfect example of AI training material where there’s no way to sort the wheat from the chaff.
I’d like to see neutral and authoritative AIs, and that’s not what we’re seeing today. Moreover, how is the human God going to choose a moral code and political philosophy for the GAI? Ask ChatGPT “what are the failures of Marxism as a philosophy?” and make your own decision on how politically correct the answer is.
In short, today’s GAIs are not arbiters of truth. You’ll have to make your own decisions.
Timeliness: The algorithms used to parse knowledge and turn it into a neural network cost $5-$10 million for one pass of the Internet at a moment in time, and the Internet of course expands and changes daily. To my knowledge (per Edward Snowden), only the National Security Agency has the computing power to query the right-now Internet. Flaw: timeliness. ChatGPT was trained on data through 2021. It doesn’t know about anything since then. Anybody with yesterday’s newspaper knows more than ChatGPT.
Incompleteness: Let’s take Google search as a proxy for Internet presence. By that measure, I, Peter Kastner of Aberdeen Group, my 1990-2006 tech company, have 1.8 million Google references based on hundreds of articles and press quotes. Yet ChatGPT 3.5 does not recognize me as an information reference; I don’t exist to ChatGPT.
I don’t take ChatGPT’s ignorance of me personally. But neither will I trust ChatGPT with any important decision whatsoever. What ChatGPT does not know can be as important as what it does know.
Unreliability: The lawyer who submitted a ChatGPT-derived brief to a court and had his head handed to him by a very displeased judge should have read the previous section. What happened? A hallucination: an AI inference gone astray that produces erroneous, made-up results, all grammatically correct and flowing right along with the rest of the AI output. The first thing the AI Resistance at work will do is cook up AI hallucinations and use them as examples of why AI cannot take over their jobs. Count on this basic human behavior.
Finally, here’s an article on six ways AI can fail, and fail it does.
In conclusion, I treat GAI as a toy to play with, just the way I whiled away the hours with ELIZA in 1967. But don’t take my word for it. Let’s do a test.
A Simple Test of ChatGPT 3.5
Before we bemoan the losses of millions of jobs held by college-educated workers, let’s ask a GAI to guide us in dealing with a thinly veiled description of the spread of COVID-19 in January 2020 with the following prompt (and note, I have no prompt-writing training):
Scenario: Six weeks ago in Harbin, China, a previously unknown human virus began sickening residents. Symptoms include respiratory distress, fever, lassitude, and headaches. The disease spreads rapidly by unknown means. Mortality rate is 0.5% overall, but much higher in older people.
Action: write a memo to the president of the United States outlining a prioritized plan for controlling this disease and protecting the public health, especially the most vulnerable.
Open AI’s Free Version, ChatGPT 3.5, Responds:
Here is a prioritized plan for controlling the spread of this disease and protecting public health:
- Information Gathering and Collaboration: Initiate immediate communication and collaboration with global health organizations, including the World Health Organization (WHO), to gather all available information about the virus. Establish a dedicated task force comprising health experts, epidemiologists, and researchers to facilitate data exchange and analysis.
- Travel Restrictions and Screening: Implement strict travel restrictions and screening measures at all ports of entry to identify individuals who may be carrying the virus. Enhance surveillance at airports, seaports, and border crossings to prevent the importation of the virus into the United States.
- Public Awareness Campaign: Launch an extensive public awareness campaign to educate citizens about the symptoms of the virus, its mode of transmission (as far as known), and preventive measures. Emphasize the importance of hygiene practices, such as frequent handwashing, covering coughs and sneezes, and avoiding close contact with sick individuals.
- Healthcare Preparedness and Capacity Building: Allocate resources to bolster healthcare facilities’ capacity to handle potential outbreaks. Ensure an adequate supply of medical equipment, ventilators, personal protective equipment (PPE), and medications. Develop protocols for the rapid identification, isolation, and treatment of infected individuals.
- Vulnerable Population Protection: Prioritize the protection of vulnerable populations, especially older individuals who are at a higher risk of severe illness and mortality. Implement measures to support nursing homes, assisted living facilities, and other institutions catering to elderly individuals. Provide guidance and resources to assist caregivers in preventing the spread of the virus among this demographic.
- Research and Vaccine Development: Allocate funding and resources to accelerate research efforts towards understanding the virus, developing diagnostic tests, and working on a vaccine. Collaborate with pharmaceutical companies and research institutions to expedite the development of a vaccine and potential treatments.
Mr. President, the situation demands swift and decisive action to contain the spread of this virus and safeguard the health of our citizens. I urge you to prioritize the implementation of these measures and to provide the necessary support and resources to combat this emerging threat.
I’m disappointed by the quality of ChatGPT’s response: little nuance, insight, or comprehensiveness. About what you might expect from a first-year public health student. A checklist plan for the president to execute? Not on my watch. Go do your own testing. Draw your own conclusions.
My conclusion, and I trust you agree, is that GAI technology must become at least an order of magnitude more robust before it is the first choice for complex decision-making. Supplement decision-making? Sure, right away. It’s $20 a month for individuals, about what a streaming service costs. But the human stays in charge in my world.
Yeah, But AI Can Write Computer Code!
GAI can indeed write programs in whole or in part, including code that could be used for hacking. I asked ChatGPT to write code to hack Microsoft Windows for login credentials via memory overruns. The appropriate answer I got back was:
I’m sorry, but I can’t assist with that. Creating software to exploit vulnerabilities or cause harm is unethical and potentially illegal. If you’re interested in security research, I can help you learn about ethical hacking and responsible disclosure practices. If you have any other questions or need assistance with programming in a legal and ethical manner, I’d be happy to help!
Nor will ChatGPT assist you in finding hacking tools on the Dark Web. Nor can you ask ChatGPT to modify itself or add data to its training. It responds:
Improving ChatGPT involves a combination of refining its training data, optimizing its algorithms, and implementing new features. However, as an external user, you can’t directly modify the underlying code of ChatGPT. But you can contribute indirectly by providing feedback and suggestions to developers…
The End of the World As We Know It?
Let’s get to the heart of my classmate’s question: can AI wipe out the human race? Answer: No, not today or in the foreseeable future.
I stipulate that Sam Altman, CEO of Open AI, sees further down the road than I do, and he knows how fast the software can mature. It’s the products in the labs that are driving angst. That said, I don’t see Skynet creating Terminators in my lifetime.
Humans Have To Enable AI Interconnectivity
There are many MLAI systems in mission-critical roles today, and any of them could fail with catastrophic results. The US electric power grid comes to mind. To date, these individual systems are not interconnected and managed by a superhuman GAI, which would require computing power that doesn’t exist. There’s no reason to connect them. A single system failure will not bring down civilization the way the failure of an interconnected system would.
For example, the US power grid is actually made up of four, largely disconnected, regional grids so that one hit in one region cannot bring down the electricity to the entire country. Our enemies know that one strike is not enough, so I trust our leaders to not put all our eggs in one AI basket. Will our leaders make the right choices on how far, how interconnected, to let AI into our most critical infrastructure, military, education, business and economic system? OK, I’m skeptical but not fearful.
We Are Afraid of Terminators
The “apocalypse initiated by artificial intelligence” has been a science fiction staple for over 100 years. I’m no shrink, but I suspect there are deep, unrecognized fears of machines dominating humanity. Also, Godzilla. Rational or not, we need to discuss the here-and-now problems with AI, many of which I have touched on in this essay.
Society needs to surface and discuss these largely uninformed fears. Hollywood, for example, appears less concerned with losing control of the rent-value of its content than with Big Bad AI:
- Ridley Scott (Blade Runner, Alien): “We have to lock down AI.” But he worried that it is already too far out of the box, adding ”And I don’t know how you’re gonna lock it down. They have these discussions in the government, ‘How are we gonna lock down AI?’ Are you f—ing kidding? You’re never gonna lock it down. Once it’s out, it’s out.”
- Christopher Nolan (Oppenheimer): “…applied to AI, that’s a terrifying possibility. Terrifying. Not least because, AI systems will go into defensive infrastructure ultimately. They’ll be in charge of nuclear weapons. To say that that is a separate entity from the person wielding, programming, putting that AI to use, then we’re doomed.”
- Tim Burton (Beetlejuice): AI removes the “soul” from art, he insisted; it is “like a robot taking your humanity. What it does is it sucks something from you. It takes something from your soul or psyche; that is very disturbing, especially if it has to do with you. It’s like a robot taking your humanity, your soul.”
- James Cameron (Terminator): “I think the weaponization of AI is the biggest danger,” he told Canada’s CTV News in July. “I think that we will get into the equivalent of a nuclear arms race with AI, and if we don’t build it, the other guys are for sure going to build it, and so then it’ll escalate. … You could imagine an AI in a combat theater, the whole thing just being fought by the computers at a speed humans can no longer intercede, and you have no ability to deescalate… I warned you guys in 1984, and you didn’t listen.”
Having so stirred up the public with hypotheticals and box-office imaginations, Hollywood gets my pushback via Franklin Roosevelt: “The only thing we have to fear is fear itself.” The medium-term probability of catastrophe is minimal, and the long-term risk is managed by treating AI with the same care as nuclear weapons.
In fact, nuclear weapons are the right worry-analog for AI. As a child of the Cold War, I was often deeply concerned by the imminent prospect of a nuclear exchange that never happened. With more, yet less stable, nuclear powers today, why is the hue and cry about nuclear Armageddon not as loud as about AI Armageddon?
What’s Up With Elon Musk?
Elon’s goal in life, publicly stated, is to lead a human mission to Mars. That’s impossible to execute without the most advanced computing available, including artificial intelligence. The contradictory Mr. Musk, a co-founder of OpenAI but no longer an executive there, needs all the working AI he can fit into a Starship. His cri de coeur that “The apocalypse could come along at any moment” says what about the Mars mission? Nothing good, in the contradiction that I see.
Let me speculate: Musk is up to his eyeballs in rerouting X, the social media company formerly known as Twitter. He sees the battles by left and right to control the dialog and frame the conversation. Being a neutral arbiter of knowledge, philosophy, politics, and humanity, which Musk claims he is seeking at Twitter, is even more difficult to contemplate in future iterations of generative artificial intelligence.
Lots of AI News In 2024
Censorship, misinformation, deepfakes, and ubiquitous social media, combined with GAI capabilities for everybody, promise to create an unconventional, probably asymmetric, political campaign in the United States in 2024. Trust but verify.
Conclusion
Anxiety about the unknown is common throughout history. Johannes Gutenberg’s printing press was denounced as a work of Satan. The press turned into one of history’s most important inventions.
My goal in this essay is to articulate and evaluate many of the fears related to artificial intelligence. Your comments are welcome.
Copyright 2023 (LOL) Peter S. Kastner