Temple Grandin: A.I. and ChatGPT are great, but we still need experts.

May 03, 2023

I first became aware of A.I. in 1968, when I saw a movie that affected me deeply, 2001: A Space Odyssey, by the director Stanley Kubrick. I was 21 years old. I loved science-fiction movies, but this one had a special significance.

As a person with autism, I’m more rational and fact-based than emotional and feeling-based, and my speech has been described as monotone or unmodulated. In high school, some of the kids called me "robot" and "tape recorder." That's part of why I related to HAL, the sentient computer who, with his steady voice and hyper-logic, helps the astronauts with their mission (until he doesn't). But I didn't give A.I. all that much attention until 1997, when the IBM computer Deep Blue beat world chess champion Garry Kasparov. That was a game changer. Then, in 2003, I was blown away when two Mars rovers were sent to spend their days navigating the surface of the planet on their own, without human controls.

Since then, A.I. has found its way into our everyday lives, powering Siri, self-driving cars, pop-up ads on social media, algorithms for personal investments, and much more. Most recently, the tech company OpenAI introduced ChatGPT, giving the public free access to this remarkable technology. Already, ChatGPT has aced the bar exam.

I wanted to know what expertise ChatGPT could summon in my line of work. I asked it to write a short paragraph on the behavioral principles of cattle handling. It did an accurate job of summarizing one of my papers, demonstrating that mathematical analyses are only as good as the data put into them. Then, using wording I do not normally use, so that it could not merely look up my own work, I asked it to write a short paragraph on good cattle-handling methods. It wrote an accurate, concise paragraph about the behavioral principles involved. For both paragraphs, it provided references that were real and relevant. However, in one case it included an old diagram of a seriously flawed facility design, one that a newcomer to the industry would not recognize as flawed. For any field that uses A.I., we will still need experts.
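
For readers who want to repeat this kind of experiment programmatically rather than through the chat window, here is a minimal sketch using OpenAI's Python package. The model name and prompt wording are illustrative assumptions, not the exact ones I used, and the output still needs an expert's review.

```python
# A sketch of the kind of query I ran, using OpenAI's Python package.
# The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; use whatever is available to you
    messages=[
        {
            "role": "user",
            "content": (
                "Write a short paragraph on good cattle-handling methods, "
                "citing the published literature."
            ),
        }
    ],
)

# Print the answer for review; an expert still has to check the claims
# and references before trusting them.
print(response.choices[0].message.content)
```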

So far, most of the concern about A.I. has focused on whether machines will take away people's jobs, whether A.I. will make the human race redundant. I believe that this worry is misplaced. Every technological revolution eliminates some jobs and creates others. The real dangers, I believe, lie elsewhere.

Because I am not only autistic but a highly visual thinker, meaning comes to me first in images. My mind is like a simulator that runs the equivalent of videos, showing how things happen, especially how they may go wrong. It's the antithesis of the highly abstract reasoning that led to the radical breakthroughs in A.I. But it's visual thinkers like me who are specially equipped to make sure that this progress doesn't come at too high a toll.

To most people, A.I. is an abstraction, something akin to magic. That's dangerous. We would do better to think about it as we do with physical infrastructure, like roads or processing plants, with their vulnerabilities and real-world consequences. A.I. helps our world run—but any large and powerful utility can go awry. Over my 50 years working in industry, I have seen firsthand the chain reactions that are set off by human failure, and the vulnerability of our essential infrastructures, such as water and electric power systems. These systems are now under further threat if, for instance, hackers take control of the computers that control the equipment—an ever more real danger now that A.I. programs can write computer code. The best way to protect our water and electricity grids is to use old-fashioned electromechanical controls that will shut off vulnerable equipment if it gets too hot, spins too fast, or builds up excessive pressure.
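
The protective rule such controls enforce is simple enough to write down. Here is a sketch of the trip logic in Python, purely for illustration; the whole point of an electromechanical interlock is that it is hardwired rather than programmed, and every threshold value below is hypothetical.

```python
# Illustrative only: the trip rule an electromechanical interlock enforces.
# In a real plant this lives in relays, fuses, and mechanical governors,
# not in software. All threshold values here are hypothetical.

MAX_TEMP_C = 90.0         # hypothetical temperature limit
MAX_RPM = 3600.0          # hypothetical rotational speed limit
MAX_PRESSURE_KPA = 700.0  # hypothetical pressure limit

def should_shut_off(temp_c: float, rpm: float, pressure_kpa: float) -> bool:
    """Trip if the equipment runs too hot, spins too fast,
    or builds up excessive pressure."""
    return (
        temp_c > MAX_TEMP_C
        or rpm > MAX_RPM
        or pressure_kpa > MAX_PRESSURE_KPA
    )

# Example: a reading that exceeds the pressure limit trips the interlock.
print(should_shut_off(temp_c=85.0, rpm=3500.0, pressure_kpa=750.0))  # True
```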

I have had fun playing around with ChatGPT. In addition to asking it about my field, I sought its thoughts on purple flying zebras. (It determined that such creatures are fictional and feed on clouds and rainbows.) But I also share the alarm expressed by Sam Altman, the head of ChatGPT's creator OpenAI, when he recently told the Senate, "If this technology goes wrong, it can go quite wrong." Showing the legislators how ChatGPT could concoct an entirely convincing senator's speech, Altman told them that A.I. should be blocked from self-replicating or escaping into the wild. An A.I. program that made a clone of itself could hack its way into a data center, doing unimaginable damage if human operators are not alert enough to intercede. If we are going to benefit from A.I., especially now that the tools to use it in unique ways are widely available, we not only need to protect it as crucial infrastructure, but we must also train everyday citizens in basic A.I. self-defense.

Anyone working in a field that relies on scientific facts must know how to search scientific databases to verify claims themselves. For nonscientific but controversial subjects, people need to learn to check claims against multiple credible sources of information. Good software for detecting fake photos, which can drastically amplify fake news, is available; we have to educate people on how to use it. And people with the skill of visual problem-solving, who can simulate unforeseen consequences in their mind's eye and envision solutions in real time, must be put to good use. Danger is not an abstraction. We need people who live where I live: in the world of practical things. Whether it's water pumps or power grids in the outside world or basic plumbing and electrical problems in our homes, people need to know how to fix things. I am concerned that some kids can now code in middle school but have never picked up a hammer or threaded a needle. If they become policymakers, it will be very difficult for them to make effective policies about things, like infrastructure, that depend on hands-on knowledge and abilities.
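
As one concrete example of verifying a claim, a cited paper's title can be checked against Crossref's public database to confirm the reference actually exists. A minimal sketch, assuming the requests package; the example title echoes my own field and is illustrative only.

```python
# A sketch of checking that a cited paper exists, using Crossref's
# public REST API (api.crossref.org). The example title is illustrative.
import requests

def find_reference(title: str, rows: int = 3) -> list:
    """Search Crossref for published works matching a citation's title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

# A human still has to confirm the match: print title and DOI for review.
for work in find_reference("Behavioural principles of cattle handling"):
    matched_title = (work.get("title") or ["<no title>"])[0]
    print(matched_title, "->", work.get("DOI"))
```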

Can we learn everything we need to protect ourselves faster than the rate of A.I. advances? The consequences of the technology are staggering. Researchers are developing methods for using A.I. to understand and manipulate molecular biological processes. What if somebody used it to create a dangerous new virus? The ultimate danger, according to reporting on Altman's remarks, is that A.I. could "manipulate humans into ceding control." What if it tricked someone into launching multiple missiles with atomic warheads by faking radar reports of incoming weapons? Will we have software able to detect the manipulation, or satellite technology able to detect the heat signatures that would prove or disprove the radar reports? Will we have people trained to use this tech, and the requisite communications network, to make sure no one misses a step?

These questions are deeply complex, but by considering concrete fixes, we have more control than we think. Maybe we will need to go back to the old-fashioned "red phone" landline that used to connect the presidents of nuclear powers: a red phone routed through non-computerized electromechanical equipment. Watching 2001: A Space Odyssey, I cried when David had to turn off HAL after it had killed the other astronauts. HAL had an off switch. So must A.I.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.