Machines, it turns out, think and learn in pretty much the same ways that humans do. They also possess some of the same weaknesses.
The arrival of ChatGPT has thrust artificial intelligence into the public consciousness, but AI is already deeply entrenched in our lives. So much so, says David Horton, a futurist and administrator at Radford University in western Virginia, that any thoughts of legislating it out of existence are completely unrealistic. That’s not necessarily a bad thing to Horton, who acknowledges the challenges ahead but ultimately comes down on the side of optimism. He recently shared with Clay some of the ways in which AI will impact our future, and how that future depends a great deal on the present-day establishment of rules of engagement. This interview has been edited for length and clarity.
Clay: What do we mean when we say artificial intelligence?
David Horton: Artificial intelligence is primarily centered around machine learning, where computers detect patterns in data sets and iteratively refine their predictions. Here at Radford University, we have a program that focuses on the collection, organization, and utilization of data to create useful outcomes. We’re talking about terabytes of data. An airplane flight of several hours, for example, can generate data that allows an airline to extrapolate when tires need to be changed or when parts of the plane need to be modified or adjusted, minimizing downtime for the plane. That’s what artificial intelligence is: taking data sets, looking for patterns, determining what can be extrapolated, and then producing something useful.
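The airline example can be made concrete with a toy sketch. Everything here is hypothetical and illustrative rather than from the interview: the function name, the wear numbers, and the service threshold are invented. It shows only the extrapolation step Horton describes, turning past per-flight data into a maintenance prediction.

```python
# Toy predictive-maintenance sketch (hypothetical values throughout):
# average past per-flight tire-wear readings to estimate a wear rate,
# then extrapolate to predict when wear will cross a service threshold.

def flights_until_service(wear_readings, current_wear, service_limit):
    """Estimate how many more flights remain before the tire reaches
    its service limit, given past per-flight wear readings (e.g. mm of
    tread lost per flight)."""
    if not wear_readings:
        raise ValueError("need at least one wear reading")
    rate = sum(wear_readings) / len(wear_readings)  # avg wear per flight
    remaining = service_limit - current_wear
    if remaining <= 0:
        return 0  # already due for service
    # Round down: schedule maintenance before the limit is crossed.
    return int(remaining // rate)

# Example: roughly 0.25 mm of tread lost per flight, 2.5 mm of usable
# tread left before the (hypothetical) 10 mm service limit.
print(flights_until_service([0.20, 0.25, 0.30],
                            current_wear=7.5, service_limit=10.0))
```

A real system would model many more signals (temperature, landing weight, runway conditions) and learn the wear model from fleet-wide data rather than a simple average, but the shape of the idea is the same: past data in, a serviceable prediction out.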
Clay: Many have come to view AI as a dangerous and even sinister thing. Yuval Harari, the author of Homo Deus, says that we should move to outlaw AI while we still can.
David Horton: Like any tool, it comes with both opportunities and challenges. One of those challenges is to determine the rules of engagement. It’s much like the atomic energy revolution of the early to mid-20th century that provided the opportunity to power entire cities, but also brought great destructive power. It depends on how it’s used. For years the science fiction trope has been that artificial intelligence and robots are a threat to humanity because we deploy human faults and traits onto those things. There are hyped stories about chatbots gone awry, but so far it hasn’t really gone that way. We’re just at the dawn of the day with artificial intelligence. In the next few years and over the next few decades, we’re going to see changes in every facet of life.
Clay: ChatGPT is currently getting a lot of media attention.
David Horton: ChatGPT is a wonderful imitative software package, trained on millions of posts and articles drawn from the internet. It points to existential issues about the definition of original thought and the way in which we learn. Much of what we as individuals produce is based on what we’ve read, what we’ve experienced, and what we’ve learned from others, and that’s how machine learning and artificial intelligence are modeled. So you should not be surprised if at some point you recognize some exact language from your own work in a ChatGPT-produced article.
Clay: In other words, our brains do the same type of networking, but on a much more intuitive and smaller scale.
David Horton: That’s right. Artificial intelligence hasn’t quite made that leap to original thought. It may occur with the development of quantum computing, which creates multifaceted, multi-armed relationships where things are connecting in different ways all at once. We have to realize, however, that ChatGPT productions can contain inaccurate statements. You have to be knowledgeable enough to evaluate what you’re getting from an artificial intelligence. For example, I serve as mayor of the city of Radford, so I asked ChatGPT to write a press release about my tenure in that office. It did a pretty good job through a couple of paragraphs, but then it went off the rails and included things with which I never had any involvement. But that same exact thing can be said of items created by people. We are fed information all day from a variety of sources, and we process that information to discern what we think is the truth. Artificial intelligence works the same way.
Clay: There’s been a lot of talk about the implications of all of this for higher education.
David Horton: Just as there are tools for creating documents, there are tools for detecting artificial intelligence as the source of documents. This is an evolving process, but it doesn’t exist in a vacuum. When a student submits a piece of work, you know things about that individual and the way in which they communicate, and sometimes what you know doesn’t match up with the end product. That’s one big piece of this. Another is the way in which we make those assignments. If a topic is universal enough to lend itself to this sort of writing, you might instruct students to write something “based on what we’ve discussed in class,” or “based on your experiences.” Require them to explain their reasoning. It becomes a very different kind of essay. It’s not impossible for that to be replicated by an artificial intelligence, but it integrates the person and the experience more into the process.
Clay: One area where it seems that AI could have a positive and even revolutionary effect would be healthcare delivery.
David Horton: Without a doubt. Time is everything in diagnosis, and AI can look at patterns and detect the propensity for a specific cancer 20 or 30 years prior to actual diagnosis. With just a small treatment, you’d be able to totally avoid what would ultimately have been a cancer diagnosis. And with chemotherapy, imagine drug delivery systems smart enough to analyze cells and know exactly where to deploy the medicine. We’ll be able to take an almost molecular-sized robot called a nanobot, fill it with an atomic medicine that is the cure for a particular cancer, and send it in to destroy those cells. Chemotherapy now affects both healthy and cancerous cells, but these nanobots will only deploy the medicine when they recognize the cancerous cells.
Clay: The manipulation of photography has been around for a long time, but it seems that AI could take that to new levels?
David Horton: We’ll see exponential growth with this. There’s already the ability to create an amalgam of yourself based on pictures. You upload 10 or 20 pictures of yourself, and AI can create images that look like photographs of you in different settings. You can be a superhero, a warrior, whatever you want. There are new things coming on the market every day that allow for the original generation of images, voices, music, scripts, etc. We will likely see a fully AI-generated movie within the next five to ten years. We could have Clark Gable and Arnold Schwarzenegger and Marilyn Monroe appearing in this film, with voice and image completely and originally generated.
Clay: A number of years ago there was a machine-written symphony that was performed for a live audience that loved it until they were told that it was machine-generated. They became very angry when they were made aware of how the music was created, arguing that it offended the notion of the creative spirit.
David Horton: That symphony was created by feeding the computer thousands of symphonies and letting it process the patterns of pieces that were considered to be great works. One part of the perceived offense is that you no longer require a human to craft and create the work. It’s no longer someone’s story, someone’s experience. The other problem goes back to using the works of other artists to generate this new thing. Are there credits or permissions involved? Also, when a machine creates something, listeners have an easier time criticizing it because they perceive it as the output of a coldhearted machine. But we will learn to live with that as we move forward. We have to realize that people today are reading and watching and listening to things generated by AI, and they have no idea of the degree of artificial intelligence that was involved.
Clay: What about intellectual property issues? Consider an artist like Prince. Though he died in 2016, machines will eventually be capable of writing hundreds of new “Prince” songs. They wouldn’t be his, but they would sound as if they were.
David Horton: That’s happening now, and it does present all kinds of challenges. The dark side is that AI can be used to steal intellectual property and utilize it in ways that artists never intended. It can be used to present misinformation. It can be used in all kinds of nefarious ways. The United Nations is beginning to address this on a global level. There’s a website, AI 4 Good, that will take you to the UN-sponsored AI for Good Global Summit taking place in July. They’re essentially trying to create the rules of engagement. It’s a struggle because the genie is already out of the bottle on so much of this, but we have to look at how we can use this to better the world without allowing the destructive elements to take control.
Clay: As individuals we learn to use common sense and to give proper weight to the evidence in order to make reliable evaluations of the material we encounter. Are the machines going to become sophisticated enough to do that sort of weighting?
David Horton: The machine will always be a tool that some will use to push an agenda. At one time, Walter Cronkite was the most trusted man in America. If Walter said it, it must be true. That sort of trust is going to become much more valuable as we move forward. The individuals who bring us the news will matter that much more. We will be relying on their judgment and their trustworthiness to help us in this process. Critical thinking is going to become even more crucial. When you’re a journalist, an educator, a historian, an author, any kind of expert, you’ll have to multi-source things and evaluate each piece. Those sorts of individuals are going to become that much more important in a world where so much information can appear to be factual.
Clay: Can this be regulated?
David Horton: They are going to try, but as I said, the genie is already out of the bottle. People will have the ability to use AI regardless of what we do through law or treaty. Still, we have to do our best. I would not be surprised to see us turn to an artificial intelligence to help us figure out how to deal with artificial intelligence. There’s no single solution. It will be interesting to see the outcomes of the UN global summit and what the rules of engagement begin to look like. It will be impossible to make AI go away. It has already existed far longer in reality than it has in the public consciousness. It’s just now becoming apparent to us, but it has many layers and has been going on for many years. I don’t automatically see it as a doomsday scenario, but it’s a challenge we’ve got to face.
You can listen to the full episode here: #1554 Artificial Intelligence and the Future of Civilization.