Pursuing Truth in a Computer-Mediated World
By Don Patterson, Professor of Computer Science
Teaching computer science is an interesting enterprise these days.
In my iOS development class last year, I was giving a lecture at the same time Apple was announcing a product release. Students in the class were stealthily following their real-time news feeds and interrupted me with critical news: the software updates seemed to invalidate everything I was teaching. I had to concede the landscape had just shifted, so I told them, “Class is over. I’ll tell you what happened tomorrow.” It’s just the nature of the business.
In your daily life, you are likely experiencing changes like these from the major technology companies, although maybe not with such sudden and immediate effects. Business circles refer to the biggest players by the somewhat threatening acronym FAANG (Facebook, Amazon, Apple, Netflix, Google), although others are poised to influence our lives as well. Curiously, they are all making big bets on artificial intelligence (AI). Sometimes this is flashy. For example, Amazon is testing an AI-powered convenience store with no cash registers. You walk in, and digital eyes carefully monitor your movements as you sheepishly put your purchases in a bag. Sensors detect the change on the shelves, and you are charged as you walk out. A wide variety of companies are writing much more byzantine software that will fuel the AI-enabled future.
What has caused this industry-wide explosion in artificial intelligence? Three things have created a tipping point: algorithmic advancements now accessible to software professionals; advances in “graphics” cards; and the dawn of the age of Big Data.
Some of the fundamental technologies that produce photorealistic graphics on your computer (typically for games) turn out to be just as effective at powering AI algorithms. So NVIDIA, a company that came of age manufacturing “graphics” cards, is undertaking a major pivot, reworking existing technology to provide the brains for robotics, self-driving cars and artificial intelligence of all kinds.
Of these three factors, the abundance of data has made the biggest impact. Since 1995, the amount of data sitting on hard drives and in data centers around the world has grown exponentially to a staggering volume. While that may seem threatening, as long as the data is decentralized, concerns about surveillance and privacy are lessened. No single company or organization can see it all, and if data is collected for multiple purposes, it’s harder to fuse.
Consider some of the data Google collects. When you type a request into the Google search bar, artificial intelligence offers you a list of predicted completions. Google guesses what you’re going to keep typing, but this technology isn’t sophisticated. Google simply counts what other people have typed after starting the same phrase and gives you the most popular completions. You benefit from a huge amount of search data, not a deeply brilliant predictive algorithm. Data has been more transformative than the algorithms or the hardware because it has enabled us to see much more of what’s happening in the world. We don’t need to give computers complicated interpretive abilities, reasoning about how people think or work to fill in the data gaps, because there are far fewer such gaps. They can just watch and count what is most likely to happen.
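To make the counting idea concrete, here is a toy sketch in Python. It is emphatically not Google’s actual system; the tiny query log and the suggest function are hypothetical. But the core move is the same: tally what people typed after the same prefix and surface the most popular completions.

```python
from collections import Counter

# Toy illustration of count-based autocomplete (not Google's real code).
# This query log is made up; a real log would hold billions of queries.
query_log = [
    "weather today", "weather tomorrow", "weather today",
    "westmont college", "westmont magazine", "weather today",
]

def suggest(prefix, log, k=3):
    """Return the k most popular past queries that begin with the prefix."""
    matches = Counter(q for q in log if q.startswith(prefix))
    return [query for query, _ in matches.most_common(k)]

print(suggest("wea", query_log))
# ['weather today', 'weather tomorrow']
```

No model of human intent is involved: with enough logged queries, frequency alone produces eerily good guesses.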
For people, this surplus of data hasn’t been quite so comforting. For centuries, people have complained that each new milestone in delivering information (e.g., newspapers, telegraphs, telephones) has felt overwhelming. It’s a common historical experience. In 1971, Herb Simon wrote insightfully in “Designing Organizations for an Information-Rich World,” “In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”
That feels about right. While the huge amount of data incapacitates human attention, artificial intelligence is thriving. We are struggling as a society to deal with these two competing forces.
In response, technologists try to create AI tools to help us manage this information overload, which turns out to be a two-edged sword. Look at some of the ways AI affects the information flowing between people.
A lot of human interpersonal communication these days is computer-mediated. Back in the day, you might have a neighborly conversation face-to-face on the porch with a friend. Eventually this gave way to technology that allowed us to talk by phone, with human operators manually connecting the calls and mediating your phone conversations. But you communicated via a wire directly connecting your phone to someone else’s.
Eventually, electronic circuitry replaced the human operators, and the individual wires that carried phone calls began sharing multiple simultaneous conversations. Today, the intermediate wires also share the delivery of YouTube videos, Instagram photos and bank transactions because many calls travel over the public internet. We might have started with an unmediated conversation, but now we have computers in the middle of every call, directing the bits, sculpting the sound, and storing the messages if we don’t answer. With a computer in the middle, artificial intelligence can also become part of this computer mediation.
A demonstration at the Google I/O conference last year provided an example of artificial intelligence making a phone call on behalf of a user, not just moving the bits but actually speaking and responding to set up a haircut appointment (youtu.be/7gh6_U7Nfjs).
Text conversation has a similar trajectory, from hand-carried letters to the Pony Express to telegrams and eventually computer messaging. The first such computer-to-computer message traveled over the ARPANET, the internet’s precursor, from a UCLA student to the Stanford Research Institute in 1969: “lo.” The word was supposed to be “login,” but the computer crashed after the first two letters.
The network that connects devices on the internet uses a suite of communication protocols called TCP/IP (Transmission Control Protocol/Internet Protocol), implemented by computers that mediate our emails and text messages. So this communication, too, is accessible to artificial intelligence.
Once again, Google sees an opportunity. They developed Google Inbox, an email app that let users send rapid responses authored by artificial intelligence. While the app has been discontinued, the AI responses were so popular they became part of Gmail. If I receive an email from a colleague telling me about interesting research he has compiled, Gmail offers responses drawn from my own language and speech patterns. One set of suggestions included: “Thanks, I’ll take a look.”; “This is great, thank you!”; “Cool, I’ll check it out.” Not the meatiest replies, but better than generic suggestions such as “Yes,” “No” and “I don’t know.”
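The same counting spirit can illustrate why the suggestions sound like me. The sketch below is a deliberately crude stand-in for whatever Gmail actually does (its real system is far more sophisticated): it simply surfaces the short replies a hypothetical user has sent most often in the past.

```python
from collections import Counter

# Crude stand-in for reply suggestion (not Gmail's actual method):
# offer the short replies this user has historically sent most often.
past_replies = [
    "Thanks, I'll take a look.",
    "Cool, I'll check it out.",
    "Thanks, I'll take a look.",
    "This is great, thank you!",
    "Thanks, I'll take a look.",
]

def suggest_replies(k=3):
    """Return the k replies the user sends most frequently."""
    return [reply for reply, _ in Counter(past_replies).most_common(k)]

print(suggest_replies())
# ["Thanks, I'll take a look.", "Cool, I'll check it out.",
#  "This is great, thank you!"]
```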
The third kind of technological mediation is video calls, which also began with unmediated face-to-face conversation. Eventually we developed television. Now we engage in multi-party video conversations over Skype, Slack, FaceTime and other computer-mediated technologies.
With Apple’s Animoji feature in iMessage, you can make and share cute animated creatures instead of showing your face. It uses motion capture to track your facial movements and map them onto an animated character, which communicates in your place. This is a more playful use of artificial intelligence to make interpersonal communication better, or at least more fun.
What is happening in these three examples? Clearly, artificial intelligence is inserting itself into the content of the communication. But something else is occurring. We can answer emails and texts much faster and make appointments more efficiently. Bringing AI into our world speeds us up and scales up our impact. AI amplifies who we already are or who we are attempting to be. If I’m a toxic person, the AI will reflect my toxicity. Likewise, if I’m a kind human being, it will duplicate my kindness. Person by person, AI is speeding up, scaling up and amplifying our whole culture.
Technology rarely offers a simple fix for our problems, especially as people make it and use it. Instead of magically making all things better, it introduces new challenges. For example, human-switched phone conversations made it possible to eavesdrop on calls. My mom used to be a telephone operator, and she tells stories about her coworkers listening in on conversations between the Kennedys. Or, later in her life, she would take the phone off the hook to stymie the regular 6 p.m. telemarketers. While an automatic appointment agent sounds magical, I also receive those calls, which come from computer-generated Chinese voices selling me website services. My students who are fluent in Chinese translate them for me. Increasingly I respond by taking “the phone off the hook” and not answering numbers I don’t recognize.
We all know about junk mail in our mailbox, which has evolved into spam in our inbox. Westmont’s IT department constantly battles spear phishing, a type of personalized email attack that tries to trick you into clicking a link or downloading a document that will expose your credentials or give the attacker access to your materials. The hacker includes personalized information and masquerades as someone or something you know and trust. A fake President Beebe emails me with emergency requests from time to time. I almost fall for it until he asks me to send him an Amazon gift card for one of the trustees.
Of course, that’s still a human imposter. Contrast that with a chatbot, an AI computer program that engages in text conversations with people. Politibots have emerged in recent elections. For example, I’ve received text messages from “Alex” representing Proposition 68 and urging me to vote yes for “clean drinking water and natural resources that prepare California for drought and wildfires.” Of course, Alex is artificial intelligence and not a real person. I consider this an example of AI-mediated communication going badly, despite my sympathies for the message.
Some of our newest challenges will occur in the video domain. Perhaps you’ve seen examples in the news of artificial intelligence creating fake celebrity images. They aren’t real people even though they might look familiar (thispersondoesnotexist.com). Likewise, there are videos of President Obama in which his real voice drives four completely different video renderings, none of which are real (bbc.co.uk/news/technology-40598465). More troubling are videos with fake audio and fake video synthesized so accurately that the fakery is difficult to detect. This is disturbing because video is one of our most powerful sources of evidence for what is true. The phenomenon of fake news is about to be amplified.
For some reason, we have less trouble lying to computers, or lying to people through computers, than lying directly to each other. In face-to-face communication, where bandwidth and social cues increase and computer mediation decreases, lying seems to get harder. Likewise, it’s much easier to generate fake written news than fake audio reports, harder still to fake a video, and hardest of all to fake an event people observe directly. As a rule of thumb, the more mediated our communication channels are, and the smaller their bandwidth, the more prone they are to deception.
The struggle to determine which accounts are true is an ancient one. On trial before being crucified, Jesus responds to Pilate, a Roman governor of a chaotic and rebellious area on the outskirts of the empire who is no stranger to deception. Jesus says, “You say that I am a king. For this purpose I was born, and for this purpose I have come into the world—to bear witness to the truth. Everyone who is of the truth listens to my voice.” Pilate said to him, “What is truth?” (John 18:37–38 ESV).
Is Pilate being sarcastic or deeply philosophical? Either way, he recognizes how hard it is to understand, and to convince others of, what is true. Remarkably, he seems quite adept at navigating the ambiguities, if not the politics, and pronounces Jesus innocent.
Famously, one of his disciples asked Jesus, “Lord, we don’t know where we are going, so how can we know the way?” He says, “I am the way, the truth, and the life” (John 14:5–6). In that moment he mysteriously equates objective reality with something strangely different: a relationship. Perhaps we can say that to the degree we pursue the truth, we pursue Christ himself.
What does pursuing truth look like to a computer scientist at Westmont? Some of my students and I are working on something new, a technology called Witness. This blockchain-based media verification service seeks to recognize fake media. Each piece of media we catalog is registered in a growing list of records, called blocks, linked together using cryptography with a timestamp and transaction data. It’s designed to conclusively identify when media was first encountered on the internet.
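The general idea can be sketched in a few lines of Python. This is a minimal illustration of the approach, not the actual Witness code; the function names and block layout here are mine. Each registered file gets a fingerprint (a SHA-256 hash), and each new block records that fingerprint, a timestamp and the hash of the previous block, so the order of registration can’t be quietly rewritten.

```python
import hashlib
import json
import time

def fingerprint(data: bytes) -> str:
    """A SHA-256 hash uniquely fingerprints the raw bytes of a file."""
    return hashlib.sha256(data).hexdigest()

def make_block(media_hash: str, prev_hash: str) -> dict:
    """Bundle a fingerprint with a timestamp and a link to the prior block."""
    block = {"media_hash": media_hash,
             "timestamp": time.time(),
             "prev_hash": prev_hash}
    # The block's own hash covers all of its contents, cryptographically
    # chaining it to its predecessor.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]  # the first "train car"

def register(data: bytes) -> dict:
    """Append a new block recording when this media was first seen."""
    block = make_block(fingerprint(data), chain[-1]["hash"])
    chain.append(block)
    return block

register(b"...raw bytes of an image...")
```

Because every block embeds the hash of the one before it, altering any earlier record would change every hash after it, making tampering evident.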
Consider an actual pair of recent social media images of Emma González, a high school student who became a prominent advocate for gun control after 34 of her peers were shot or injured in a school shooting in Parkland, Florida. She was photographed for a magazine cover. In one image, she is tearing up the Constitution; in the other, she is ripping a gun target. Only one is real. What is she trying to communicate? The Witness browser plug-in will automatically examine these images and look up the first time they appeared on the internet. By examining the time stamps (and possibly the website that hosted them), you can correctly conclude that the image with the Constitution is fake because it appeared after the one with the target. That hardly solves the gun-control policy question, but it gives us solid information.
At the heart of Witness technology, a fingerprint called a hashcode uniquely identifies any digital file. Putting each hashcode into a blockchain is like loading cargo into a long train. To determine which media came first, we find their hashcodes in the blockchain and see which train car came first. Critically, this ordering can be mathematically and cryptographically guaranteed, which becomes the foundation for a socio-technical system that can be incorporated into a user interface.
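Here is a second, standalone sketch of that train-car comparison, again with made-up data rather than the Witness implementation: given an ordered ledger of fingerprints, whichever file’s hashcode sits in an earlier car was encountered first.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 fingerprint of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Ordered ledger of fingerprints: earlier position = earlier "train car."
# (Made-up stand-ins for the two magazine-cover images.)
ledger = [
    fingerprint(b"photo: tearing a gun target"),      # registered first
    fingerprint(b"photo: tearing the Constitution"),  # registered later
]

def earlier(data_a: bytes, data_b: bytes) -> bytes:
    """Return whichever file appears first in the ledger."""
    pos_a = ledger.index(fingerprint(data_a))
    pos_b = ledger.index(fingerprint(data_b))
    return data_a if pos_a < pos_b else data_b

# The target image precedes the Constitution image, so the latter is suspect.
print(earlier(b"photo: tearing the Constitution",
              b"photo: tearing a gun target"))
```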
Computers and truth have become deeply intertwined, and the ancient struggle over what is true will continue, and it will be greatly amplified. Fake news is only going to increase. But pursuing truth is critical to finding peace, both internally and as a culture. So we must embrace this difficult challenge of seeking truth and find a way forward using all the intelligence at our disposal, artificial or otherwise.