In the view of many scientists, Artificial Intelligence (AI) isn’t living up to the hype of its proponents. We don’t yet have safe driverless cars—and we’re not likely to in the near future. Nor are robots about to take on all our domestic drudgery so that we can devote more time to leisure. On the brighter side, robots are also not about to take over the world and turn humans into slaves the way they do in the movies.
Nevertheless, there is real cause for concern about the impact AI is already having on us. As Gary Marcus and Ernest Davis write in their book, Rebooting AI: Building Artificial Intelligence We Can Trust, “the AI we have now simply can’t be trusted.” In their view, the more authority we prematurely turn over to current machine systems, the more worried we should be. “Some glitches are mild, like an Alexa that randomly giggles (or wakes you in the middle of the night, as happened to one of us), or an iPhone that autocorrects what was meant as ‘Happy Birthday, dear Theodore’ into ‘Happy Birthday, dead Theodore,’” they write. “But others—like algorithms that promote fake news or bias against job applicants—can be serious problems.”
Marcus and Davis cite a report by the AI Now Institute detailing AI problems in many different domains, including Medicaid-eligibility determination, jail-term sentencing, and teacher evaluations:
Flash crashes on Wall Street have caused temporary stock market drops, and there have been frightening privacy invasions (like the time an Alexa recorded a conversation and inadvertently sent it to a random person on the owner’s contact list) and multiple automobile crashes, some fatal. We wouldn’t be surprised to see a major AI-driven malfunction in an electrical grid. If this occurs in the heat of summer or the dead of winter, a large number of people could die.
The computer scientist Jaron Lanier has cited the darker aspects of AI as it has been exploited by social-media giants like Facebook and Google. In Lanier’s view, AI-driven social-media platforms promote factionalism and division among users, as starkly demonstrated in the 2016 and 2020 elections, when Russian operatives created fake social-media accounts to drive American voters toward Donald Trump. As Lanier writes in his book, Ten Arguments for Deleting Your Social Media Accounts Right Now, AI-driven social media are designed to commandeer the user’s attention and invade her privacy, and to overwhelm her with content that has not been fact-checked or vetted. In fact, Lanier concludes, such platforms are designed to “turn people into assholes.”
As Frank Pasquale, a professor at Brooklyn Law School and a Commonweal contributor, points out in his book, The Black Box Society: The Secret Algorithms That Control Money and Information, the loss of individual privacy is also alarming. While powerful businesses, financial institutions, and government agencies hide their actions behind nondisclosure agreements, “proprietary methods,” and gag rules, the lives of ordinary consumers are increasingly open books to them. “Everything we do online is recorded,” Pasquale writes:
The only questions left are to whom the data will be available, and for how long. Anonymizing software may shield us for a little while, but who knows whether trying to hide isn’t itself the ultimate red flag for watchful authorities? Surveillance cameras, data brokers, sensor networks, and “supercookies” record how fast we drive, what pills we take, what books we read, what websites we visit. The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons.
Meanwhile, as Lanier notes, these big tech companies are publicly committed to an extravagant AI “race” that they often prioritize above all else. Lanier thinks this race is insane. “We forget that AI is a story we computer scientists made up to help us get funding once upon a time, back when we depended on grants from government agencies. It was pragmatic theater. But now AI has become a fiction that has overtaken its authors.”