In the twenty-first century, technology radically affects the lives of everyone on the planet. Frank Pasquale, a leading commentator on artificial intelligence (AI) law and policy, has been thinking about these effects for decades. From his 2002 article “Two Concepts of Immortality” to his latest book New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press, 2020), Pasquale has explored the myriad ways that technological advances affect how we work, what media we consume, how law is made and enforced, and much more. He brings a refreshingly philosophical, even spiritual, perspective to these discussions, while concretely addressing the problems that arise when robots advance into hospitals, schools, and militaries.
A professor at Brooklyn Law School, Pasquale has taught topics ranging from intellectual property to health-care finance and regulation. Pasquale’s book The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015) has been recognized internationally as a landmark work of law and political economy, and has been translated into several languages. The journal Big Data & Society recently published a symposium on it. Pasquale’s contributions are not only scholarly, however. He has also testified before House and Senate committees on algorithms and big data, and currently chairs the Subcommittee on Privacy, Confidentiality, and Security of the National Committee on Vital and Health Statistics. The following interview was conducted by email and edited for clarity and length.
LAWRENCE JOSEPH: New Laws of Robotics: Defending Human Expertise in the Age of AI begins where your monumental 2015 book The Black Box Society: The Secret Algorithms That Control Money and Information left off. The two books capture our twenty-first century of techno-capital, law, and political economy with unmatched breadth, depth, ambition, and vision. Could you describe the connections you see between them?
FRANK PASQUALE: Black Box Society was fueled by a simple question: How did they get away with it? “They” here refers to the massive finance and technology companies, now at the commanding heights of the global economy, that have done so much damage with so little accountability. I did not think the book would have much of an audience beyond lawyers. But I was fortunate, and readers in business, engineering, media, and many other fields told me how much it resonated with them.
Through these conversations, I became convinced that we needed more than a critique of tech and finance to build a better world. We needed a compelling story about the kind of future we want. That was my blueprint for New Laws of Robotics: to explore positive narratives of innovation in fields such as health care and education, where AI and robotics are enhancing human jobs and interactions.
But there are so many troubling uses of AI and robotics out there that the critical spirit of Black Box Society remains. The stakes have gotten higher. People are being hired and fired based on AI analysis of their voice or social media. We keep seeing AI (and even robots) that try to pass off mechanistic mimicry as real human judgment and emotion. Deceptive simulation is a major concern of New Laws of Robotics.
LJ: Why is AI’s simulation of humans so worrisome?
FP: Let’s start with an example we can all relate to—a classroom. Some robots can be great for kids. For example, a “Dragonbot” incorporates a cell-phone screen and sound in a plush dragon toy. In some ways, it’s like a talking doll, but it’s connected to databases so it can say much more. To the extent the child understands the Dragonbot as a toy or a tool, she’s developing an accurate understanding of technology and of the distinction between things and people. You can stuff the Dragonbot in your closet for a few days, perhaps never use it again, no problem. You can take out its “brain” (the cell phone that animates it), and that’s fine, too.
Now think about a child growing up with a mechanical peer or a mechanical teacher in her classroom. What’s strange and alienating there is that these roles have always been occupied by people with needs, goals, fears, and hopes rooted in a common physiological experience—that of having a human body and mind. There is a fundamental equality among them, a common dignity grounded in our common fragility. The Dragonbot isn’t hurt if you ignore it. Your classmate might be, and part of learning to be human is developing a corresponding sense of care and regard.
Now, a child could try to develop that same sense of empathy for a robot, such as the humanoid “Adam” described in Ian McEwan’s novel Machines Like Me. But that seems misdirected. Whatever the robot “feels” (or, to be more accurate, however it simulates feeling) is a product of its programming. It could be programmed otherwise. By contrast, we humans can’t be “programmed” not to care about being neglected.
There are other dimensions of fakery that also concern me. “Deepfakes” are AI-generated video, text, and images that simulate events, documents, and people. Even when they are debunked, they can cause confusion at critical moments. The images of faces churned out by sites like “This Person Does Not Exist” could be used as profile pictures for armies of social-media accounts—which in turn can fake groundswells of enthusiasm or rage, especially when coupled with AI speech-generating models like GPT-3. They can also swamp hashtags on Twitter, making it hard for people talking about a topic like #BlackLivesMatter to communicate collectively. My book proposes limits on deceptive mimicry, including bans on AI and robotics that counterfeit certain human qualities. Ideally, the “new laws of robotics” developed in the book would guide policy making generally.