Sunday, March 17, 2019

Kartik Hosanagar's "A Human's Guide to Machine Intelligence"

Kartik Hosanagar is the John C. Hower Professor of Technology and Digital Business and a Professor of Marketing at The Wharton School of the University of Pennsylvania. His research focuses on the digital economy, in particular the impact of analytics and algorithms on consumers and society, Internet media, Internet marketing, and e-commerce.

Hosanagar applied the “Page 99 Test” to his new book, A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, and reported the following:
Page 99 of my book A Human’s Guide to Machine Intelligence discusses two ways to design Artificial Intelligence (AI), specifically AI that can diagnose diseases. The first way is to interview medical experts and identify a set of rules that doctors use to diagnose diseases. For example, a doctor might say that if a patient has had a fever for over a week, he or she might focus more on bacterial infections than viral infections. An alternative approach to building AI is to simply feed a lot of data to an algorithm and have it identify patterns in the data. An algorithm might be given the medical test reports of over 100,000 patients along with the diagnoses human doctors had reached, and it then infers which medical markers predicted which medical conditions. The discussion goes on to clarify how AI researchers in the 1980s focused on extracting rules from experts, but the resulting AI couldn’t match human intelligence. By contrast, AI based on learning patterns from large quantities of data (without being programmed with diagnostic rules) is working incredibly well, beating humans not only at games like chess but also at tasks such as medical diagnosis. But this switch from programming AI with explicit rules to AI that can teach itself from large quantities of data has many implications. For one, rule-based AI is highly predictable because it is governed by precise rules. AI that teaches itself through analysis of large volumes of data can be more unpredictable, because it’s hard to know what exact patterns it might discover in the data. This is why we are seeing examples of racism in algorithms used to guide sentencing decisions in courtrooms and sexism in résumé-screening algorithms. No engineer is programming bias into these systems; instead, the bias is picked up by the algorithms as they analyze data on past human decisions.
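The contrast Hosanagar describes can be sketched in a few lines of Python. This is a hypothetical toy, not anything from the book: a hand-coded expert rule (the fever-over-a-week heuristic) versus a trivial "learner" that infers its own fever-duration cutoff from past doctors' diagnoses. The feature name, the cutoff search, and the toy records are all invented for illustration.

```python
# Two ways to build a diagnostic "AI", following the page 99 example.
# All names, thresholds, and data here are hypothetical illustrations.

# 1) Rule-based: encode an expert's rule directly. Fully predictable.
def rule_based_diagnosis(fever_days):
    """Expert rule: a fever lasting over a week suggests bacterial infection."""
    return "bacterial" if fever_days > 7 else "viral"

# 2) Data-driven: learn a cutoff from past human diagnoses instead of
#    being told one. The result depends entirely on the data it sees.
def learn_threshold(records):
    """Pick the fever-duration cutoff that best matches past diagnoses."""
    best_cutoff, best_correct = 0, -1
    for cutoff in range(0, 15):
        correct = sum(
            ("bacterial" if days > cutoff else "viral") == label
            for days, label in records
        )
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

# Toy "past patient" data: (days of fever, the human doctor's diagnosis).
records = [(2, "viral"), (3, "viral"), (5, "viral"),
           (9, "bacterial"), (10, "bacterial"), (12, "bacterial")]

cutoff = learn_threshold(records)

def learned_diagnosis(fever_days):
    return "bacterial" if fever_days > cutoff else "viral"
```

Note that the learned cutoff need not match the expert's "over a week" rule; it reflects whatever pattern (including any bias) is present in the historical decisions it was trained on, which is the unpredictability the book highlights.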

The discussion helps set up some of the emerging challenges with AI-based decisions and why they'll be non-trivial to solve. The rest of the book explores the complex interplay between humans and AI and how we can stay in control of seemingly unpredictable AI systems. In the book, I explain why we are not helpless against algorithms unleashed by powerful tech companies to make decisions for us or about us. Instead, we can take control. Technology companies and governments will have a role to play as well; I discuss the roles of consumers, companies, and governments in the final chapter.
Visit Kartik Hosanagar's website.

--Marshal Zeringue