[Updated Aug. 17, 2017 with additional links to articles discussing diversity and AI.]
I attended a discussion at the Nasdaq Entrepreneurial Center in San Francisco entitled “A Discussion on AI and Government Regulation.” With over 100 people in attendance, the speakers included Tom Kalil, Senior Advisor, Eric and Wendy Schmidt Group; Lisa Hawke, Director of Policy and Compliance, Everlaw; James Cham, Partner, Bloomberg Beta; and Hannah Kuchler (Moderator) of the Financial Times. I’m extremely interested in the intersection of science and technology and their impact on society, and this conversation landed squarely in that area.
It was nice to see that others share a perspective about Artificial Intelligence (AI) that I’ve expressed in prior posts and videos. Specifically, Tom said, “Every AI can be used for good and evil,” which is almost a quote of something I previously said. Although people often think of the Terminator movies as examples of AI running amok, the panel all agreed that this isn’t the biggest risk we have to worry about today. The problem is that the risks we should worry about most aren’t the ones that scare people.
Imagine you fill out an online application for an auto loan where the website promises to make a decision on your application within seconds. Sounds great, doesn’t it? You can buy that car on the spot in minutes! How convenient. But how does this work? I’m willing to bet that the website doesn’t employ a staff of loan managers sitting at their computers, waiting for you to click “submit” so they can jump into action, analyze your application, and send you an answer within seconds.
Instead of an army of people, they probably use an AI-based solution. Your application is sent to an AI tool called a scoring engine, which looks at many factors to make an almost instantaneous decision. You might believe that the decision is based solely on the information contained in your loan application. But that isn’t always true. So, how does it work? In many cases we don’t know, because the companies won’t tell us their secret, proprietary method of making their decisions. This is a problem. To explain why, let’s consider what could go wrong.
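To make this concrete, here is a minimal sketch, in Python with scikit-learn, of what such a scoring engine might look like. Everything here is hypothetical: the features, the training data, and the approval threshold are all made up, and real engines use far more data and more sophisticated, proprietary models.

```python
# A minimal, hypothetical sketch of a loan "scoring engine." It trains a
# simple model, then returns an approve/deny decision in milliseconds,
# with no human in the loop.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up training data: [income ($K), debt-to-income ratio, years employed]
X_train = np.array([[85, 0.20, 10],
                    [30, 0.60, 1],
                    [60, 0.30, 5],
                    [25, 0.70, 0]])
y_train = np.array([1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

model = LogisticRegression().fit(X_train, y_train)

def score_application(income, debt_ratio, years_employed, threshold=0.5):
    """Score one application and return an instant decision."""
    prob_repay = model.predict_proba(
        [[income, debt_ratio, years_employed]])[0, 1]
    return "approved" if prob_repay >= threshold else "denied"

print(score_application(55, 0.35, 4))  # decided in milliseconds
```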
While it is illegal in the United States to use race, gender, and other protected characteristics to make the lending decision, what happens when this occurs by proxy? Imagine that as the AI solution is “trained” to rate loan applications, it discovers (on its own and without telling any humans) that people who live in one zip code are more likely to repay their loans than people who live in another zip code. As a result, someone living in 94301 (Palo Alto) gets the loan, while someone living across the freeway in 94303 (East Palo Alto) doesn’t. A quick demographic search reveals that these zip codes reflect not only different incomes but different ethnic compositions. So a loan might be approved or denied more because of a zip code (acting as a proxy for ethnicity) than because of the quality of one’s application.
This leads to the natural question: Why would a company do something like this? The answer is: it may be completely unintentional. Often, these biases are based on the training data used to teach AI tools how to do their jobs. Even if a human wasn’t aware of a bias, the computer might learn it on its own and start using it. Why? Because the bias already exists as part of how humanity interacts with one another. The data simply reflects this existing bias.
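Here is a toy illustration of that point, again in Python. The data is fabricated so that zip code correlates with repayment in the training set, standing in for the historical bias described above; no one tells the model to discriminate, it simply learns the pattern present in the data.

```python
# A fabricated example of how a model learns zip code as a proxy.
# No one programs the bias in; it is present in the (made-up) labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 1000
zip_flag = rng.integers(0, 2, n)   # 0 stands for 94301, 1 for 94303
income = rng.normal(60, 15, n)     # similar incomes in both groups
# Historical bias baked into the labels: repayment rates differ by
# zip code even though other applicant attributes are comparable.
repaid = (rng.random(n) < np.where(zip_flag == 0, 0.9, 0.5)).astype(int)

model = LogisticRegression().fit(np.column_stack([income, zip_flag]), repaid)

# Two applicants identical in every respect except zip code:
applicants = np.array([[60.0, 0], [60.0, 1]])
print(model.predict_proba(applicants)[:, 1])
# The predicted repayment probabilities diverge sharply: the model has
# learned to treat zip code as a deciding factor.
```

Nothing in this sketch ever names ethnicity, yet the decision effectively turns on it. That is what makes proxy discrimination so hard to detect from the outside, especially when the model itself is a trade secret.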
While many would love to live in an unbiased, utopian society, the data suggests that we’re not there yet. And this leads to a problem. As Cathy O’Neil argues in her book Weapons of Math Destruction, such biased decisions (like the loan application example just mentioned) can have a devastating effect on people’s finances, education choices, and job opportunities. This brings us to a very important, if not obvious, question: Can we trust companies to do the right thing?
Some people may have already leaped to an answer: “Government should regulate it!” they cry, while others shout, “Government shouldn’t play any role. We should let capitalism play out unfettered!” Who’s right? As is often the case with societal issues, the answer might be found in the middle. Building on the remarks of one of the panelists, one objective of government will be to find the right balance of legislation that protects consumer privacy and safety while encouraging business innovation and protecting intellectual property.
The panelists also agreed on another important point: AI solutions will be highly disruptive. We should expect to see large categories of jobs eliminated as society moves toward greater use of AI-based solutions. They also raised another question: What role should government play in preparing for this disruption and retraining those who will eventually be displaced?
Lastly, Lisa discussed many implications regarding privacy. Machine Learning, a specific AI technique, needs data, and lots of it! Every click you make as you surf the web, every photo you upload to Google+, Twitter, or Facebook, every post you like, and every tweet you tweet belongs to a company. As reported in a 2016 New York Times article, these companies can use this information to build very good, often accurate, predictions about you. But what happens if their information is wrong, or mixed up with someone else’s data? And who owns your data: those photos of your daughter? Your wedding? Your graduation?
What rights do you have to your information? What happens if some of it is wrong and you are denied a loan or a job because of the inaccuracy? These are questions that we cannot leave to individual businesses to answer. Lisa noted, and I believe correctly, that the European Union, with its General Data Protection Regulation (GDPR), leads the US in consumer privacy. However, these issues don’t stop at national borders, and neither will the solutions. Just like the EU, the US is going to have to tackle these tough questions.
As you can see, there are far more questions than answers. While the panel offered fewer concrete answers than we need, I’m glad the questions are being raised. The good news is that people, organizations, and politicians are looking at these issues and having the much-needed conversations.
UPDATE
In the 12 hours since releasing this post, I found two interesting articles that tackle the same issues I’ve discussed here. The first is from the MIT Technology Review and is entitled “AI Programs Are Learning to Exclude Some African American Voices.” The second is by Rachel Thomas of Fast.ai, entitled “Diversity Crisis in AI, 2017 Edition.”
––––
Steven B. Bryant is a researcher and author who investigates the innovative application and strategic implications of science and technology on society and business. He holds a Master of Science in Computer Science from the Georgia Institute of Technology, where he specialized in machine learning and interactive intelligence. He also holds an MBA from the University of San Diego. He is the author of DISRUPTIVE: Rewriting the rules of physics, a thought-provoking book that shows where relativity fails and introduces Modern Mechanics, a unified model of motion that fundamentally changes how we view modern physics. DISRUPTIVE is available at Amazon.com, BarnesAndNoble.com, and other booksellers!
Images courtesy of Pixabay.com
Photo of Steven B. Bryant ©2015 Steven B. Bryant, Photo by Amy Slutak