Ethical AI Requires Ethical Business Practices

Earlier this week, I attended an Ethics in AI (Artificial Intelligence) discussion with a group of social advocacy and technology enthusiasts. This post was originally intended to summarize the issues the technology industry must overcome as it embraces and deploys services and products built using AI. However, Ethical AI cannot be adequately addressed until we first confront a far more insidious problem.

On December 20, 2017, ProPublica and the New York Times published an article highlighting how “Dozens of companies are using Facebook to exclude older workers from job ads.” Imagine yourself as a female executive with several years of experience and an MBA from a top university. How might you feel when your friend, who has less experience and lacks a graduate degree, shares the number of job opportunities he sees delivered in his Facebook feed, ads that you will never see simply because you’re female? Instead, you see ads seeking qualified executive assistants, ads that your male friend never sees. Extending this example, it’s not hard to imagine job postings that specifically exclude people of certain political affiliations, sexual orientations, ages (as discussed in the article linked above), races, ethnicities, or religions.

I’ve been in the workforce since the mid-1980s and, fortunately, have rarely felt overtly discriminated against. Yes, there was the fairly recent incident in which a coworker made a snarky comment about someone’s clearly African American name, the time a client would not acknowledge me in a meeting, and the lunchtime conversation in which a client asked if I knew any gang members. I did not raise that last incident as a concern to my manager or partner, but several months later, at an Accenture event, a co-worker who had been present at the lunch shared it with the engagement partner. The partner, clearly concerned about what she had just heard, not only talked with me that evening but also asked me to visit her office several days later, where she said that the client’s behavior was unacceptable and that the firm would end the client relationship. After we discussed it, I made my case: the client was insensitive, inappropriate, and inept, but not racist, and we should continue the relationship.

Readers who have not experienced discrimination may not realize that my experiences are mild compared to what others endure. (My parents were denied housing and explicitly told that the sole reason was their race.) I share my more recent experiences for two reasons. First, they show what happens when a company has the courage to do the right thing, without being asked to do so. Second, they show that discrimination comes in all forms and is not simply a historical issue, but one that requires addressing as we enter 2018.

Facebook, in its own defense, acknowledges that discrimination is illegal. Yet it argues that “used responsibly, age-based targeting for employment purposes is an accepted industry practice.” It’s not hard to imagine its next reply using this template: “Used responsibly, [insert discriminated-against group here]-based targeting for [housing, extending credit, pricing insurance, etc.] is an accepted industry practice.” Rather than acknowledge the problem, Facebook argues that skirting the spirit of the law and ignoring a sound societal moral compass is okay as long as everyone else is doing it. Cue mom’s voice: “Just because Johnny jumps off a cliff, does that mean you’re going to jump too?”

How can Facebook, or any company, in good conscience say it is helping people of all ages find work when it is expressly excluding people from seeing those ads? This behavior is not only an insult to our intelligence but also an insult to our forefathers and foremothers who fought for equal rights for women, underrepresented and historically discriminated-against minorities, veterans, and older workers. Facebook has an opportunity to be part of the solution, to show the kind of leadership Accenture demonstrated without being asked. The window is still open; there is still time for Facebook to demonstrate courageous leadership.

Discussing Ethical AI, and specifically bias in machine learning models, is premature if companies don’t recognize bias as a problem, or worse, if they outright defend such practices on the grounds that bias is simply good business.
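To make the idea of bias concrete: once a company is willing to look, one simple check is to compare how often different groups actually see an ad for which they are equally eligible. The sketch below is illustrative only; the group names and counts are hypothetical, and it applies the conventional “four-fifths” disparate-impact rule to made-up ad-delivery data.

```python
# Illustrative only: a minimal sketch of the conventional "four-fifths"
# (disparate impact) check, applied to hypothetical ad-delivery counts.
# The group names and numbers below are invented for this example.

def selection_rate(shown: int, eligible: int) -> float:
    """Fraction of an eligible group that was actually shown the ad."""
    return shown / eligible if eligible else 0.0

# Hypothetical delivery counts: how many eligible users in each group saw a job ad.
delivery = {
    "younger_users": {"shown": 800, "eligible": 1000},
    "older_users": {"shown": 150, "eligible": 1000},
}

rates = {group: selection_rate(d["shown"], d["eligible"]) for group, d in delivery.items()}
impact_ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {impact_ratio:.2f}")

# The four-fifths rule flags a selection rate below 80% of the highest group's rate.
if impact_ratio < 0.8:
    print("Warning: ad delivery falls short of the four-fifths threshold for at least one group.")
```

A ratio well below 0.8 does not prove intent, but it is exactly the kind of signal a company could monitor if it chose to treat bias as a problem rather than as a business practice.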

I encourage companies to reflect on their role not simply within American culture but within a worldwide community. This is an opportunity to truly demonstrate what courageous leadership means. Despite what you might read, courageous leadership is not about removing a headphone jack from a smartphone. Courageous leadership is about doing the right thing even when everyone else is doing the easy thing. Companies can learn from Accenture that taking responsibility and doing the right thing benefits the company, its employees, its shareholders, and its customers.

––––

Steven B. Bryant is a researcher and author who investigates the innovative application and strategic implications of science and technology on society and business. He holds a Master of Science in Computer Science from the Georgia Institute of Technology, where he specialized in machine learning and interactive intelligence. He also holds an MBA from the University of San Diego. He is the author of DISRUPTIVE: Rewriting the rules of physics, a thought-provoking book that shows where relativity fails and introduces Modern Mechanics, a unified model of motion that fundamentally changes how we view modern physics. DISRUPTIVE is available at Amazon.com, BarnesAndNoble.com, and other booksellers!

Images courtesy of Pixabay.