Artificial Intelligence Is Still a Double-Edged Sword: How Algorithms Supercharge Inequality

Sneha Revanur
Dec 25, 2019


Image credit: Unbabel.

Prominent software developer David Heinemeier Hansson reignited a debate surrounding bias in the age of artificial intelligence when he pointed out earlier this year that despite his wife having a higher credit score, an algorithm overseeing the Apple Card, issued by Goldman Sachs, had inexplicably assigned him a credit limit twenty times higher.

“My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time. Yet Apple’s black box algorithm thinks I deserve 20x the credit limit she does,” he wrote on Twitter in a scathing rebuke of the technology. Apple co-founder Steve Wozniak responded with his own experience under the algorithm: a credit limit ten times higher than his wife’s. The New York State Department of Financial Services has addressed the mounting backlash by opening an investigation into the criteria underlying the program, seeking to identify openings through which gender bias could creep in and warp its outputs.

Stemming from a new wave of what researcher Meredith Broussard calls “technochauvinism,” the virulent assumption that technology is the solution to every problem, artificial intelligence has systematically been wielded as an agent of discrimination. Algorithms learn to make independent decisions by analyzing countless instances of real-world data in search of patterns that explain certain results; in the process, they silently absorb the bias that has historically victimized marginalized groups. When predictive policing tools like PredPol are informed by decades of crime data extracted from a justice system fueled by racism, they unfairly target minority neighborhoods irrespective of the true crime rate. When Google Photos’ image-labeling software is fed primarily light-skinned faces as input, it makes blunders like tagging African-Americans in pictures as gorillas. And when the Apple Card algorithm’s determination of creditworthiness is guided by data that reflects existing prejudice in consumer finance, it learns to discriminate against female applicants.
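
To see how this happens mechanically, consider a minimal sketch in Python (synthetic data and scikit-learn; the “income” and “proxy” feature names are hypothetical). The protected attribute is deliberately withheld from the model, yet a correlated proxy feature lets it reconstruct the historical bias anyway:

```python
# Minimal sketch (synthetic data; "income" and "proxy" are hypothetical
# feature names) of how a model rediscovers historical bias via a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # protected attribute, withheld from the model
proxy = group + rng.normal(0, 0.5, n)    # "neutral" feature that tracks group membership
income = rng.normal(0, 1, n)             # genuinely predictive, group-independent feature

# Historical approvals encode past discrimination: group 1 was approved
# less often at the same income level.
logit = income - 1.0 * group
approved = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train only on (income, proxy); the protected attribute never appears.
X = np.column_stack([income, proxy])
model = LogisticRegression().fit(X, approved)

print("weights (income, proxy):", model.coef_[0])
print("approval rate, group 0:", model.predict(X[group == 0]).mean())
print("approval rate, group 1:", model.predict(X[group == 1]).mean())
# The proxy receives a clearly negative weight: the model has reconstructed
# the discrimination without ever seeing the protected attribute.
```

This is why simply deleting the sensitive column is rarely enough, a lesson Amazon learned the hard way with its recruiting engine, as discussed below.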

All of these examples raise a concern that could have a frighteningly wide-reaching impact on our lives — could artificial intelligence, now touted as something of a silver bullet for our world’s most pressing problems, do more harm than good? When algorithms leave no aspect of our lives untouched and can be used both to revolutionize cancer diagnosis and to surveil Uighur Muslims in concentration camps, what lies in store for the future of automation? And how do we strike a regulatory balance that encourages innovation without perpetuating computerized inequality or rendering us complicit in programming our own doom?

The path forward isn’t clear-cut. According to MIT Technology Review, when Amazon reformed its now-scrapped recruiting engine to disregard explicitly gendered words like “women’s,” the system still caught on to implicitly gendered verbs like “executed” and “captured,” which appear far more frequently in male applicants’ resumes than in those of their female counterparts. And since algorithms are designed to maximize performance, not fairness, we can lose sight of their greater human impact and their potential to be deployed oppressively in the quest to create the most accurate or most effective model. It’s also difficult to frame concepts as indefinite as “bias” in a mathematical context, resulting in an array of contrasting quantitative interpretations. The COMPAS recidivism model, for instance, was classified as fair under a metric known as predictive parity, yet ProPublica’s audit of its error rates found that Black defendants who did not reoffend were flagged as high-risk at nearly twice the rate of their white counterparts. Without universally accepted standards, we have no way to measure such abstractions — on the other hand, a single standard may be inappropriate for different uses of the same technology, further complicating the process of bias detection.
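
The clash between fairness metrics is easy to demonstrate with arithmetic. The following toy illustration uses invented confusion-matrix counts (not figures from any real audit) to show that two groups can satisfy predictive parity exactly while one endures double the false positive rate:

```python
# Toy numbers (invented, not real audit data): two groups with identical
# predictive parity but very different false positive rates.

def rates(tp, fp, fn, tn):
    ppv = tp / (tp + fp)   # predictive parity: P(actually positive | flagged)
    fpr = fp / (fp + tn)   # false positive rate among the actually negative
    return ppv, fpr

# Confusion-matrix counts (tp, fp, fn, tn) per group.
groups = {
    "group A": (60, 40, 20, 80),    # PPV = 0.60, FPR = 40/120 = 0.33
    "group B": (30, 20, 50, 100),   # PPV = 0.60, FPR = 20/120 = 0.17
}

for name, counts in groups.items():
    ppv, fpr = rates(*counts)
    print(f"{name}: PPV = {ppv:.2f}, FPR = {fpr:.2f}")

# Both groups clear predictive parity at PPV = 0.60, yet group A's innocent
# members are flagged roughly twice as often as group B's.
```

A model can pass one definition of fairness with flying colors while flagrantly violating another, which is exactly the ambiguity regulators now have to navigate.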

That’s not to say that there isn’t a wide range of measures we can adopt to keep machine bias in check — developing a just algorithm isn’t an entirely Sisyphean task. An issue of this importance demands action in the form of policy; Senators Booker (D-NJ) and Wyden (D-OR), alongside Rep. Clarke (D-NY), introduced a congressional bill christened the Algorithmic Accountability Act that would mandate impact assessments for high-risk automated decision-making and hold firms accountable by requiring that they audit their products for bias and security risks. Similar efforts have also been made at the local level, with New York City mayor Bill de Blasio assembling the Automated Decision Systems Task Force in May 2018 to monitor the role of algorithms in municipal government, especially in regard to criminal justice and law enforcement. Although existing federal legislation regulating the proliferation of artificial intelligence is by all accounts insufficient, these are nonetheless steps in the right direction that could show promise if scaled up. And despite the tangible consequences an AI catastrophe would have for the American populace, calling into question the economy and the state of civil rights in the digital age, even frontrunners for the Democratic nomination in the 2020 election have failed to impress with their policies on the issue, stoking fears that we lack a solid plan of action for governance moving forward.

Legislative change is imperative, but tackling the problem at the root would call for an approach that also fundamentally reworks how we create technology. This paradigm shift in thinking, drawing from disciplines across law and the social sciences, finally confronts the ethical questions that have long been set aside in favor of more scientific dialogue. At a time when 2.5% of Google’s workforce is black and only 10% of its AI researchers are women, promoting diversity is a crucial step toward curbing bias, since algorithms are shaped by the people who develop them. We must reevaluate how we train our models, search for more socially conscious methods, diversify our datasets, establish commissions across the public and private sectors to screen for discrimination, and incentivize new research in bias detection and removal. Other shortcomings lie in our pre- and post-processing techniques, an optimization problem that new findings in machine learning seek to answer; one standard post-processing remedy is sketched below. We need a two-pronged solution, one that encompasses both the legal and technical aspects of algorithmic injustice.
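
As one concrete illustration of those post-processing techniques, here is a minimal sketch with synthetic scores. It selects a separate decision threshold per group so that true positive rates match, in the spirit of Hardt et al.’s “equality of opportunity” (2016); this is one standard approach among several, not a method prescribed by any of the sources cited here:

```python
# Minimal post-processing sketch (synthetic data): choose per-group decision
# thresholds so that true positive rates are equalized across groups.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)   # protected attribute
label = rng.integers(0, 2, n)   # ground-truth outcome
# Model scores skewed downward for group 1, mimicking a biased model.
score = 0.6 * label + rng.normal(0, 0.3, n) - 0.15 * group

def tpr(threshold, g):
    """True positive rate for group g at a given threshold."""
    mask = (group == g) & (label == 1)
    return (score[mask] >= threshold).mean()

target = 0.80   # shared true positive rate every group must hit
grid = np.linspace(score.min(), score.max(), 1000)
for g in (0, 1):
    # Pick the threshold whose TPR lands closest to the target for this group.
    t = grid[np.argmin([abs(tpr(x, g) - target) for x in grid])]
    print(f"group {g}: threshold = {t:.3f}, TPR = {tpr(t, g):.3f}")
```

The trade-off is explicit here: equalizing one error rate means accepting different thresholds, and possibly different overall accuracy, for each group, which is precisely why a single universal fairness standard is so elusive.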

Ultimately, artificial intelligence shows remarkable potential for transforming our quality of life — but without appropriate oversight, it can inadvertently amplify the very human inadequacies it is intended to rectify and jeopardize the ideals of transparency and liberty. By taking a closer look both at who designs the algorithms that have begun to dictate our lives and govern our institutions and at how we go about designing them, we can construct a more equitable future assisted rather than devastated by machines.
