Societal upheaval during the COVID-19 pandemic underscores need for new AI data regulations

As a long-time proponent of AI regulation that is designed to protect public health and safety while also promoting innovation, I believe Congress must not delay in enacting, on a bipartisan basis, Section 102(b) of The Artificial Intelligence Data Protection Act — my proposed legislation and now a House of Representatives Discussion Draft Bill. Guardrails in the form of Section 102(b)’s ethical AI legislation are necessary to maintain the dignity of the individual.

What does Section 102(b) of The AI Data Protection Act provide, and why is there an urgent need for the federal government to enact it now?
To answer these questions, it is first necessary to understand how artificial intelligence (AI) is being used during this historic moment when our democratic society is confronting two simultaneous existential threats. Only then can the risks that AI poses to our individual dignity be recognized, and Section 102(b) be understood as one of the most important remedies to protect the liberties that Americans hold dear and that serve as the bedrock of our society.

America is now experiencing mass protests demanding an end to racism and police brutality, and watching as civil unrest unfolds in the midst of trying to quell the deadly COVID-19 pandemic. Whether we are aware of or approve of it, in both contexts — and in every other facet of our lives — AI technologies are being deployed by government and private actors to make critical decisions about us. In many instances, AI is being utilized to assist society and to get us as quickly as practical to the next normal.
But so far, policymakers have largely overlooked a critical AI-driven public health and safety concern. When it comes to AI, most of the focus has been on the issues of fairness, bias and transparency in the data sets used to train algorithms. There is no question that algorithms have yielded bias; one need only look at employee recruiting and loan underwriting for examples of the unfair exclusion of women and racial minorities.
We’ve also seen AI generate unintended, and sometimes unexplainable, outcomes from the data. Consider the recent example of an algorithm that was supposed to assist judges with the fair sentencing of nonviolent criminals. For reasons that have yet to be explained, the algorithm assigned higher risk scores to defendants younger than 23, resulting in sentences 12% longer than those of their older peers who had been incarcerated more frequently, while reducing neither incarceration nor recidivism.
But the current twin crises expose another, more vexing problem that has been largely overlooked: how should society respond when the AI algorithm gets it right, yet society is ethically uncomfortable with the results? Since AI’s essential purpose is to produce accurate predictive data from which humans can make decisions, the time has arrived for lawmakers to resolve not what is possible with respect to AI, but what should be prohibited.
Governments and private corporations have a never-ending appetite for our personal data. Right now, AI algorithms are being utilized around the world, including in the United States, to accurately collect and analyze all kinds of data about all of us. We have facial recognition to surveil protestors in a crowd or to determine whether the general public is observing proper social distancing. There is cell phone data for contact tracing, as well as public social media posts to model the spread of coronavirus to specific zip codes and to predict location, size and potential violence associated with demonstrations. And let’s not forget drone data that is being used to analyze mask usage and fevers, or personal health data used to predict which patients hospitalized with COVID have the greatest chance of deteriorating.
Only through the use of AI can this quantity of personal data be compiled and analyzed on such a massive scale.
Granting algorithms access to our cell phone data, social behavior, health records, travel patterns, social media content and many other personal data sets, in the name of keeping the peace and curtailing a devastating pandemic, can and will result in various governmental actors and corporations creating frighteningly accurate predictive profiles of our most private attributes, political leanings, social circles and behaviors.
Left unregulated, society risks these AI-generated analytics being used by law enforcement, employers, landlords, doctors, insurers and every other private, commercial and governmental enterprise that can collect or purchase them to make predictive decisions, accurate or not, that impact our lives and strike a blow to the most fundamental notions of a liberal democracy. AI continues to assume an ever-expanding role in the employment context, deciding who should be interviewed, hired, promoted and fired. In the criminal justice context, it is used to determine whom to incarcerate and what sentence to impose. In other scenarios, AI is used to restrict people to their homes, limit certain treatments at the hospital, deny loans and penalize those who disobey social distancing regulations.
Too often, those who eschew any type of AI regulation seek to dismiss these concerns as hypothetical and alarmist. But just a few weeks ago, Robert Williams, a Black man and Michigan resident, was wrongfully arrested because of a false face recognition match. According to news reports and an ACLU press release, Detroit police handcuffed Mr. Williams on his front lawn in front of his wife and two terrified girls, ages two and five. The police took him to a detention center about 40 minutes away, where he was locked up overnight. After an officer acknowledged during an interrogation the next afternoon that “the computer must have gotten it wrong,” Mr. Williams was finally released — nearly 30 hours after his arrest.
While this is widely believed to be the first confirmed case of an incorrect AI facial recognition match leading to the arrest of an innocent citizen, it seems clear it won’t be the last. Here, AI served as the primary basis for a critical decision that impacted an individual citizen: being arrested by law enforcement. But we must not focus only on the fact that the AI failed by identifying the wrong person, denying him his freedom. We must also identify and proscribe those instances where AI should not be used as the basis for specified critical decisions, even when it gets it “right.”
As a democratic society, we should be no more comfortable with being arrested for a crime we contemplated but did not commit, or with being denied medical treatment for a disease that will undoubtedly end in death over time, than we are with Mr. Williams’ mistaken arrest. We must establish an AI “no-fly zone” to preserve our individual freedoms. We must not allow certain key decisions to be left solely to the predictive output of artificially intelligent algorithms.
To be clear, this means that even in situations where every expert agrees that the data going in and the results coming out are completely unbiased, transparent and accurate, there must be a statutory prohibition on utilizing that data for any type of predictive or substantive decision-making. This is admittedly counterintuitive in a world where we crave mathematical certainty, but necessary.
Section 102(b) of the Artificial Intelligence Data Protection Act properly and rationally accomplishes this in both scenarios: where AI generates correct outcomes and where it generates incorrect ones. It does this in two key ways.
First, Section 102(b) specifically identifies those decisions that can never be made in whole or in part by AI. For example, it enumerates specific misuses of AI, prohibiting covered entities from relying solely on artificial intelligence to make certain decisions. These include the recruitment, hiring and discipline of individuals; the denial or limitation of medical treatment; and decisions by medical insurance issuers regarding coverage of a medical treatment. In light of what society has recently witnessed, the prohibited areas should likely be expanded to further minimize the risk that AI will be used as a tool for racial discrimination and harassment of protected minorities.
Second, for certain other specific decisions based on AI analytics that are not outright prohibited, Section 102(b) defines those instances where a human must be involved in the decision-making process.
By enacting Section 102(b) without delay, legislators can maintain the dignity of the individual by not allowing the most critical decisions that impact the individual to be left solely to the predictive output of artificially intelligent algorithms.
Source: TechCrunch
