Don't let industry write the rules for AI
Industry has mobilized to shape the science, morality and laws of artificial intelligence. On 10 May, letters of intent are due to the US National Science Foundation (NSF) for a new funding programme for projects on Fairness in Artificial Intelligence, in collaboration with Amazon. In April, after the European Commission released the Ethics Guidelines for Trustworthy AI, an academic member of the expert group that produced them described their creation as industry-dominated "ethics washing". In March, Google formed an AI ethics board, which was dissolved a week later amid controversy. In January, Facebook invested US$7.5 million in a centre on ethics and AI at the Technical University of Munich, Germany.
Companies' input in shaping the future of AI is essential, but they cannot retain the power they have gained to frame research on how their systems affect society or on how we evaluate the effect morally. Governments and publicly accountable entities must support independent research, and insist that industry shares enough data to be held accountable.
Algorithmic-decision systems touch every corner of our lives: medical treatments and insurance; mortgages and transportation; policing, bail and parole; newsfeeds and political and commercial advertising. Because algorithms are trained on existing data that reflect social inequalities, they risk perpetuating systemic injustice unless people consciously design countervailing measures. For example, AI systems to predict recidivism might incorporate differential policing of black and white communities, or those to rate the likely success of job candidates might build on a history of gender-biased promotions.
Inside an algorithmic black box, societal biases are rendered invisible and unaccountable. When designed for profit-making alone, algorithms necessarily diverge from the public interest: information asymmetries, bargaining power and externalities pervade these markets. For example, Facebook and YouTube profit from people staying on their sites and from offering advertisers technology to deliver precisely targeted messages. That could turn out to be illegal or dangerous. The US Department of Housing and Urban Development has charged Facebook with enabling discrimination in housing adverts (correlates of race and religion could be used to affect who sees a listing). YouTube's recommendation algorithm has been implicated in stoking anti-vaccine conspiracies. I see these sorts of service as the emissions of high-tech industry: they bring profits, but the costs are borne by society. (The companies have stated that they work to ensure their products are socially responsible.)
From mobile phones to medical care, governments, academics and civil-society organizations endeavour to study how technologies affect society and to provide a check on market-driven organizations. Industry players intervene strategically in those efforts.
via www.nature.com
This is from May 2019, but it is still relevant as far as I know. A story of industry capturing national and global governing bodies.