How Organisation And Scholarship Are Holding AI Accountable
The rapid integration of AI into all aspects of society raises urgent questions and presents significant dangers. AI technologies have been introduced across a range of sectors, with claims that automating sensitive social decisions improves outcomes and increases efficiency. From facial recognition and predictive policing to automated hiring: if these tools are being threaded through some of our most sensitive social institutions, what are the guardrails, and how do we ensure that existing patterns of discrimination are not replicated? Mounting evidence shows that these systems often produce harmful and biased results, in ways that are hard to contest and are often hidden behind corporate secrecy.

Meredith Whittaker is a Distinguished Research Scientist at New York University (NYU) and the Co-founder of the AI Now Institute at NYU, which investigates the political and social implications of artificial intelligence. She has worked extensively on matters of privacy and security, advising governments and civil society organisations on both policy direction and technical implementation.

At Falling Walls, Meredith will stress the urgency of addressing the problems of biased or harmful AI, and she will call for serious structural change, including greater accountability, in how technology is developed and how tech corporations are run, before bad AI infiltrates our infrastructure and it’s too late.