The use of AI by public authorities

Track 01 – Re-imagining the digital public sphere

Automated systems are increasingly part of our everyday lives – from scanning faces in airports and train stations, to shopping recommendations, to the use of Facebook posts for insurance purposes. These systems usually rely on massive data collection and profiling, which raise concerns that go well beyond the protection of personal data and privacy.

A number of public authorities are already partnering with companies to develop AI and discuss its use, raising challenging questions for civil and human rights as well as democracy. For example, in the US, police body-camera footage is used to train machine vision algorithms for law enforcement. Germany recently started testing new voice recognition software intended to identify which country undocumented migrants come from. Similar algorithms and automated systems are also in use in Italy, France, the UK, China and India, and will probably soon be deployed at all EU borders.

Faced with the rise of populist and Euro-sceptic movements across Europe, what does the increasing use of AI by public authorities mean for human rights?

The goal of this session is to launch a process for drafting civil society recommendations on public authorities' use of machine learning/AI and other automated decision-making systems. Ideally, based on the outcomes of the panel, a first draft could be developed between January and May, to be presented at RightsCon 2018.

Varoon Bashyakarla, Tactical Tech
Frederike Kaltheuner, Privacy International
Estelle Massé, Access Now
Jay Stanley, ACLU

Maryant Fernández Pérez, EDRi