Keynotes

Prof. Karen Yeung

Lost in translation: the troubling logics underpinning the embrace of governmental machine-learning based prediction tools for ‘citizen scoring’

The devastating impact of the take-up of automated decision-making systems in the public sector across several jurisdictions, including but not limited to systems that rely on machine learning (ML), is undeniable. Amongst the best known are the Dutch child benefit scandal, the Australian robo-debt fiasco, and the UK Post Office Horizon scandal. In this keynote, my concern is not with the harms themselves (which are shocking and self-evident), but with the underlying reasoning and logics that have led to their production. I will focus on the use of ‘predictive analytics’ or ‘big data analytics’, now ubiquitous in retail, entertainment and logistics and increasingly common in public sector contexts, which claim to estimate an individual’s ‘risk’ of specific behaviours, such as an offender’s likelihood of reoffending or the likelihood that a child will be subjected to abuse or neglect.

My lecture springs from the premise that the embrace of these data-driven ‘citizen scoring’ systems is underpinned by a set of promises, assumptions, beliefs and rationalities (collectively referred to as ‘logics’) that seek to transfer the success of ML in commercial contexts to the public sector with no regard for the fundamental differences between the two.

I critique three specific claims that have encouraged the adoption of commercial ML techniques by the state: (a) that ML produces more accurate predictions; (b) that these predictions offer valuable ‘actionable insight’ for public authorities; and (c) that ‘early intervention’ based on such actionable insight is desirable.

I argue that although it may be legitimate for profit-seeking firms to use probabilistic estimates derived from algorithms to inform low-stakes decisions (such as identifying which web ads to display to users to encourage more clicks), far more significant state interventions, such as denying the early release of a prisoner due to their perceived risk of reoffending or taking a child identified as ‘at risk’ into care, cannot be justified on the same terms. Yet, thanks to the uncritical adoption of commercial ML methods in the public sector, power and authority are being redistributed illegitimately, and sometimes unlawfully, without public awareness or democratic debate, producing injustice to the detriment of some of the most vulnerable members of society.
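
By way of illustration, here is a minimal sketch (in Python, with entirely invented feature names, weights and threshold; it reproduces no deployed system) of what such a ‘risk score’ amounts to: a model maps case features to a probability, and a threshold then hardens that probabilistic estimate into a decision.

```python
# Deliberately toy sketch of a data-driven 'risk score': a logistic model
# maps case features to a probability, and a threshold converts that
# probability into a binary decision. All features, weights and the
# threshold below are invented for illustration only.
import math

def risk_score(features, weights, bias):
    """Return a probability in (0, 1) via the logistic (sigmoid) function."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical inputs: [prior_referrals, age_normalised, missed_appointments]
weights = [0.8, -0.3, 0.5]   # invented coefficients
bias = -1.2

p = risk_score([2, 0.4, 1], weights, bias)
print(f"estimated 'risk': {p:.2f}")                         # a probabilistic estimate...
print("flag for intervention" if p > 0.5 else "no action")  # ...hardened into a decision
```

The point of the sketch is only that the system’s output is a probabilistic estimate; everything downstream of the threshold is a policy choice, not a property of the mathematics.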

Bio

Karen Yeung is Interdisciplinary Professorial Fellow in Law, Ethics and Informatics, based at Birmingham Law School and the School of Computer Science, following previous appointments as Professor of Law at King’s College London and Tutorial Fellow in Law at St Anne’s College, Oxford.
She is recognised worldwide as a leading scholar in the governance of AI, with her ongoing work highlighting how AI systems can undermine the social conditions necessary for democratic governance, human rights and the rule of law. Yeung has spent the last ten years of her almost 30-year academic career examining the legal, ethical, social and democratic implications of automation and the ‘computational turn’.
She has been extensively involved in technology policy initiatives at the European and international level, including membership of the EU’s High-Level Expert Group on AI and the Council of Europe’s Expert Committee on the human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT) (2018-2020), for which she served as Rapporteur, producing the research report Responsibility and AI, which offers a critical analysis of the characteristics of machine learning systems in contemporary society that make them prone to undermining human rights and eroding democratic freedom, prompting important questions about responsibility for those impacts.[1]
Within the UK, she is a member of the Strategic Advisory Board of UKRI’s Trustworthy Autonomous Systems programme and of the Royal Academy of Engineering’s Steering Group on Technology Pathways and Meaningful Innovation, and is expert advisor to the Digital Regulation Cooperation Forum (DRCF). She previously served as a member of the Royal Society and British Academy Working Group on Data Governance, as ethics and expert advisor to the Topol Independent Technology Review for the NHS (2018-2019), and as Chair of the Nuffield Council on Bioethics Working Party on Genome Editing and Human Reproduction (2016-2018). She actively supports many policy initiatives and civil society organisations that seek to nurture human rights and democracy in a digital age, including the UN’s Global Judicial Integrity Network and the Organization for Security and Co-operation in Europe (OSCE) Office for Democratic Institutions and Human Rights (ODIHR).
Recent books include Algorithmic Regulation (co-edited with Martin Lodge, Oxford University Press, 2019) and The Oxford Handbook of Law, Regulation and Technology (co-edited with Roger Brownsword and Eloise Scotford, Oxford University Press, 2017); she is currently completing the manuscript of a second edition of An Introduction to Law and Regulation (with Sofia Ranchordas) for publication in 2024. Yeung serves on the editorial boards of several leading peer-reviewed academic journals, including Big Data & Society, Data & Policy, Public Law, Technology and Regulation, and the Journal of Cross-disciplinary Research in Computational Law.

Prof. dr. Lokke Moerel

Why GDPR is not fit to regulate the metaverse

Friend and foe agree that privacy and cyber risks are vastly exacerbated in the XR experience of the metaverse. To live a digital life, exponentially more and new types of data are collected, digitizing not only behavior but also inner emotions, resulting in even deeper profiling and surveillance of users, the same dynamic that has already turned current social media into a polarizing force. First experiences with the metaverse further show that security breaches create new risks to the safety of users. Hardware (like headsets) can be weaponized by bad actors to physically harm users. Sexual harassment is already an issue on current digital platforms, but ‘groping’ in virtual reality is interpreted by our brains as an actual threat and is equally traumatic. In other words, security-by-design must be stretched into a broad assessment of safety-by-design.

Because compliance is embedded in the design of new technologies, important design decisions are made by developers. And because so many individuals from so many companies are involved in the development process, the ‘problem of many hands’ arises, where nobody ultimately has an overview of, or feels responsible for, the complete end result. This is not without risks: code carries the power to affect how we perceive the world. Powered by artificial intelligence, digital environments shape themselves, propelling issues to the fore or making them disappear. In short: technology exerts power; that power will only grow with the metaverse and is currently entrusted to those who write the code. We have no time to lose. The world’s largest tech companies forecast that they will be able to launch their metaverse consumer products within the next three to five years. Rather than conducting the societal debate as an afterthought, once the metaverse has materialized (and has become difficult to change), we should get ahead of the game. But how can we ensure that experts, regulators and stakeholders are involved at the front end of developments, rather than being left with enforcing and litigating privacy issues as a last resort? This keynote discusses the issues that prevent the GDPR from effectively regulating the metaverse and provides concrete legislative proposals for how we can turn the tide.

Bio

Among the world’s best-known privacy & cyber advisors, Lokke Moerel is regularly called upon by some of the world’s most complex multinational organizations to confront their global privacy and ethical challenges when implementing new technologies and digital business models. Lokke is Senior of Counsel with the global technology law firm Morrison & Foerster, professor of Global ICT Law at Tilburg University, member of the Dutch Cyber Security Council (the advisory body of the Dutch cabinet on cybersecurity), and member of the Monitoring Committee Dutch Corporate Governance Code. Lokke has received many international awards and was recently recognized in Global Data Review’s ‘Women in Data 2022’ for being at the cutting edge of legislation, regulation and technology around the world.

For a publication on Algorithmic Accountability for AI applications, see the Oxford Business Law Blog.

For her TEDx talk on AI & Ethics, see: https://www.youtube.com/watch?v=HPyHf4IWDQc

Dr Michael Veale

Data Protection and Encrypted Computation

Firms controlling significant technological infrastructures are today able to analyse, model and target individuals and communities while claiming that no personal data ever leaves their devices. They use the language of privacy-enhancing technologies (PETs), but these technologies typically preserve only confidentiality, leaving many other rights, freedoms and ethical issues unaddressed. Data protection aims to protect many rights and freedoms in a digital age, but hooks onto ‘personal data’ as its material scope, linking the regime intrinsically to issues of confidentiality. In this talk, I’ll show the nature of the challenge encrypted computation creates: not that data protection (always) fails to apply, but that, as currently understood, it fails to apply in a way that meets its original aims of rebalancing power in informationalised societies. Does the rise of encrypted computation mean we should adapt the tools of data protection, or do we need to think beyond it?
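
To make the phenomenon concrete, below is a minimal, deliberately toy sketch (in Python) of one family of encrypted computation: additively homomorphic encryption in the style of Paillier, where a party can add values it is cryptographically unable to read. The tiny primes and simplifications are for illustration only; production PETs are far more elaborate.

```python
# Toy Paillier-style additively homomorphic encryption: multiplying two
# ciphertexts yields a ciphertext of the SUM of the plaintexts, so a
# server can aggregate values it never sees. Illustrative only; real
# schemes use ~2048-bit primes and hardened implementations.
import math
import random

def keygen(p=1_000_003, q=1_000_033):          # toy primes for illustration
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                                  # standard Paillier simplification
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)                 # randomiser; assume gcd(r, n) == 1
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

pk, sk = keygen()
n = pk[0]
# Two 'devices' encrypt their values; a server multiplies the ciphertexts,
# which adds the underlying plaintexts without ever reading 17 or 25.
c = encrypt(pk, 17) * encrypt(pk, 25) % (n * n)
assert decrypt(pk, sk, c) == 17 + 25
```

The sketch also shows why confidentiality alone settles little: the arithmetic is hidden, but the modelling, targeting and power asymmetries the talk addresses remain intact.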

Bio

Dr Michael Veale is Associate Professor in Digital Rights and Regulation at University College London’s Faculty of Laws. His research focusses on how to understand and address the challenges of power and justice that digital technologies and their users create and exacerbate, in areas such as privacy-enhancing technologies and machine learning. This work is regularly cited by legislators, regulators and governments, and Dr Veale has consulted for a range of policy organisations including the Royal Society and British Academy, the Law Society of England and Wales, the European Commission, and the Commonwealth Secretariat. Dr Veale holds a PhD from UCL, an MSc from Maastricht University and a BSc from LSE. He tweets at @mikarv.