Video

Speakers

Aad van Moorsel is a Professor in Computer Science specialising in cyber security, specifically interdisciplinary research at the intersection of computing, psychology, and social sciences. He has been Principal Investigator of several interdisciplinary projects funded by UK, EU, and US funding bodies. His work includes the study of economic mechanism design to incentivise healthy security behaviour, research into choice architecture for that purpose, and, more recently, the study of trustworthy AI systems.

Karen Elliott is an Associate Professor in Enterprise/Innovation, specialising in socio-technical interdisciplinary research between business, technology, and social sciences. Named in Innovate Finance's 'Standout 35' Women in FinTech Powerlist, she is Co-Investigator of the FinTrust (EPSRC) and Finclusion (Gates Foundation/Turing Institute) projects with Prof. van Moorsel and serves on the IEEE Ethical AI and ForHumanity committees. Her work examines FinTech, trust, digital ethics, and Corporate Digital Responsibility (CDR) to promote an equitable digital society.

Summary

The online world is for many people a curious but uncertain one. It enriches many facets of life but at the same time exposes citizens to a variety of threats that may cause harm to them, their loved ones and wider society. There is growing evidence that many such harms result from a complex interaction of societal processes driven by diverse stakeholders (e.g., Mehrnezhad 2021). These complex harms tend to happen to citizens, and, in most cases, they are not purposely caused or easily controlled by citizens. The AGENCY project is motivated by the firm belief that establishing citizen agency is a sine qua non for any transformative approaches that reduce these complex harms. That is, citizens need to be empowered through technologies and user-centred tools that enable them to gain a sense of control, ownership, security, and consequently trust and assurance in their online activities. Engaging with the general UK population and identifying demographic markers that intersect with complex harm, AGENCY aims to establish interdisciplinary co-design principles, technology foundations and collaborative governance procedures to assure online citizen agency in the presence of multiple stakeholder interests. If AGENCY succeeds, it will provide a profound understanding of the role of agency in reducing complex online harm and will deliver collaborative methods, technological building blocks and scientifically grounded best practices for our society to provide more proactive and structured approaches to protecting citizens online.

Curated Notes

AGENCY is a new research programme, currently in its initiation phase, that will formally start in September 2022. Funded by the UK government through UKRI, it is a three-year programme in which a team of ten will look at agency on the internet. The approach is multi-disciplinary, bringing together researchers in culture, law, business, and design with computer scientists interested in AI, IoT, and distributed systems. All will come together to work on a number of case studies exploring how agency relates to online harm and/or a feeling of safety on the internet. There are four case studies:

1. Smart home;

2. Identity management: how people can use these technologies productively whilst retaining control of their data;

3. Fake news, working with the BBC: misinformation and disinformation in the news and in the financial system;

4. Personal well-being and health: technologies used through wearables and apps, e.g. female health technologies, working with Swiss Health, a specialist in female health products.

Participatory and co-design approach: work with local groups on the ground, e.g. women who are victims of domestic violence, and co-design with all stakeholders. Co-design is key to the success of this project.

The project is divided into different work packages. One will focus mainly on current legislation and regulation, e.g. the UK Financial Conduct Authority (FCA) pushing its 'Consumer Duty': how and when are businesses checking for vulnerabilities? This work is heavily focused on Corporate Digital Responsibility, shifting responsibility onto organisations, which otherwise face consequences in ESG and UN SDG outcomes. There are many considerations around the current UK Online Safety Bill; part of the research will focus on boundaries and alignment with it through technology. The goal is ultimately to protect the consumer, which spans economics, environmental sustainability, and, importantly, the law. The last work package is business-focused: the cost/benefit analysis for businesses, particularly with respect to vulnerable customers.

An adjacent research programme, which will also launch in September 2022, is for a FinTech network, part of which includes funds for distribution in collaboration with academics. Common themes are co-creation between academia, business, and people on the ground.

Profit must have a purpose, not conflict with purpose, together with ethics by design.

The team wants to collaborate with groups like ToIP to understand different ways of addressing harms and developing robust business models.

  • Invitations to participate in workshops around each of the subjects will come as the AGENCY programme gets under way.
  • Membership of the FinTech network will be possible for many; partnerships are especially important. The aim is to set up a network of researchers and innovators with an academic interest.

DISCUSSION:

Human-first approach: Karen is already an ambassador for the Digital Poverty Alliance and has a background in behavioural psychology and sociology, so she always starts with the human.

What is agency? This is why the project works across disciplinary perspectives: what is agency, how is it interpreted online, and how does it play out online? This will be addressed in a fundamental way in the project.

Standards for informed consent: How easy is it for an individual to give informed consent to data use/processing, and are there ways we can shape a standard for some of this? Karen: Yes, this is part of the Consumer Duty framework. If you are informed, then you have agency, but then we come back to harms (e.g. coercion). Like trust, it is hard to define, yet it starts to have consequences in law, so the contractual/consent process is key.

A potential approach: a standard visual practice for communicating complex consents, what the data is used for, and how it is stored. The standards angle is interesting and is addressed in the AGENCY project, where the designers co-design with stakeholders; standardised approaches do not work unless they are set up that way.

Smart home: How do you intend to address the needs in the smart home? You have no idea what these devices are ingesting, how they are going to use the data, etc. How do you go about managing this?

Aad: In a new house here in Newcastle with Google Home installed, it took me six months to turn it off and still control the system, even as a computer scientist. The key issue is: why should any data go to the provider at all? The project is also looking at future homes, including ambient assisted living, and at foresight techniques for future thinking; some of these smart homes will be built in Newcastle.

Karen: I also sit on a group focused on dementia support, which is always presented with centralised solutions. Whose business model is it? Who is profiting? There is almost no control or transparency, and there are unethical billing practices or new service packages. Agency is a key factor in resilience and capability indicators of vulnerability.

Anita Rao: When we sign up for these services and they collect the data, the legal basis is always about 'improving services'. If the data is not aggregated and anonymised but is personalised, what are the consequences?

Karen: Good questions, but tech is not the panacea. Who is really benefiting? It is still the profit/purpose debate, and profit still wins. Better clarity is needed on what this means for me and my data.

Rumsfeld discussion: What about the known unknown data users, not just the big known ones like Meta and Google, e.g. children's toys using AI?

Karen: The main focus is on the financial sector, but this also applies to non-traditional finance: established companies use data in a fairly well-regulated way, whereas start-ups fly under the radar. We need courageous leadership from influencers to address harms early on; regulators should not always be playing catch-up.

Andrew: Related research of interest: https://anatomyof.ai. Note the changing interaction models, moving from physical, to screen-based, to voice-based, or to gaming environments with blended and virtual realities; people might be signing up in new and unusual ways. New models and modes of interaction will also be interesting for the research.

CONCLUSION: Future beyond AGENCY:

Aad: Decentralised architectures made more practicable and meaningful for people; perhaps a start-up based on the technology we develop; on the law and business side, the hope is to target influence on policy.

Karen: The law school will support policy influence, and maybe also policy standards, which need to be dynamic and able to change, perhaps through an agile process. We don't want to reinvent what ToIP is doing but to elevate it to a new level, backed up with evidence and research to give stronger tools to policy-makers and governments. What can we do to make a positive change to people's lives? There will be a spin-out and best-practice guidance.

Anita: Thank you, this was a great discussion. Very important work. Please share progress and let us know how we can help.

Links & Further Reading

Folder in Google Drive with related research papers

Technologies for Trustworthy Machine Learning: A Survey in a Socio-Technical Context (2021)

The relationship between trust in AI and trustworthy machine learning technologies (2020)

Know Your Customer: Balancing Innovation and Regulation for Financial Inclusion (2020)

Towards an Equitable Digital Society: Artificial Intelligence (AI) and Corporate Digital Responsibility (CDR) (2021)

FinTrust: https://fintrustresearch.com/

From Vikas Malhotra: Human-first approach, not always 'consumer' or 'customer'; see https://www.woplli.com/architecture-principles

UK Financial Conduct Authority Consumer Duty principle: https://www.fca.org.uk/news/press-releases/fca-introduce-new-consumer-duty-drive-fundamental-shift-industry-mindset

Digital Poverty Alliance

From Andrew Slack: Related research of interest: https://anatomyof.ai




