Posted 2019-11-04 13:39:48 by Vian Bakir and Andrew McStay, Bangor University

Vian Bakir and Andrew McStay regularly engage with policy-makers in the digital media field, under the auspices of the Network for Study of Media & Persuasive Communication and the Emotional AI lab.

In 2019, one of their written submissions was to the Digital, Culture, Media and Sport Committee’s Inquiry into Reality TV. This inquiry was set up after the death of a participant following filming for The Jeremy Kyle Show and the deaths of two former contestants of the reality dating show Love Island. The Inquiry is considering production companies’ duty of care to participants, asking whether enough support is offered both during and after filming, and whether further regulatory oversight is needed in this area.

Bakir and McStay’s submission drilled down into the Inquiry’s questions: What is the future for reality TV of this kind? How does it accord with our understanding of, and evolving attitudes to, mental health? Their response was titled Datafied Bearbaiting and Emotional AI: Anticipating the Quantified Jeremy Kyle Show.

Bakir and McStay observe that we live in a world where the media industry is keenly attuned to the value of emotions. This includes advertisers who speak of emotional sales propositions, nudges and behavioural hacks; programme developers who see that emotion and strong authentic reactions generate audiences and audience engagement; and a news environment that is increasingly emotionalised. Beyond generalised emotion, data about emotion is increasingly being collected and used for diverse purposes. Channels, including the BBC, have employed emotional AI companies to optimise and emotionally tweak their programming and marketing.

Bakir and McStay offer a cautionary note on a likely future of reality media, one in which the media industry uses data about emotions to add new layers of engagement through “emotional AI”. Such emotional AI consists of affective computing and AI techniques that read and react to emotions through text, voice, computer vision and biometric sensing.
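To make concrete what “reading” emotion from text might involve, the following is a minimal, hypothetical sketch of a lexicon-based scorer. The emotion categories, word lists and scoring rule are invented for illustration; they are not drawn from any vendor’s system or from Bakir and McStay’s submission, and commercial systems are far more complex (and, as the authors note, contested).

```python
# A deliberately simple, hypothetical illustration of text-based "emotion
# reading": count how many words in a piece of text match a small emotion
# lexicon. All categories and word lists below are invented for illustration.

EMOTION_LEXICON = {
    "joy": {"love", "delighted", "happy", "thrilled"},
    "anger": {"hate", "furious", "outraged", "disgusted"},
    "sadness": {"miserable", "heartbroken", "lonely", "devastated"},
}

def score_emotions(text: str) -> dict:
    """Count lexicon hits per emotion category in a piece of text."""
    words = text.lower().split()
    return {
        emotion: sum(word.strip(".,!?") in vocab for word in words)
        for emotion, vocab in EMOTION_LEXICON.items()
    }

if __name__ == "__main__":
    print(score_emotions("I was furious and heartbroken after the show."))
    # {'joy': 0, 'anger': 1, 'sadness': 1}
```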

They observe that while emotional AI has scope to enhance audiences’ experience of media, it also creates scope for abuse as broadcasters compete for audience attention, engagement and advertising revenue. They recommend that regulators and policymakers be alert to the potential dangers of a media environment in which emotion is quantified and used by the media industry.

To illustrate the potential for abuse, they contemplate possible scenarios. One is the uncancelled Jeremy Kyle Show of the early 2020s, where lie-detection tests are conducted not in private but in real time, via a range of sensing technologies. (Vendors of facial coding, voice analysis and other biometric services all claim to be able to detect lies and sincerity.) Another is Big Brother in the early 2020s: to maximise audience engagement, housemates are asked to wear devices that track their heart rate, respiration and skin conductivity to gauge their affective and emotional states. This emotional AI data is used to optimise live footage for story development within the show by modulating the emotional interactions of the participants: for instance, creating situations that encourage them to express more conflict, hatred, jealousy or love.
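As a rough illustration of how such wearable readings might be reduced to an “emotional state” for story development, the sketch below maps simple threshold checks over heart rate, respiration and skin conductance to a coarse arousal label. The field names, thresholds and labels are hypothetical and chosen only to make the scenario tangible; they do not describe any real production system.

```python
# A hypothetical, deliberately crude sketch of turning wearable readings into
# an "arousal" label. Thresholds, field names and labels are invented for
# illustration; real affective-computing pipelines are more sophisticated and,
# as the submission argues, scientifically contested.

from dataclasses import dataclass

@dataclass
class WearableSample:
    heart_rate_bpm: float          # beats per minute
    respiration_rate: float        # breaths per minute
    skin_conductance_us: float     # microsiemens

def arousal_label(sample: WearableSample) -> str:
    """Map raw biometric readings to a coarse arousal label (illustrative only)."""
    score = 0
    score += sample.heart_rate_bpm > 100        # elevated heart rate
    score += sample.respiration_rate > 20       # rapid breathing
    score += sample.skin_conductance_us > 10    # heightened skin conductance
    return ["calm", "engaged", "agitated", "highly agitated"][score]

if __name__ == "__main__":
    print(arousal_label(WearableSample(112, 22, 8.5)))  # "agitated"
```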

Their submission identified numerous potential harms. These include:

a) That emotional AI systems have not been subject to independent scientific scrutiny.
b) Questions over the desirability of having one’s emotions profiled in the first place (not least because of privacy and data protection concerns).
c) Exploitation of people on-screen (coercion, manipulation or loss of decisional autonomy).
d) Opacity of decision-making about emotional states, and the difficulty of a citizen challenging such decisions.
e) The problem of universality: proponents of facial coding are adamant that “basic emotions” are universal, but what of ethnocentric bias, the representativeness of training data, and individual-level variation in affective reactions and emoting?
f) The risk of creating an understanding of emotional life that suits data analytics, but not people.

Bakir and McStay will be digging into digital futures and concerns about their governance in a new MA programme, MA Digital Futures and Governance, to be launched in 2020. With an explicit focus on solving real-world problems of the imminent digital future, it will draw on a range of their policy submissions to various governance bodies.

For instance, across 2019, Bakir and McStay made other submissions to parliament. These include:

- An invited submission to the All-Party Parliamentary Group (APPG) on Electoral Campaigning Transparency. Their submission was called: CULTURE CHANGE: Incentivise political campaigners to run civil and informative election campaigns

- An invited submission to the House of Lords Select Committee on Democracy & Digital Technologies. Their submission (not yet published) was called: AGAINST OPACITY, OUTRAGE & DECEPTION: Towards an ethical code of conduct for transparent, explainable, civil & informative digital political campaigns

Vian Bakir is Professor in Journalism & Political Communication at Bangor University, Wales, UK. Her research investigates surveillance of data, the security state and public accountability, and disinformation and deception in journalism.

Andrew McStay is Professor of Digital Life at Bangor University, UK. See his Emotional AI website for live projects; he publishes on issues central to the digital economy, including privacy and artificial intelligence. His recent book, Emotional AI: The Rise of Empathic Media, examines the impact of technologies that make use of data about emotional life.