Shared-Posthuman Imagination:

Human-AI Collaboration in Media Creation

Investigating the use of generative AI tools

in media creation in the context of responsible AI.

Generative AI is transforming media production, changing the parameters of what constitutes creativity, authorship, and ownership. Users can now produce stories, scripts, images, music, and even entire films simply by prompting widely available, often free-to-use AI models trained on large datasets. These same technological breakthroughs, however, have brought with them a series of moral, ethical, and legal challenges that need to be addressed as a matter of urgency.

“My dream is a version of the posthuman that embraces the possibilities of information technologies without being seduced by fantasies of unlimited power and disembodied immortality, that recognizes and celebrates finitude as a condition of human being, and that understands human life is embedded in a material world of great complexity, one on which we depend for our continued survival.”


N. Katherine Hayles

Summary

The project has received funding from the Arts and Humanities Research Council (AHRC) as part of the Bridging Responsible AI Divides (BRAID) programme. The project set out to interrogate and understand the impact that generative AI tools are having on the concepts of creativity, collaboration, and bias within media production, where questions of control, agency, skill, labour, exploitation, and representation are particularly pertinent to the hopes and fears of the creative industries. Bringing together stakeholders from different parts of the generative AI media landscape, we sought to foster relationships among developers, filmmakers, policymakers, and end users. By facilitating discussions across AI divides, this project revealed a complex landscape in which the perceptions, impact, and applications of AI tools vary across different industrial contexts. 

Research questions 

  1. How is the notion of media creativity being re-evaluated within a context of responsible AI, and how can we ensure that augmentations to human creativity happen in ways that protect against extractive database practices, intellectual property infringements, and displacements of human labour?  
     
  2. What are the implications of human-AI collaboration, and how can we make sure that collaborative work involving AI tools is accountable, just, and accessible to all? 
     
  3. How does the use of generative AI in media production perpetuate social biases, and how can we ensure that there is justice, transparency, and safety regarding the training of the large language models on which AI tools are built? 

Methodology

A significant part of our project involved a series of four workshops covering the stages of media production: Screenwriting, Image Creation, Editing, and Sound and Music with AI. The workshops attracted 192 registrants, with 110 survey responses collected. These sessions were an invaluable opportunity for participants, including academics, media producers, and industry professionals, to engage directly with AI tools and develop competency in integrating them into media production. Survey feedback indicated significant improvements in participants’ understanding of AI’s operational, ethical, and practical aspects, with many expressing an increased awareness of biases within datasets and the legal implications surrounding AI-generated content. We developed a comprehensive stakeholder map, charting these relationships and identifying potential beneficiaries, thus clarifying the broad impact generative AI tools (GAIT) may have on various sectors.

To consolidate insights from the workshops, we organised an Expert Bridging Group (EBG). We invited 12 experts, spanning academia, film, and industry, who shared their experiences on issues such as ethics, IP, and creative autonomy in AI-driven media production. The EBG further strengthened the interdisciplinary and international character of our research group, integrating diverse perspectives and expertise into our recommendations.

Outputs

Report

A comprehensive report including an interpretation of the data gathered from the workshops, policy recommendations, best practice recommendations, and other resources.

Policy Recommendations

Twelve recommendations addressing challenges in four main areas: Authorship/IP; Labour; Diversity and Accessibility; and Creative Education and Communication.

Best Practice Recommendations

Based on seven core principles, these offer practical guidance for the responsible use of generative AI.

Zine

An accessible pocket version of the report.

Workshop Videos

Recordings from the workshop presentations.

Core Principles

Seven principles for responsible engagement with generative AI in media.

Research Team


Dr Liam Rogers

Dr Selin Gurgun

Boyuan Cheng

Dr James Slaymaker

Stephanie Prajitna

International Co-Investigators

Dr Catherine Griffith

Prof. Kejun Zhang

Illustration and Design

Ellie Shipman