Project Lead @ Better Images of AI

Better Images of AI

Strategy · Program management · Writing
source: Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0

Overview

The dominant imagery of AI reinforces dangerous misconceptions and, at best, limits public understanding of how AI systems currently work and are used, as well as their potential and implications. AI4Media teamed up with Better Images of AI / We and AI and AIxDESIGN to create and curate a season of artist commissions and a community open call to re-imagine a better visual language for AI.
Project objectives:
  • Commission 10+ new images that avoid perpetuating unhelpful myths about artificial intelligence;
  • Commission those images from artists working across intersectionally diverse experiences and perspectives of AI;
  • Document the practical opportunities and challenges for image-makers creating images of AI; and
  • Test and document a community-led approach to Better Images of AI image commissions.

Approach

We began this research with three hypotheses:
  1. The use of personal data in AI can result in algorithmic discriminatory harm;
  2. Design interventions, specifically service design and UX design, can help mitigate discriminatory harms created by AI-driven products and services;
  3. Data protection guidance can help mitigate discriminatory harms created by AI-driven products and services.
We developed the following mixed-methodology approach to explore these hypotheses and identify how the ICO might support AI product teams to identify, assess, and address discriminatory harms:
  1. desk research,
  2. expert interviews,
  3. collaborative workshops (hosted by our partners AIxDesign), and
  4. stakeholder working sessions.
We created this approach for the following reasons:
  • An insights-led approach, informed by desk research and community advocates, offered valuable insights into the current state of non-discriminatory algorithmic design and guidance, while remaining time- and cost-effective.
  • These insights helped us identify areas of strategic interest for focused, collaborative workshops with AI product teams, to better understand and evidence the challenges they face and the support they need.
  • Following the lessons of anti-discriminatory practitioners and organisations such as equity X design, Design Justice Network, Hera Hussain (author of trauma-informed design and Founder & CEO of Chayn), and antiracistby.design, we sought to prioritise the people and communities disproportionately impacted by algorithmic systems – namely women and people of colour – because they are best placed to tell us about the human impact of algorithmic harm.
Outcomes