As the global AI Safety Summit 2023 begins at Bletchley Park, Buckinghamshire, Refuge Research Lead Dr Michaela Bruckmayer outlines concerns for the safety of women and girls as technology is weaponised as a tool for domestic abuse.
Last week, the Online Safety Act received Royal Assent, heralding a new era of internet safety and choice by placing world-first legal duties on social media platforms. Dr Michaela Bruckmayer said that this came after “two years of tireless campaigning with other VAWG (violence against women and girls) sector colleagues”. She continued:
“The Online Safety Act criminalises the sharing of, or threatening to share, intimate images known as ‘deepfakes’, a common form of intimate image abuse whereby perpetrators use AI or computer technology to create sexualised or otherwise abusive images to harass, intimidate and abuse women and girls.
As policymakers and experts consider how Ofcom guidance under the Online Safety Act will deliver these much-needed protections for women and girls and clamp down on online abuse, they must evaluate the dangers of AI technology being misused by perpetrators of domestic abuse.”
Dr Michaela Bruckmayer stated that the AI Safety Summit is meant to be about AI safety, but “once again women and girls have been forgotten”. She continued:
“It is extremely disappointing that no VAWG organisations appear to have been invited to this AI Safety Summit, to add their voice and discuss how we can all ensure women and girls are safer from artificial intelligence technology being used against them when it comes to intimate image abuse and other forms of technology-facilitated domestic abuse. Refuge urges governments, leading AI companies and technological experts to consider the risks of AI to women and girls and give women a voice in this important conversation. Women’s safety cannot be forgotten; we need a seat at the table.”