A new post from Payton Iheme, Vice President and Head of Global Public Policy at Bumble, discusses how the dating platform is working with other organisations to address unethical uses of AI.

The post highlights that AI chatbots and synthetic media (AI-generated images, text, music, and so on) are gaining attention. While these developments are exciting, she cautions that the same tools can be used in harmful ways.

AI-generated images can lead to problems such as deepfake pornography, where a person's likeness is used to create sexually explicit media without their consent.

“If women and folks from underrepresented groups do not have a seat at the table at the genesis of new innovations, we’re, as the expression goes, on the menu. We should have a voice in the very production of this emerging media, not simply the conversation surrounding its development,” Iheme wrote.

To address these concerns, Bumble has been working behind the scenes with Partnership on AI, a non-profit coalition focused on the ethical use of AI. Bumble is joining organisations such as the BBC, Adobe, TikTok, OpenAI, Synthesia and more for the launch of the coalition’s new framework for responsible synthetic media.

This isn’t the first time Bumble has helped tackle online misogyny. It recently open-sourced its AI detection tool, which reduces the sharing of unsolicited lewd images. It has also worked with lawmakers in the UK and the United States to criminalise cyberflashing.