Bumble announced a new measure on Thursday, July 18, targeting AI-generated content on its platform. Users of the women-first dating app can now report profiles they suspect contain AI-generated photos or videos. "By introducing this new reporting capability, we can better understand how bad actors and fake profiles are using AI dishonestly, so our community can feel safe making connections," said Risa Stein, VP of Product at Bumble.
In support of the change, Bumble cited a recent survey in which more than 71% of Gen Z and Millennial respondents said the use of AI-generated profile photos and bios on dating apps should be restricted. Notably, a majority of those surveyed also reportedly consider using AI to generate photos of themselves doing things they have never done to be a form of "catfishing," in which people create online profiles with false information in order to deceive, defraud, or harass a victim.
In addition to the new reporting option, Bumble has rolled out several other safety features recently. Earlier this year, the dating app launched an AI-powered tool called the Deception Detector, which helps identify fake profiles. Bumble also uses AI to automatically blur potentially lewd photos and warn users that they have been sent something inappropriate.