Bumble has stepped up its efforts to provide a safe dating space for users by banning body-shaming on the app.
The app will use an algorithm to flag derogatory comments about someone’s appearance.
“This includes language that can be deemed fat-phobic, ableist, racist, colorist, homophobic or transphobic,” the company said in a statement.
Human moderators will then review the flagged content. People who use offensive language in their profile or in Bumble’s chat function will first receive a warning for their behavior. Repeated incidents or particularly harmful comments will result in a permanent ban.
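Bumble hasn’t published how this escalation works internally, but the policy as described (warn on a first offense, ban on repeats or severe cases) can be sketched in a few lines. Everything below — the names, the one-warning threshold, the `severe` flag — is an assumption for illustration, not Bumble’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ModerationState:
    """Per-user moderation record (hypothetical)."""
    warnings: int = 0
    banned: bool = False

def review_flagged_message(state: ModerationState, severe: bool) -> str:
    """Apply the described warn-then-ban policy to one
    human-confirmed violation.

    A first mild offense draws a warning; a repeat offense, or a
    particularly harmful one, results in a permanent ban.
    """
    if state.banned:
        return "already banned"
    if severe or state.warnings >= 1:
        state.banned = True
        return "permanent ban"
    state.warnings += 1
    return "warning issued"
```

A first mild violation returns `"warning issued"`; a second violation by the same user, or a severe first one, returns `"permanent ban"`.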
If derogatory language slips through the AI net, you can also report it within the app.
We’ve asked the firm for further details on how the algorithm works, and will update this article when we have more information.
The new safeguard isn’t Bumble’s first attempt to tackle abuse with AI.
Last June, the app launched a feature called Private Detector that automatically identifies and blurs unsolicited nude photos sent on the platform. The company said the system captures explicit images in real time with 98% accuracy.
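The detect-then-blur flow behind a feature like Private Detector can be sketched minimally: a classifier scores an image, and anything above a threshold gets blurred before display. The classifier itself is a stand-in here (Bumble’s real model and its claimed 98% accuracy are theirs); the code below just shows the gating logic with a simple 3×3 box blur on a grayscale pixel grid:

```python
def box_blur(pixels):
    """3x3 mean filter over a 2-D grayscale grid; edge pixels
    average over whichever neighbors exist."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [pixels[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

def moderate_image(pixels, score, threshold=0.9):
    """Blur the image if the upstream classifier score exceeds the
    threshold; otherwise pass it through untouched.

    `score` comes from a hypothetical classifier standing in for the
    real vision model; the 0.9 threshold is an assumption.
    """
    return box_blur(pixels) if score >= threshold else pixels
```

With a high score the bright center pixel gets smeared across its neighbors; with a low score the image is returned unchanged.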
The new feature arrives in a boom period for dating apps, as singles stuck at home through long lockdowns are forced to look online for love.
The industry has responded to the shift to distance dating by introducing new features like in-app video chats and virtual events. At Bumble, the moves appear to be paying off: the company reported a 35.8% year-over-year increase in revenue in its recent IPO filing.
Even casual sex site Feeld is now encouraging users to stay at home, by launching virtual locations called “Fantasy Bunker” and “Remote Trios.” My innocent mind can only imagine what goes on behind those screens.
But now that the novelty of virtual dating is wearing off, expect sex through apps to explode in 2021.
Published January 28, 2021 — 12:44 UTC
This article was first published on The Next Web