Using auto-moderation to improve community interactions

Flickr has always been a global community of photographers, and we remain committed to supporting artistic expression through photography in all its forms, including explicit photos. To serve many different audiences around the world, each with its own standards for explicit content, we worked with our community from the beginning to develop moderation guidelines. We rely on those guidelines to make sure people only see the type of content they want to see.

In that spirit, we’re introducing new tools and technologies that will help ensure the content community members see on Flickr meets their expectations.

Because of the incredible number of photos uploaded to Flickr every day, we are introducing the Flickr Moderation Bot. Moderation Bot will detect explicit content in new uploads and automatically update mis-moderated content to the correct moderation level according to our established policies. If the system detects mis-moderated content in your account, you will receive a private notification under the bell icon that explains the mismatch and directs you to the photo in question.
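
For the technically curious, here’s a rough sketch of what a pass like this could look like. It is purely illustrative: the level names, data shapes, and logic below are simplified assumptions for the example, not a description of the production system.

```python
from dataclasses import dataclass

# Illustrative safety levels, ordered from least to most restrictive.
LEVELS = ["safe", "moderate", "restricted"]

@dataclass
class Upload:
    photo_id: str
    member_level: str     # the level the member set at upload time
    predicted_level: str  # the level the classifier assigns

def moderate(upload: Upload) -> str | None:
    """Correct a mis-moderated upload and return the member notification, if any."""
    if upload.predicted_level != upload.member_level:
        # Update the stored level, then notify the member privately.
        upload.member_level = upload.predicted_level
        return (f"Photo {upload.photo_id} was updated to "
                f"'{upload.predicted_level}' to match our content policies.")
    return None  # levels already agree; nothing to do

photo = Upload("12345", member_level="safe", predicted_level="restricted")
note = moderate(photo)
if note:
    print(note)  # roughly what would appear under the bell icon
```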

We’ll begin by ramping up auto-moderation on new uploads and monitoring a number of factors, including possible false positives. As with any large-scale machine learning system, adjustments will be needed as we dial in the technology. Moderation of uploaded content will always be the Flickr member’s responsibility, and you must not rely on Moderation Bot to do the job for you; that said, we hope this tool will help prevent accidental mis-moderation and make the Flickr experience better for everyone. Eventually, we also plan to backfill auto-moderation across existing Flickr photos, for example when members update cover photos and avatars.
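
To illustrate what “dialing in” can mean in practice, one common approach (again, an assumption for the sake of the example, not a description of our internals) is to act automatically only above a confidence threshold and send borderline cases to human review:

```python
# Illustrative only: auto-correct when the classifier is confident;
# borderline predictions go to a human review queue instead.
AUTO_CORRECT_THRESHOLD = 0.95  # tuned over time as false-positive rates are measured

def route(photo_id: str, predicted_level: str, confidence: float) -> str:
    if confidence >= AUTO_CORRECT_THRESHOLD:
        return f"auto-correct photo {photo_id} to '{predicted_level}'"
    return f"queue photo {photo_id} for human review"

print(route("12345", "restricted", 0.99))  # confident: auto-correct
print(route("67890", "moderate", 0.70))    # borderline: human review
```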

Additionally, in the new year we’ll be overhauling our Report Abuse flows to bring greater flexibility and specificity to reporting. Instead of a few broad categories, we’ll introduce a tiered system that guides you to be as precise as possible with your report. We’ll also expand several of the highest-priority reporting categories so that our community can continue to help us eliminate spam, improperly moderated content, and illegal content from our site. Many community members dedicate their time to finding and reporting content that violates our Community Guidelines; more precise reports will save their valuable time, as well as the hours our staff spend following up on those reports.
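
As a simplified illustration of the tiered idea (the category and sub-reason names here are placeholders, not the final list), a broad top-level choice narrows to specific sub-reasons so each report reaches the right reviewers:

```python
# Illustrative tiered report categories: top-level choice -> sub-reasons.
REPORT_CATEGORIES = {
    "spam": ["commercial spam", "link schemes", "repetitive posting"],
    "improperly moderated": ["should be 'moderate'", "should be 'restricted'"],
    "illegal content": ["copyright infringement", "other illegal content"],
}

def subcategories(category: str) -> list[str]:
    """Return the follow-up options shown after a top-level choice."""
    return REPORT_CATEGORIES.get(category, [])

print(subcategories("spam"))  # ['commercial spam', 'link schemes', 'repetitive posting']
```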

You’ll still be able to report abuse from the link in the footer of every Flickr page, but we’ll also bring the same tools to the Flag Photo feature on every photo page. By adding these entry points, we hope to facilitate quality community interactions while limiting disruption from bad-faith actors.

We’ll share any relevant updates with you, and we’d love to hear from you if you have questions or encounter any issues.

Flickr Team