Monday, May 07, 2018

Content Moderation at Scale, DC Version


Foundations: The Legal and Public Policy Framework for Content

Eric Goldman gave a spirited overview of 230 and related rules, including his outrage at the canard that federal criminal law hadn't applied to websites until recently; he pointed out that federal law had been enforced against online gambling and drug ads, and that Backpage was shut down based on conduct that had always been illegal despite section 230.  Also a FOSTA/SESTA rant, including about its supplementing federal prosecutors with state prosecutors who have various motivations: new enforcers, a new focus on knowledge that used to be irrelevant, and new ambiguities about what's covered.

Tiffany Li, Yale ISP Fellow: Wikimedia/YLS initiative on intermediaries: a global perspective on a few basic issues. The US is relatively unusual in having a strong intermediary liability framework. In many countries there aren't even internet-specific laws, much less intermediary-specific ones.  Defamation, IP, speech & expression, & privacy are all regulated.  Legal issues beyond content are also important: jurisdiction, competition, and trade. Extremist content, privacy, child protection, hate speech, fake news: all important around the world.

The EU is a leader in creating law (a descriptive, not normative, claim).  There is a right to receive information, but when rights clash, free speech often loses out (RTBF, etc.).  E-Commerce Directive: no general monitoring obligation.  The draft copyright directive (contradictorily) requires measures to prevent infringement.  GDPR (argh).  Terrorism Directive: similar to anti-material-support-to-terror provisions in the US.  Hate speech regulations: hate speech is understood differently in the EU. Germany criminalizes a category of speech US companies struggle to recognize, "obviously illegal" speech, with high fines and a short notice-and-takedown period.  AV Services Directive: proposed changes for disability rights.  UK defamation is particularly strong compared to the US.  New case: Lewis v. FB, in which someone is suing FB over false ads using his name or image.

Latin America: human rights framework is different.  Generally, many free expression laws but also regulation requiring takedown.  Innovative as to intermediary liability but also many legislative threats to intermediaries, especially social media.

Asia: less intermediary law generally.  India has solid precedent on intermediary liability: restrictions on intermediaries and internet websites are subject to freedom of speech protections.  China: developing legal system. Draft e-commerce law tries to put in © specifically, as well as something similar to the RTBF. Singapore: proposed law to criminalize fake news.  Privacy & fake news are often wedges for govts to propose/enact greater regulation generally.

Should any one country be able to regulate the entire world?  US tech industry is exporting US values like free speech.

Under the Hood: UGC Moderation (Part 1)
Casey Burton, Match: Multiple brands/platforms: Tinder, Match, BlackPeopleMeet.  Over 300 people are involved in community & content moderation issues, both in-house and outsourced. 15 people do anti-fraud at match.com; 30 are engaged full-time in content moderation in different countries.  Moderation is done by brand, each of which has written guidelines.  Special considerations: their platforms are generally where people who don't already know each other meet. Give reporters of bad behavior the benefit of the doubt.  Zero tolerance for bad behavior.  Also not a place for political speech; not a general-use site: users have only one thing on their minds. If your content isn't obviously working toward that goal, you and your content will be removed. Also use some automated/human review for behavior: if you try to send 100 messages in the first minute, you're probably a bot.  And some users take the mission of the site to heart and report bad actions. Section 230 enables us to do the moderation we want.
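A minimal sketch of the kind of velocity check described above, assuming a sliding one-minute window; the threshold comes from the "100 messages in the first minute" example, and everything else (names, data structure) is hypothetical:

```python
import time
from collections import defaultdict, deque
from typing import Optional

MAX_MESSAGES = 100   # from the "100 messages in the first minute" example
WINDOW_SECONDS = 60

_recent = defaultdict(deque)  # user_id -> timestamps of that user's recent messages

def record_message(user_id: str, now: Optional[float] = None) -> bool:
    """Record a sent message; return True if the account looks bot-like."""
    now = time.time() if now is None else now
    q = _recent[user_id]
    q.append(now)
    # Drop timestamps that have fallen outside the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    # Exceeding the velocity threshold flags the account for human review, not an auto-ban.
    return len(q) > MAX_MESSAGES

# 101 messages in under a minute trips the check on the last one.
for i in range(101):
    flagged = record_message("new_user_17", now=i * 0.5)
print(flagged)  # True
```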

Becky Foley, TripAdvisor: Fraud is separate from content moderation; it involves reviews intended to boost or vandalize a ranking.  Millions of reviews and photos.  Little to no upfront manual moderation; rely on users to report. Reviews go through an initial set of complex machine learning algorithms, filters, etc. to determine whether they're safe to be posted. A small percentage are deemed unsafe and go to the team for manual review prior to publication. Less than 1% of reviews get reported after they're posted.  Local language experts are important.  Relevance is also important to us, uniquely b/c we're a travel site.  We need to determine how much of a review can go off the main focus.  E.g., someone reviews a local fish & chips shop & then talks about a better place down the street: we will try to decide how much of that additional content is relevant to the review.
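A toy sketch of that publish-or-hold routing; the keyword scoring below is a stand-in for the real models, and the threshold is an assumption:

```python
from dataclasses import dataclass

# Hypothetical risk terms standing in for the real "complex machine learning
# algorithms, filters, etc." mentioned above.
_RISK_TERMS = {"free followers", "http://", "wire transfer"}
RISK_THRESHOLD = 0.5  # hypothetical cutoff; only a "small percentage" exceed it

@dataclass
class Review:
    text: str

def risk_score(review: Review) -> float:
    """Toy stand-in for the real pre-publication scoring models."""
    hits = sum(term in review.text.lower() for term in _RISK_TERMS)
    return min(1.0, hits / 2)

def route_review(review: Review) -> str:
    if risk_score(review) >= RISK_THRESHOLD:
        return "manual_review"  # deemed unsafe: held for the team before posting
    return "publish"            # safe to post; users can still report it afterwards

print(route_review(Review("Lovely fish & chips, friendly staff.")))      # publish
print(route_review(Review("Get free followers at http://spam.example")))  # manual_review
```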

Health, safety & discrimination committee which includes PR and legal as well as content: goal is to make sure that content related to these topics is available to travelers so they’re aware of issues. There’s nobody from sales on that committee. Strict separation from commerce side.

Dale Harvey, Twitter: Behavior moderation, which is different from content moderation. Given the size, we know there's stuff we don't know. In a billion tweets, 99.99% ok still means 100,000 not ok (0.01% of a billion), and that's our week. Many different teams, including information quality, IP/identity, threats, spam, fraud.  Contributors have a voice but not a vote; they may be subject matter experts, members of the Trust & Safety Council (organizations/NGOs from around the world), or other external or internal experts.

Best practices: employee resilience efforts as a feature. The people we deal with are doing bad things; it's not always pleasant. Counseling may be mandatory; you may not realize the impact, or you may feel bravado.  Fully disclose to potential employees that they may encounter this material.  Cultural context trainings: Silicon Valley is not the world.  A regular cadence of refreshers and updates so you don't get lost.  Cross-functional collaborations & partnerships, mentioned above.  Growth mindset.

Shireen Keen, Twitch: real time interactions. Live chat responds to broadcast and vice versa, increasing the moderation challenge. Core values: creators first.  Trust and safety to help creators succeed. When you have toxicity/bad behavior, you lose users and creators need users on their channels. Moderation/trust & safety as good business. Community guidelines overlay the TOS, indicating expectations.  Tools for user reporting, processing, Audible Magic filtering for music, machine learning for chat filtering. Goal: consistent enforcement.  5 minute SLA for content.  

A gaming focus allowed them to short-circuit many policy issues, because if it wasn't gaming content it wasn't welcome, but that has changed. In 2015 they launched the "Creative" category, and were still defining what was allowed. Over time they have opened it further: "IRL," which can be almost anything.  Early guidelines used a lot of gaming language; they had to change that.  All reported incidents are reviewed by human monitors, who need to know gaming history and lingo, how video and chat are interacting, etc.  Moderators come from the community. Creators often monitor/appoint moderators for their own channels, which reduces what Twitch staff has to deal with. Automated detection, spam autodetection, auto-mod: the creator can choose the level of auto-moderation for their channel.
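The creator-chosen auto-moderation level might look something like a mapping from level to filtered categories; the levels, categories, and terms below are all invented for illustration, not Twitch's actual dictionaries:

```python
from dataclasses import dataclass

# Toy category lists; real chat filtering is far richer and language-aware.
CATEGORY_TERMS = {
    "aggression": {"threat", "ragequit insult"},
    "profanity": {"darn", "heck"},
}

# Which categories each auto-moderation level holds for review (toy mapping).
LEVEL_CATEGORIES = {
    0: set(),                         # off: nothing held automatically
    1: {"aggression"},
    2: {"aggression", "profanity"},   # strictest level in this sketch
}

@dataclass
class Channel:
    name: str
    automod_level: int = 1  # chosen by the creator for their own channel

def review_chat_message(channel: Channel, message: str) -> str:
    text = message.lower()
    for category in LEVEL_CATEGORIES[channel.automod_level]:
        if any(term in text for term in CATEGORY_TERMS[category]):
            return "hold_for_channel_moderator"  # queued for the creator's mods
    return "show"

chan = Channel("speedruns_daily", automod_level=2)
print(review_chat_message(chan, "what the heck was that split"))  # hold_for_channel_moderator
print(review_chat_message(chan, "nice run!"))                     # show
```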

Sean McGillivray, Vimeo: largest ad-free open video platform, 70 million users in 150 countries. A place for intentional videos, not accidental (though they'll take those too). No porn. [Now I really want to hear from a Tube site operator about how it does content moderation.] Wants to avoid being blocked in any jurisdiction while respecting free speech. 5-person team (about half legal background, half community moderation background) + developer, working w/others including community support, machine learning. We get some notices about extremist content, some demands from censorship bodies around the world. We have algorithmic detection of everything from keywords to user behavior (velocity from signup → action). Some auto-mod for easy things like spam and rips of TV shows. Some proactive investigation, though the balance tips in favor of user flagging. We may use that as a springboard depending on the type of content. Find every account that interacted w/a piece of content to take down networks of related accounts, e.g. for child porn, extremist content. We can scrub through footage pretty quickly for many things.
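The "find every account that interacted with a piece of content" step can be pictured as a walk over an account/content interaction graph; the data layout and hop limit below are assumptions:

```python
from collections import defaultdict, deque

# Toy interaction log: (account_id, video_id) pairs. In practice this would be
# queried from the platform's data store, not held in memory.
INTERACTIONS = [
    ("acct_a", "vid_1"), ("acct_b", "vid_1"),
    ("acct_b", "vid_2"), ("acct_c", "vid_2"),
    ("acct_d", "vid_9"),  # unrelated account/content
]

accounts_by_video = defaultdict(set)
videos_by_account = defaultdict(set)
for acct, vid in INTERACTIONS:
    accounts_by_video[vid].add(acct)
    videos_by_account[acct].add(vid)

def related_accounts(flagged_video: str, max_hops: int = 2) -> set:
    """Breadth-first walk of the account/content graph from one flagged upload."""
    seen_accts = set()
    seen_vids = {flagged_video}
    frontier = deque([(flagged_video, 0)])
    while frontier:
        vid, hops = frontier.popleft()
        for acct in accounts_by_video[vid]:
            if acct in seen_accts:
                continue
            seen_accts.add(acct)
            if hops + 1 <= max_hops:
                for next_vid in videos_by_account[acct]:
                    if next_vid not in seen_vids:
                        seen_vids.add(next_vid)
                        frontier.append((next_vid, hops + 1))
    return seen_accts

print(related_accounts("vid_1"))  # {'acct_a', 'acct_b', 'acct_c'}; acct_d untouched
```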

There are definitely edge cases/outliers/oddballs, which is usually what drives a decision to update/add new policy/tweak existing policy.  When new policy has to be made it can go to the top, including “O.G. Vimeans”—people who’ve been w/the community from the beginning.  If there’s disagreement it can escalate, but usually if you kill it, you clean it: if user appeals/complains, you explain.  If you can’t explain why you took it down, you probably shouldn’t have taken it down.  There’s remediation—if we think an account can be saved, if they show willingness to change behavior or explain how they misunderstood the guidelines, there’s no reason not to reverse a decision. We’re not parents and we don’t say “because I said so.”

Challenges: we do allow nudity and some sexual content, as long as it serves an artistic, narrative or documentary purpose. We have always been that way, and so we have to know it when we see it. He might go for something more binary, but that’s where we are. We make a lot of decisions based on internal and external guidelines that can appear subjective (our nipple appearance/timing index).  Scale is an issue; we aren’t as large as some, but we’re large and growing with a small team.

We may need help w/language & context—how do you tell if a rant to the camera is a Nazi rant if you can’t speak the language?

Bots never sleep, but we do.

Being ad-free: we don't have a path to monetization. We comply w/DMCA; there's no ad-sharing agreement we can enter into w/them. Related: we have a pro userbase.  Almost 50% of users are some form of pro filmmaker, editor, or videographer. They can be very temperamental. Their understanding of © and privacy may require a lot of handholding.  It's more of a platform to just share work. We do have a very positive community that has always been focused on sharing and critique in a positive environment.  That has limited our commitment to free speech: we remove abusive comments/user-to-user interactions/harassing videos.  We also have the advantage of dealing just w/videos, not all the different types of speech, plus a bit of comments/discussion.  Users spend a lot of time monitoring/flagging and we listen to them.  We weight some of the more successful flaggers so their flags bubble up to review more quickly.
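Weighting successful flaggers so their flags bubble up could be as simple as a priority queue keyed on past accuracy; the scores and names here are hypothetical:

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical reputation scores: fraction of a flagger's past flags that
# reviewers upheld. The real weighting scheme wasn't described in detail.
flagger_accuracy = {"longtime_filmmaker": 0.95, "new_account": 0.40}

@dataclass(order=True)
class QueuedFlag:
    priority: float                      # lower sorts first in heapq
    video_id: str = field(compare=False)
    flagger: str = field(compare=False)

review_queue: list = []

def enqueue_flag(video_id: str, flagger: str) -> None:
    accuracy = flagger_accuracy.get(flagger, 0.5)  # unknown flaggers get a neutral weight
    heapq.heappush(review_queue,
                   QueuedFlag(priority=1.0 - accuracy, video_id=video_id, flagger=flagger))

enqueue_flag("vid_123", "new_account")
enqueue_flag("vid_456", "longtime_filmmaker")
print(heapq.heappop(review_queue).video_id)  # vid_456: trusted flagger bubbles up first
```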

Goldman: what’s not working as well as you’d like?

Foley: how much can we automate w/o risking quality? We don’t have unlimited resources so we need to figure out where we can make compromises, reduce risk in automation.

McGillivray: you’re looking to do more w/less.

Keen: Similar. Need to build things as quickly as possible.

Harvey: Transparency around actions we take, why we take those actions. Twitter has a significant amount of work planned in that space.  Relatedly, continuing to share best practices across industry & make sure that people know who to reach out to if they’re new in this space.

Burton: Keep in mind that we're engaged in an automation arms race w/spambots, fake followers, highly automated adversaries. Have to keep human/automated review balanced to be competitive.

Under the Hood: UGC Moderation (Part 2)
Tal Niv, Github: Policy depends a lot on the content hosted, the users, etc. Github = the world's largest software development platform. The heart of Github is a source control/versioning system, allowing many users to coordinate on files with tracked changes. Useful for collaboration on many different types of content, though mostly software development.  27 million users worldwide, including individuals, companies, NGOs, and governments.  85 million repositories. A natural community.

Takedowns must be narrow.  Software involves the contributions of many people over time; often a full project will be identified for takedown, but when we look, we see it's sometimes just a file, a few lines of code, or a comment.  15 people out of 800 work on relevant issues, e.g., a support subteam for TOS issues, made up of software programmers, who handle the initial intake of takedowns/complaints.  User-facing policies are all open on the site, CC-licensed, and open to comment.  The legal team is the maintainer & engages w/user contributions.  Users can open forks. Users can also open issues.  The legal team will respond/engage.  A list of repositories as to which a takedown has been upheld is constantly updated in near real time, so there's no waiting for a yearly transparency report.
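A rough sketch of the "takedowns must be narrow" intake step: act on the specific files a notice actually identifies rather than the whole repository. The helper and its fallback string are hypothetical, not GitHub's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class TakedownNotice:
    repository: str
    claimed_paths: list   # what the complainant specifically identified, if anything

def takedown_scope(notice: TakedownNotice, repo_files: set) -> set:
    """Return the narrowest set of files the notice actually supports.

    If the complaint names specific files that exist in the repository, act
    only on those; a bare repository-level claim still needs human review
    rather than removal of the whole project.
    """
    specific = {p for p in notice.claimed_paths if p in repo_files}
    return specific or {"NEEDS_HUMAN_REVIEW: no specific files identified"}

repo = {"README.md", "src/player.c", "assets/logo.png"}
notice = TakedownNotice("example/project", ["assets/logo.png"])
print(takedown_scope(notice, repo))  # {'assets/logo.png'}, not the whole project
```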

Nora Puckett, Google: Legal removals (takedowns) v. content policies (what we don't want): hate speech, harassment; scaled issues like spam and malware.  User flags are important signals. Where a request is sufficiently specific, we do local removals for violations of local law (global removals for © and child exploitation).  The questions we prompt takedown senders to answer in our form help you understand what our removal policies are.  YT hosts content and has trusted flaggers who can be 90% effective in flagging certain content.  In Q4 2017, YT removed 8.2 million videos violating community guidelines, found via automation as well as flags and trusted flags.  6.5 million were flagged by automated means; 1.1 million by trusted users; 400,000 by regular users.  We got 20 million flags during the same period [?? Does she mean DMCA notices, or flags of content that was actually ok?].  We use these for machine learning: we have human reviewers verifying that automated flags are accurate and use that to train machine learning algorithms so content can be removed as quickly as possible. 75% of automatically flagged videos are taken down before a single view; we can get extremist videos down within 8 hours, half in less than 2 hours. Since 2014, 2.5 million URL requests under RTBF, with over 940,000 URLs removed since then. In 2018, 10,000 people are working on content policies and legal removals.
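The human-verification feedback loop described above might be sketched like this; the classes and functions are assumptions, and the retraining step itself is omitted:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    video_id: str
    source: str         # "automated", "trusted_flagger", or "user"
    model_score: float  # confidence of the (assumed) classifier, if automated

training_labels = []    # (video_id, human_decision) pairs fed back to the models

def take_down(video_id: str) -> None:
    print(f"removed {video_id}")  # stand-in for the real takedown action

def review_flag(flag: Flag, human_decision: str) -> None:
    """Record a human reviewer's verdict and keep it as a training label.

    'human_decision' is "remove" or "keep"; both outcomes become labels the
    automated flagging models can learn from.
    """
    training_labels.append((flag.video_id, human_decision))
    if human_decision == "remove":
        take_down(flag.video_id)

review_flag(Flag("yt_001", "automated", 0.97), "remove")
review_flag(Flag("yt_002", "automated", 0.55), "keep")
print(training_labels)  # both verdicts feed the next training run
```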

Best practices: Transparency. We publish a lot of info about help center, TOS, policies w/ exemplars.

Jacob Rogers, Wikimedia Foundation: Free access to knowledge, but while preserving user privacy; self-governing community allowing users to make their own decisions as much as possible. Where there are clear rules requiring removal, we do so. Sometimes take action in particularly problematic situations, e.g. where someone is especially technically adept at disrupting the site/evading user actions. Biannual transparency report. No automated tools but tools to rate content & draw volunteers’ attention to it.  E.g., will rate quality of edits to articles.  70-90% accurate depending on the type of content. User interaction timeline: can identify users’ interactions across Wikipedia and determine if there’s harassment going on.  Relatively informal b/c of relatively small # of requests. Users handle the lion’s share of the work. Foundation gets 300-500 content requests per year.  More restrictive than many other communities—many languages don’t accept fair use images at all, though they could have them.  Some removals trigger the Streisand effect—more attention than if you’d left it alone.
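A toy version of the edit-rating tools that draw volunteer attention without removing anything automatically; the scoring signals below are invented placeholders for the real per-wiki models:

```python
from dataclasses import dataclass

@dataclass
class Edit:
    article: str
    added_text: str

def damage_score(edit: Edit) -> float:
    """Toy heuristic standing in for trained edit-quality models."""
    text = edit.added_text
    score = 0.0
    if text.isupper() and len(text) > 20:
        score += 0.5                                  # shouting is weak evidence of vandalism
    if "!!!" in text or "buy now" in text.lower():
        score += 0.5                                  # spammy phrasing
    return min(score, 1.0)

def edits_for_volunteers(edits: list, threshold: float = 0.5) -> list:
    """Surface likely-damaging edits for volunteer attention; nothing is auto-reverted."""
    return [e for e in edits if damage_score(e) >= threshold]

queue = edits_for_volunteers([
    Edit("Coffee", "Added a citation to a 2016 review article."),
    Edit("Coffee", "BUY NOW CHEAP PILLS!!! visit our site"),
])
print([e.added_text for e in queue])  # only the spammy edit is surfaced
```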

Peter Stern, Facebook: Community standards are at the core of content moderation.  They cover the full range of policies, from bullying to terrorism to authentic ID and many other areas. Stakeholder engagement: reaching out to people w/an interest in the policies.  Language is a big issue; they're looking to fill many slots w/language expertise.  Full-time and outsourced reviewers.  Automation deals w/spam, flags content for human review, and prioritizes certain types of reports/gets them to people w/relevant language/expertise. Humans play a special role b/c of their ability to understand context.  Training tries to get them to be as rigid as possible and not interpret as they go; it tries to break things down to a very detailed level tracking the substance of the guidelines, now available on the web.  It only takes one report for violating content to be removed; multiple reports don't increase the likelihood of removal, and after a certain point automation shuts off review so we don't have 1,000 people reviewing the same piece of content that's been deemed ok. Millions of reports/week, usually reviewed w/in 24 hours. Issues of safety & terrorism are routed more quickly into the queue.
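The one-report-is-enough / stop-re-reviewing behavior could be sketched as simple report deduplication; the function names and states are assumptions:

```python
# One report is enough to queue a review; once content has been reviewed and
# kept, further duplicate reports are closed automatically.
review_queue = set()          # content ids awaiting human review
already_reviewed_ok = set()   # content a reviewer has already deemed fine

def handle_report(content_id: str) -> str:
    if content_id in already_reviewed_ok:
        return "auto_closed"           # don't send 1,000 people to the same post
    if content_id in review_queue:
        return "duplicate_ignored"     # extra reports don't raise the odds of removal
    review_queue.add(content_id)
    return "queued_for_review"

def record_decision(content_id: str, violates: bool) -> None:
    review_queue.discard(content_id)
    if not violates:
        already_reviewed_ok.add(content_id)

print(handle_report("post_42"))   # queued_for_review
print(handle_report("post_42"))   # duplicate_ignored
record_decision("post_42", violates=False)
print(handle_report("post_42"))   # auto_closed
```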

Most messaging explains the nature of the violation to users.  Appeals process is new—will discuss on Transparency panel. 

Resiliency training is also part of the intake—counseling available to all reviewers; require that for all our vendors who provide reviewers. Do audits for consistency; if reviewers are having difficulty, then we may need to rewrite the policy.

Community integrity creates tools for operations to use, e.g., spotting certain types of images.

Strategic response team. E.g., there’s an active shooter.  Would have to decide whether he’s a terrorist, which would change the way they’d have to treat speech praising him. Would scan for impersonation accounts.

Q: how is content moderation incorporated into product development pipeline?

Niv: input from content moderation team—what tools will they need?

Puckett: either how current policies apply or whether we need to revise/refine existing policies—a crucial part.

Rogers: similar, review w/legal team. Our product development is entirely public; the community is very vocal about content policy and will tell us if they worry about spam/low quality content or other impediments to moderation.

Stern: Similar: we do our best to think through how a product might be abused and whether we can enforce existing policies; create new ones if needed.
