Facebook’s new policy Supreme Court could override Zuckerberg
A real check on Facebook CEO Mark Zuckerberg’s control is finally coming in the form of an 11- to 40-member Oversight Board that will review appeals of its policy decisions, like content takedowns, and make recommendations for changes. Today Facebook released the charter establishing the theoretically independent Oversight Board, with Zuckerberg explaining that when it takes a stance, “The board’s decision will be binding, even if I or anyone at Facebook disagrees with it.”
Slated to be staffed this year with members who will be paid by a Facebook-established trust (the biggest update since its January draft charter), the Oversight Board will begin judging cases in the first half of 2020. Given Zuckerberg’s overwhelming voting control of the company, and the fact that its board of directors contains many loyalists he’s made very rich, like COO Sheryl Sandberg and investor Peter Thiel, the Oversight Board could ensure the CEO doesn’t always have the final say in how Facebook works.
But in some ways, the board could serve to shield Zuckerberg and Facebook from scrutiny and regulation, much to their advantage. The Oversight Board could deflect culpability for policy blunders around censorship or political bias away from Facebook’s executives. It might also serve as a talking point for the FTC and other regulators investigating the company for potential antitrust violations and other malpractice, since Facebook could claim the Oversight Board means it isn’t completely free to pursue profit over what’s fair for society.
Finally, there remain serious concerns about how the Oversight Board is selected and the wiggle room the charter leaves Facebook. Most glaringly, Facebook itself will choose the initial members and then work with them to select the rest of the board, and could thereby avoid appointing overly incendiary figures. The charter also maintains that “Facebook will support the board to the extent that requests are technically and operationally feasible and consistent with a reasonable allocation of Facebook’s resources,” giving the company the right to decide whether to apply the precedent of Oversight Board verdicts to similar cases or to broadly implement its policy guidance.
How the Oversight Board works
When a user disagrees with how Facebook enforces its policies, and with the result of an appeal to Facebook’s internal moderation team, they can request an appeal to the Oversight Board. Examples of potential cases include someone disagreeing with Facebook’s refusal to deem a piece of content unacceptable hate speech or bullying, its choice to designate a Page as promoting terrorism and remove it, or the company’s decision to leave up problematic content, such as nudity, because it’s newsworthy. Facebook can also directly ask the Oversight Board to review policy decisions or specific cases, especially urgent ones with real-world consequences.
After Zuckerberg initially laid out a blueprint for the Oversight Board a year ago, Facebook assigned a 100-person team to build out the plan for the board. It held six workshops and 22 round-tables, plus case-review simulations with 650 people from 88 countries.
The board will include a minimum of 11 members, though Facebook is aiming for 40. They’ll serve part-time for three-year terms, with a maximum of three terms each, and appointments will be staggered so there’s never a full changeover at once. Facebook is looking for members with a broad range of knowledge, competencies and expertise who lack conflicts of interest. They’re meant to be “experienced at deliberating thoughtfully and collegially,” “skilled at making and explaining decisions based on a set of policies,” “well-versed on matters relating to digital content and governance” and “independent and impartial.”
Facebook will appoint a set of trustees that will work with it to select the board’s initial co-chairs, who will then assist with sourcing, vetting, interviewing and orienting new members. The goal is “broad diversity of geographic, gender, political, social and religious representation.” The trust, funded by Facebook with an as-yet-undecided amount of capital, will set members’ compensation in the near future and oversee term renewals.
Inevitable claims of board member bias
My biggest worry here is how Facebook will handle the fact that it’s trying to represent an extraordinarily vast set of global policy perspectives, broader than any one country’s laws. What’s taboo or even illegal in one nation may be common or lauded in another. Facebook may face endless challenges from different segments of the public over board members’ past public statements.
What Facebook’s own staff in California might see as an uncontroversial viewpoint could trigger calls for removal from the board elsewhere. We’ve seen how common “cancel” culture has become when the public digs up problematic content from celebrities or politicians, and that’s just based on what flies in the United States.
For example, Republican senators just bullied Facebook into removing a fact-check that found the statement “abortion is never medically necessary” to be false, allowing that viewpoint to spread uninhibited on the social network. I personally wouldn’t want someone with that viewpoint on the Oversight Board, but others might feel the opposite. And what happens when politicians start demanding more conservative representation on the Oversight Board the same way they’ve badgered Facebook for supposedly censoring them despite evidence to the contrary?
Which cases get reviewed?
The board will choose which cases to review based on their significance and difficulty. It will look for issues that are severe, large-scale and important for public discourse, and that raise difficult questions about Facebook’s policy or enforcement where decisions are disputed, uncertain, or represent tensions or trade-offs between Facebook’s recently codified values of authenticity, safety, privacy and dignity. The board will then create a five-member panel to review each specific case.
The board will be able to request that Facebook provide the information necessary to rule on a case, with a mind to not violating user privacy. Members will interpret Facebook’s Community Standards and policies, and then decide whether Facebook should remove or restore a piece of content and whether it should change how that content was designated. Verdicts are meant to be reached by consensus, but will be decided by majority vote when necessary.
How decisions get made
Once a panel makes a draft decision, it’s circulated to the full board, which can recommend a new panel review the case if a majority takes issue with the verdict. After a privacy review to protect the identities of those involved in the case, decisions will be made public within two weeks and affected users will be notified. Those decisions will be archived in a database and are meant to act as precedent for future decisions. The idea is that the board’s decisions will be binding and implemented by Facebook, as long as they don’t require it to violate the law.
But will Facebook really implement them?
The biggest concern with the charter is that it still gives Facebook leeway in how to implement the board’s decisions. Critically, Facebook only has to apply a decision to the specific case reviewed, and it’s at the company’s discretion whether to turn that into blanket policy:
In instances where Facebook identifies that identical content with parallel context — which the board has already decided upon — remains on Facebook, it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook . . . Facebook will support the board to the extent that requests are technically and operationally feasible and consistent with a reasonable allocation of Facebook’s resources.
Because of these qualifiers, Facebook retains the ability to decide it would be operationally infeasible to apply a board decision in every similar situation, to merely take policy guidance into account in future policy-making, and to judge whether implementation is a reasonable allocation of capital and staff. That leaves a sizable gray area.
If Facebook concludes that a board decision could materially reduce sharing, even one that protects users, it might deem that decision operationally infeasible. If it would cost too much to moderate content the way the board recommends, Facebook could call that an unreasonable allocation of resources. And if policy guidance doesn’t mesh with its other objectives, the company only has to “consider” the board’s wishes.
This section is where advocates and critics should focus. These exemptions to implementation need to be made less vague if the structure is truly going to hold Facebook accountable. If Facebook just declines to broadly change its policy to fit the board’s recommendation, all the board can do is make binding decisions on specific cases.
Facebook director of governance Brent Harris explained on a call with reporters that “If the board doesn’t feel like we’ve handled it right, they’ll keep taking cases and overturn us.” But again, the board’s power applies case by case; Facebook still controls wide-reaching changes to policy.
If you want to learn more about solutions to Facebook’s concentration of power, check out my talk at SXSW 2020 with Facebook’s co-founder Chris Hughes, who has called for the company to be broken up.