Meta exempts top advertisers from the standard content moderation process


Meta exempted some of its top advertisers from its normal content moderation process, protecting its multibillion-dollar business amid internal concerns that the company’s systems were mistakenly penalizing top brands.

According to internal documents from 2023 seen by the Financial Times, the owner of Facebook and Instagram introduced a series of “fences” that “protect big spenders”.

Previously unreported memos said the guardrails would "prevent detection" based on how much an advertiser had spent on the platform, and that some top advertisers would instead be reviewed by humans.

One document suggested that a group called "P95 spenders" — those who spend more than $1,500 a day — were "exempt from advertising restrictions" but would still "eventually be sent for manual human review."

The revelations come after this week's announcement by CEO Mark Zuckerberg that Meta is ending its third-party fact-checking program and scaling back automated content moderation, as it prepares for Donald Trump's return to the presidency.

The 2023 documents show that Meta discovered that its automated systems incorrectly flagged some of the highest-spending accounts as violating company policies.

The company told the FT that higher-spending accounts were disproportionately vulnerable to false notifications of potential breaches. It did not answer questions about whether any of the measures described in the documents were temporary or are still in place.

Ryan Daniels, a spokesman for Meta, said the FT's report was "simply incorrect" and "based on a cherry-picked reading of documents that make clear that the goal of this effort was to address something we've been very public about: preventing enforcement errors."

Advertising accounts for the majority of Meta’s annual revenue, which in 2023 was nearly $135 billion.

The tech giant typically reviews ads using a combination of artificial intelligence and human moderators to stop violations of its standards, in an effort to remove material such as fraud or harmful content.

In a document titled "preventing high-spending mistakes," Meta said it had seven guardrails protecting business accounts that generate more than $1,200 in revenue over a 56-day period, as well as individual users who spend more than $960 on advertising over the same period.

It said the guardrails help the company "decide whether enforcement should go ahead" and are designed to "suppress detections . . . based on characteristics, such as the level of advertising spend."

One example given is a business that is "in the top 5 percent of revenue."

Meta told the FT that it uses "higher spend" as a guardrail because it often means a company's ads will have a greater reach, so the consequences could be more severe if a company or its ads are mistakenly removed.

The company also confirmed that it prevented some high-spending accounts from being disabled by its automated systems, instead sending them for human review, when it was concerned about the accuracy of those systems.

However, it said that all businesses remain subject to the same advertising standards and that no advertiser is exempt from its rules.

In a memo on “preventing high-spend errors,” the company rated different categories of guardrails as “low,” “medium,” or “high” in terms of whether they were “defensible.”

Meta staff labeled the practice of using spend-based guardrails as "low" on defensibility.

Other guardrails, such as using knowledge of a business's trustworthiness to help decide whether to act automatically on a detected breach, were labeled "high" on defensibility.

Meta said the term "defensible" refers to how difficult the guardrails would be to explain to stakeholders if they were misinterpreted.

The 2023 documents do not name the big spenders covered by the guardrails, but the spending thresholds suggest that thousands of advertisers were exempt from the typical moderation process.

Estimates from market research firm Sensor Tower suggest that the top 10 US spenders on Facebook and Instagram include Amazon, Procter & Gamble, Temu, Shein, Walmart, NBCUniversal and Google.

Meta has posted record revenues in recent quarters, and its stock is trading at an all-time high, following the company’s recovery from a post-pandemic slump in the global advertising market.

But Zuckerberg has warned of threats to his business, from the rise of AI to ByteDance-owned rival TikTok, which has become increasingly popular among younger users.

A person familiar with the documents claimed that the company “prioritises revenue and profits over the integrity and health of users,” adding that concerns have been raised internally about bypassing the standard moderation process.

Zuckerberg said on Tuesday that the complexity of Meta’s content moderation system led to “too many mistakes and too much censorship.”

His comments came after Trump last year accused Meta of censoring conservative speech and suggested that if the company meddled in the 2024 election, Zuckerberg would “spend the rest of his life in prison.”

Internal documents also show that Meta considered seeking other exemptions for certain top-spending advertisers.

In one memo, Meta's staff proposed "more aggressively providing protections" against over-moderation to what it calls "platinum and gold spenders," who together bring in more than half of advertising revenue.

"False positive integrity enforcement against high-value advertisers costs Meta revenue [and] erodes our credibility," the memo reads.

It suggested the option of a general exemption for those advertisers from certain enforcement measures, except in "very rare cases".

The memo shows that staff concluded platinum and gold advertisers were "not an appropriate segment" for a broad exemption, as an estimated 73 percent of the enforcement actions against them were found to be justified, according to the company's tests.

Internal documents also show that Meta discovered multiple AI-generated accounts within the big spender categories.

Meta has previously come under scrutiny for carving out exemptions for high-profile users. In 2021, Facebook whistleblower Frances Haugen leaked documents showing the company had an internal system called "cross-check", designed to review content from politicians, celebrities and journalists to ensure posts were not mistakenly removed.

According to Haugen's documents, this system was sometimes used to shield certain users from enforcement even when they violated Facebook's policies, a practice known as "whitelisting."

Meta's oversight board — an independent "Supreme Court"-style body funded by the company to rule on its toughest moderation decisions — found that the cross-check system left dangerous content online. It called for an overhaul of the system, which Meta has since undertaken.
