Without Humans, Artificial Intelligence Is Still Pretty Stupid

If you want to understand the limitations of the algorithms that control what we see and hear—and base many of our decisions upon—take a look at Facebook Inc.’s experimental remedy for revenge porn.

To stop an ex from sharing nude pictures of you, you have to share nudes with Facebook itself. Not uncomfortable enough? Facebook also says a real live human will have to check them out.

Without that human review, it would be too easy to exploit Facebook’s anti-revenge-porn service to take down legitimate images. Artificial intelligence, it turns out, has a hard time telling the difference between your naked body and a nude by Titian.

The internet giants that tout their AI bona fides have tried to make their algorithms as human-free as possible, and that’s been a problem. It has become increasingly apparent over the past year that building systems without humans “in the loop”—especially in the case of Facebook and the ads it linked to 470 “inauthentic” Russian-backed accounts—can lead to disastrous outcomes, as actual human brains figure out how to exploit them.

Whether it’s winning at games like Go or keeping watch for Russian influence operations, the best AI-powered systems require humans to play an active role in their creation, tending and operation. Far from displacing workers, this combination is spawning new nonengineer jobs every day, and the preponderance of evidence suggests the boom will continue for the foreseeable future.

A worker for Amazon’s Mechanical Turk transcribes a phone conversation of an insurance claim. PHOTO: DAI SUGANO/SAN JOSE MERCURY NEWS/TNS/ZUMA PRESS

Facebook, of course, is now a prime example of this trend. The company recently announced it would add 10,000 content moderators to the 10,000 it already employs—a hiring surge that will impact its future profitability, said Chief Executive Mark Zuckerberg.

And Facebook is hardly alone. Alphabet Inc.’s Google has long employed humans alongside AI to eliminate ads that violate its terms of service, ferret out fake news and take down extremist YouTube videos. Google doesn’t disclose how many people are looped into its content moderation, search optimization and other algorithms, but a company spokeswoman says the figure is in the thousands—and growing.

Twitter has its own teams to moderate content, though the company is largely silent about how it accomplishes this, other than touting its system’s ability to automatically delete 95% of terrorists’ accounts.

Almost every big company using AI to automate processes has a need for humans as a part of that AI, says Panos Ipeirotis, a professor at New York University’s Stern School of Business. America’s five largest financial institutions employ teams of nonengineers as part of their AI systems, says Dr. Ipeirotis, who consults with banks.

AI’s constant hunger for human brains is driven by our increasing demand for services. The more we ask of these systems, the less likely it is that a computer algorithm can go it alone, while the human-machine combination can be more effective and efficient. For example, bank workers who previously read every email in search of fraud now make better use of their time investigating only the emails the AI flags as suspicious, says Dr. Ipeirotis.
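
As a rough illustration of that division of labor, here is a minimal sketch, in Python, of a flag-then-investigate pipeline. The scoring function, threshold, and sample messages are all illustrative assumptions, not any bank’s actual system.

```python
# A minimal sketch of the triage pattern Dr. Ipeirotis describes: software
# scores every email, and humans investigate only what it flags. The scoring
# function and threshold below are assumptions for illustration.

def fraud_score(text: str) -> float:
    """Stand-in for a trained model's estimated probability of fraud."""
    suspicious = ("wire transfer", "urgent", "password", "new account")
    hits = sum(phrase in text.lower() for phrase in suspicious)
    return min(1.0, hits / 2)

THRESHOLD = 0.5  # assumed cutoff; lowering it catches more but adds human workload

def triage(emails):
    """Route each email to a human analyst or to automatic clearance."""
    for text in emails:
        score = fraud_score(text)
        route = "human_review" if score >= THRESHOLD else "auto_clear"
        yield route, score, text

for route, score, text in triage([
    "URGENT: wire transfer to a new account today",
    "Minutes from Tuesday's planning meeting attached",
]):
    print(f"{route:12s} {score:.1f}  {text}")
```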

Content moderators in Manila working for Open Source, a U.S. outsourcing tech company. PHOTO: MOISES SAMAN/MAGNUM PHOTOS

What AI Can (and Can’t) Do

A machine-learning-based AI system is a piece of software that learns, almost like a primitive insect. That means it can’t simply be programmed; it must be taught.

To teach these systems, humans feed them examples, and they need truckloads. To build an AI filter to identify extremist content on YouTube, humans at Google manually reviewed over a million videos to flag qualifying examples, says a Google spokeswoman.
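
Here is a minimal sketch of that teaching-by-example process, assuming a small scikit-learn text classifier and a toy hand-labeled data set; real systems like YouTube’s filter train on millions of human-reviewed items.

```python
# Sketch of "teaching" a model with human-labeled examples instead of writing
# rules by hand. The toy data and model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [  # each description was labeled by a human reviewer
    "recruitment video for a banned extremist group",
    "cooking tutorial for weeknight dinners",
    "propaganda clip glorifying extremist violence",
    "highlights from last night's football match",
]
labels = [1, 0, 1, 0]  # 1 = violates policy, 0 = fine

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)  # the model generalizes from examples, not rules

# With so little data the output is only suggestive, but the point stands:
# the behavior comes entirely from the labeled examples humans supplied.
print(model.predict(["extremist recruitment propaganda"]))  # likely [1]
```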

An algorithm can only be as good as “the quantity and quality of the training data to get [it] going,” says Robin Bordoli, CEO of CrowdFlower Inc., which provides human labor to companies that need people to train and maintain AI algorithms, from auto makers to internet giants to financial institutions.

Even when an AI has been trained, its judgment is never perfect. Human oversight is still needed, especially with material in which context matters, such as those extremist YouTube posts. While AI can take down 83% of such videos before a single human flags them, says Google, the remaining 17% need human review. But this serves as further training: that data can then be fed back into the algorithm to improve it.
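
Continuing the toy classifier above, that feedback loop might be sketched as follows; the function and data structures are assumptions for illustration, not Google’s pipeline.

```python
# Sketch of the feedback loop: items the model missed get human labels,
# which are folded back into the training set before retraining.
def incorporate_human_reviews(model, texts, labels, reviews):
    """reviews: list of (text, human_label) pairs from moderators."""
    for text, label in reviews:
        texts.append(text)
        labels.append(label)
    model.fit(texts, labels)  # retrain on the enlarged, human-corrected corpus
    return model

model = incorporate_human_reviews(
    model, texts, labels,
    reviews=[("coded call to join a banned group", 1)],  # caught by a human, not the AI
)
```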

Read more at www.wsj.com
