The Artificial Intelligentsia
An extract originally published at thebaffler.com on March 4, 2018 by Aaron Timms
Wieden + Kennedy: Nicole Jacek, Curtis Pachunka, Chase Farthing
Picking Silicon Valley’s future winners is still, some six decades since the birth of modern venture capital, more art than science. Most VC-backed startups don’t make it past the first round of funding; a good number of the AI companies financed this year will be dead by the fall. The story of Silicon Valley is as much about donkeys as unicorns, entrepretendeurs as entrepreneurs. Like all good stories, this story has the capacity to surprise. Many of the tech industry’s most memorable flops were at one point seen as great successes. Juice machine startup Juicero attracted $134 million in venture capital funding before a story by Bloomberg mocking its “juice pack” technology sent the company crashing; blood testing startup Theranos, once valued at $9 billion, is now worth less than 10 percent of that figure and has only dodged bankruptcy thanks to an emergency loan of $100 million. Thousands of tech ventures founded this year will meet a similar end to these high-thrust flameouts but will avoid the scrutiny: no media reports, no dragging tweets, no trial by meme. Failure, when it comes, will be quiet and anonymous. This part of Silicon Valley’s story remains little told.
…
Future Shtick
Global venture capital funding for artificial intelligence startups has increased more than twenty-five times over since 2012. In 2017 it reached $15.2 billion, according to research firm CB Insights, with half of that money flowing to startups in the United States. This represents an extraordinary comeback for a technology that, by the early 2000s, was something of a museum piece. The original ambition of AI was to build machines that could replicate the intelligence of a human being. After promising early developments in the 1960s and 1970s, the field stagnated through the final decades of the twentieth century — a period known as the “AI winter” — as developers struggled to realize that ambition. Improvements in computer processing power over the last decade, however, have brought the original vision back to life.
A computer system that can predict the future fits seamlessly within AI’s new hosanna narrative…
Among the Romantic Egotists
“The test of a first-rate intelligence,” F. Scott Fitzgerald said, “is the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function.” Fitzgerald embodied a certain idea of the prodigy in early twentieth-century America. His semi-autobiographical first novel, This Side of Paradise (published when he was twenty-three), which told the story of a young writer’s education at Princeton, brought him sudden, near-universal acclaim. The rest of his career, even through the Gatsby years, became a long and increasingly frustrated attempt to rekindle that first precocious blaze of success. Today our image of the Jazz Age — as an era of giddy, libidinous self-enrichment and cultural exploration — is inseparable from the figure of Fitzgerald, the brilliant young man who lived, if only briefly, his generation’s most interesting life.
From his correspondence it’s clear that Fitzgerald intended his definition of intelligence as little more than a self-description. Shot through the letters he sent as a young man to family, friends, editors, and agents are equal measures of self-doubt and self-regard; he saw himself as both a generational talent and a cultural waste of space, often within the same page. One moment he’s lamenting his “flabby semi-intellectual softness,” the next he’s saying of his (rejected) early manuscript, The Romantic Egotist, “no one else could have written so searchingly the story of the youth of our generation.” Chronic equivocation — on the value of a Princeton education, on the politics of selling out, on the meaning of success, on the quality of his own work — was the distinguishing mark of Fitzgerald’s intelligence.
These days, literary prodigies are fairly rare; the English-speaking world has produced few so far this century. It’s instead to the world of technology that we must turn for the richest examples of what it means to be young, brilliant, and successful today. Jeff Bezos, Mark Zuckerberg, Larry Page, and Sergey Brin . . . Silicon Valley is an empire of aging prodigies. By the power of their example these demiurges have come to dominate our collective sense of what it takes to be smart: a mastery of numbers, proficiency in STEM, the subordination of empathy to data. Intelligence today is their type of intelligence: tech-telligence. But where was the intelligence when Zuckerberg — or Zuckerberg’s button-bright cartoon avatar — took to Facebook Live late last year and introduced his company’s new VR tool against a backdrop of hurricane devastation in Puerto Rico? What kind of intelligence guided Marc Andreessen toward the claim that colonization was good for India, or Elon Musk to his bizarre crusade against public transport? The fuss over these snafus was brief and quickly forgotten. In response, Andreessen issued a smiley face-adorned tweet of apology — a classic of the “I’m sorry if you were offended” genre. Musk weakly fought back. Zuckerberg said virtually nothing. Not one of them recognized, in public at least, that what he’d done was not simply insensitive and regrettable but also, and above all, supremely idiotic.
From the evidence of these examples, all three men would fail Fitzgerald’s test of intelligence. “Any city gets what it admires, will pay for, and, ultimately, deserves,” the New York Times editorialized in October 1963, as demolition of the old Penn Station began. “We want and deserve tin-can architecture in a tinhorn culture.” Perhaps the intelligence of Silicon Valley — vacant, arrogant, unfeeling, artificial — is simply the intelligence we deserve. But that intelligence cannot flourish without enablers.
Bad Brains
AI does not want for critics. In Silicon Valley, the big cats have drawn their claws. Elon Musk is one of several tech luminaries, including Jack Ma and Bill Gates, who believe AI poses a mortal danger to human civilization. Musk has compared the work of building AI to “summoning the demon.” Mark Zuckerberg labeled Musk’s intervention in the AI debate irresponsible. Musk shot back with a subtweet. “I’ve talked to Mark about this,” he wrote. “His understanding of the subject is limited.”
When Musk speaks of AI, he’s mostly referring to the technology as it was originally conceived in the 1950s: a system of symbolic logic that would enable the creation of self-aware machines with cognitive sophistication comparable to the human brain’s. This is what’s commonly referred to as “artificial general intelligence” or “strong AI.” In practice, no technology or company grouped under the rubric “AI” today meets this description. The term “artificial intelligence” is instead used loosely to refer to a diverse group of less ambitious technologies, some of which have little in common: machine learning, deep learning and neural networks, robotics.
If the ambition of the field is to model the human brain in machine form, artificial general intelligence has made little progress in the six decades since it emerged. The scope of what can be done in AI as we understand it today — the looser, lesser AI — remains limited. Machine learning, the statistical, pattern-recognition branch of AI supporting Predata, is little more than a technology to process data and program reactions to recognized patterns. Some argue that it should not be considered part of AI at all.
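To make concrete how modest that description is, consider a minimal, hypothetical sketch in Python — not Predata’s actual system, and far simpler than any production model. A toy “learner” estimates a threshold from labeled historical signals, and a hand-written rule supplies the reaction whenever the learned pattern is recognized:

```python
# A toy illustration of machine learning as "pattern recognition plus a
# programmed reaction." A hypothetical sketch only, not any real product's code.

from statistics import mean

# Labeled historical data: (signal_level, outcome) pairs.
# outcome 1 = an event followed the signal, 0 = nothing happened.
history = [(0.2, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.9, 1)]

# "Learning" here is just estimating a decision threshold: the midpoint
# between the average signal level of each class.
quiet = mean(x for x, y in history if y == 0)
noisy = mean(x for x, y in history if y == 1)
threshold = (quiet + noisy) / 2

def predict(signal_level: float) -> int:
    """Recognize the learned pattern: is this signal above the threshold?"""
    return 1 if signal_level >= threshold else 0

def react(signal_level: float) -> str:
    """The programmed reaction to a recognized pattern."""
    return "raise alert" if predict(signal_level) else "do nothing"

print(round(threshold, 2))  # 0.55
print(react(0.85))          # raise alert
print(react(0.25))          # do nothing
```

Real systems swap the threshold for far more elaborate statistics, but the shape is the same: recognize a pattern in data, then trigger a pre-programmed response. Nothing in that loop resembles understanding.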
Even the most eye-catching successes claimed for AI in recent times have been, on closer inspection, relatively underwhelming. The idea that an autonomous superhuman machine intelligence will spontaneously spring, unprogrammed, from these technologies is still the stuff of Kurzweilian fantasy. Forget Skynet; at this stage it’s not certain we’ll even get to Bicentennial Man.
The failure to make any progress toward the development of strong AI, physicist David Deutsch has argued, stems from the AI community’s broader inability to “recognize that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be.” Human intelligence, says Deutsch, cannot be encoded by any known programming technique, yet AI developers continue to approach the problem of AI as if it can. The human mind is not a behavioristic function of inputs and outputs that can be optimized according to a defined system of logic; nor is it a neural network of intelligent, self-correcting connections. These techniques might replicate discrete functions of a human mind, but they cannot capture the mind’s totality or what makes it unique: its creativity, its genius for emotion and intuition. There’s something else going on. What that something else is, we don’t yet know.
AI researchers dismiss Deutsch as an outsider with no understanding of how the technology really works — a rote rebuttal in data-engineering circles. But his basic point is correct. The field of AI continues to limp along with no real understanding of what makes the human brain unique and with no agreed definition of “intelligence.” If ideal AI is “strong,” ours is the age of weak AI.
As a result, intelligence today is defined not by the properties of the human brain but by association. Intelligence is the thing intelligent people do. Since intelligence remains undefined in AI, the whole field is arguably misconceived, for now at least. A machine built to model an organ we don’t yet understand is bound to fail. More than a pedantic definitional point, this goes to the heart of how the VC billions get allocated in this new boom sector. Faced with the impossibility of determining whether a technology is intelligent or not — since we don’t know what intelligence is — Silicon Valley’s funders are left instead to judge the merit of a new idea in AI according to the perceived intelligence of its developers. What did they study? Where did they go to school? These are the questions that matter.