Sentient Noosphere
   Tiernan Morgan & Lauren Purje: August 10, 2016
The spectacle can be found on every screen that you look at. It is the advertisements plastered on the subway and the pop-up ads that appear in your browser.

Guy Debord’s (1931–1994) best-known work, La société du spectacle (The Society of the Spectacle) (1967), is a polemical and prescient indictment of our image-saturated consumer culture. The book examines the “Spectacle,” Debord’s term for the everyday manifestation of capitalist-driven phenomena: advertising, television, film, and celebrity.

Debord defines the spectacle as the “autocratic reign of the market economy.” Though the term “mass media” is often used to describe the spectacle’s form, Debord derides its neutrality. “Rather than talk of the spectacle, people often prefer to use the term ‘media,’” he writes, “and by this they mean to describe a mere instrument, a kind of public service.” Instead, Debord describes the spectacle as capitalism’s instrument for distracting and pacifying the masses. The spectacle takes on many more forms today than it did during Debord’s lifetime. It can be found on every screen that you look at. It is the advertisements plastered on the subway and the pop-up ads that appear in your browser. It is the listicle telling you “10 things you need to know about ‘x.’” The spectacle reduces reality to an endless supply of commodifiable fragments, while encouraging us to focus on appearances. For Debord, this constituted an unacceptable “degradation” of our lives.

Debord was a founding member of the Situationist International (1957–1972), a group of avant-garde artists and political theorists united by their opposition to advanced capitalism. At varying points the group’s members included the writers Raoul Vaneigem and Michèle Bernstein, the artist Asger Jorn, and the art historian T.J. Clark. Inspired primarily by Dadaism, Surrealism, and Marxist philosophy, the SI rose to public prominence during the May 1968 demonstrations, when members of the group participated in student-led occupations and protests. Though the extent of its influence is disputed, there is little doubt that the SI played an active intellectual role during the year’s events. Graffiti daubed around Paris paraphrased the SI’s ideas and in some cases directly quoted from texts such as The Society of the Spectacle and Raoul Vaneigem’s The Revolution of Everyday Life (1967).

The first English translation of Debord’s text was published in 1970 by Black and Red Books. The book’s cover features J.R. Eyerman’s iconic photograph of the premiere of Bwana Devil (1952), the first 3D color film. Originally reproduced in LIFE magazine, the image captures the film’s audience gazing passively at the screen through anaglyph glasses. In the foreground, a besuited, heavy-set gentleman watches the screen intently, his mouth agape. Eyerman’s photograph reduces the audience members to uniform rows of spectacled spectators. Although the image encapsulates Debord’s contempt for consumer culture, it reductively implies that his work was mediaphobic (Debord later adapted The Society of the Spectacle into his first feature-length film by utilizing footage from advertisements, newsreels, and other movies). If we were to judge The Society of the Spectacle by Black and Red’s cover, we might assume that the book is a straightforward critique of media-driven conformity. Debord’s insights, however, were far more profound.

The Society of the Spectacle consists of 221 short theses divided across nine chapters. The first thesis reworks the opening line of Karl Marx’s Das Kapital (1867):

Marx: The wealth of societies in which the capitalist mode of production prevails presents itself as an immense accumulation of commodities.
Debord: In societies where modern conditions of production prevail, all of life presents itself as an immense accumulation of spectacles. Everything that was directly lived has moved away into representation.

By paraphrasing Marx, Debord immediately establishes a connection between the spectacle and the economy. The book essentially reworks the Marxist concepts of commodity fetishism and alienation for the film, advertising, and television age.


“I think right now everybody is already perceiving that this is the decade of artificial intelligence. And there is nothing like artificial intelligence that drives the digitization of the world. Historically, artificial intelligence has always been the pioneer battalion of computer science.”

When something was new and untested, it was done in the field of artificial intelligence, because it was seen as something that requires intelligence in some way, a new way of modeling things. Intelligence can be understood to a very large degree as the ability to model new systems, to model new problems.

And so it's natural that even narrow artificial intelligence is about making models of the world. For instance, our current generation of deep-learning systems are already modeling things. They're not modeling things quite in the same way, with the same power, as human minds can do it—they're mostly classifiers, not simulators of complete worlds. But they're slowly getting there, and by making these models we are, of course, digitizing things. We are making things accessible in data domains. We are making these models accessible to each other, to computers, and to artificial intelligence systems.

And artificial intelligence systems provide extensions to all our minds. Already now, Google is something like my exocortex. It's something that allows me to access vast resources of information that feed into the way I think and extend my abilities. If I forget how to use a certain command in a programming language, it's there at my fingertips, and I rely on this entirely, like every other programmer on this planet. This is something that is incredibly powerful, and it was not possible when we started out programming, when we had to store everything in our own brains.

I think consciousness is a very difficult concept to understand because we mostly know it by reference. We can point at it, but it's very hard for us to understand what it actually is. And at this point the best model that I've come up with for what we mean by consciousness is that it is a model of a model of a model.

That is: our neocortex makes a model of our interactions with the environment. And part of our neocortex makes a model of that model; that is, it tries to find out how we interact with the environment so we can take this into account when we interact with the environment. And then we have a model of that model of our model, which means we have something that represents the features of that model, and we call this the Self.

And the Self is integrated with something like an attentional protocol. So we have a model of the things that we attended to, the things that we became aware of: why we process things and why we interact with the environment. And this protocol, this memory of what we attended to, is what we typically associate with consciousness. So in some sense we are not conscious in actuality, in the here and now, because that's not really possible for a process that needs to do many things over time in order to retrieve items from memory, process them, and do something with them.

Consciousness is actually a memory. It's a construct that is reinvented in our brain several times a minute. When we think about being conscious of something, it means that we have a model of that thing that makes it operable, that we can use. You are not really aware of what the world is like. The world out there is some weird [viewed?] quantum graph. It's something that we cannot possibly really understand—first of all because we, as observers, cannot really measure it. We don't have access to the full vector of the universe.


One Day, You Will Completely Forget Yourself

   March 6, 2017

As a rule, people aren't prepared for death; they just vigorously avoid the issue. You feel like Buddhism is preparing you for death. That is your vague thinking (not feeling). If you're not clear on the aim of Buddhism, why not find out instead of going on excursions based on vague opinion?

Preparation for death is a human issue, not just a Buddhist one. The actuality is that nearly all humans live as if in a dream. We think and act as if we will live forever, but that is not at all true. So we are very divorced from reality. We don't live with awareness. Our days are very dreamlike without that perspective, and therefore our days are without the meaning that the perspective brings.

Without knowing in your heart (not just as a distant concept) life's end, you lack awareness of life's beauty and preciousness. You lack awareness of the preciousness of these real days, these common events, these ordinary relationships. Because there is the underlying belief that you will always have access to them.

Without awareness of death, you lack meaning. To the degree that you are aware of death, the meaningfulness of everyday events is present for you.

There is a great scene in The Hurt Locker, in which the main character, James, finally comes home from the insane, hellish stress of war and finds himself in a grocery store which seems to be hellish in its bleakness. Although violent and senseless, the imminence of death on the battlefield did bring forth a sense of vitality, of being extremely alive. Our task is to connect with the vitality and meaning brought by awareness of death, without requiring the insanity of pursuing death.

When death does come to us, surprisingly and contrary to all common sense most of us are shocked, aghast, bewildered… as if the most unexpected tragedy had befallen us; even though, since as far back as the prehistoric reach of human memory, the certainty of death has been evident all around us.

But we avoid it, don't we? We shut ourselves away from our tribes, and we hide the dead and dying from sight. Even in our spiritual pursuits, we talk in lofty terms about eternal life, avoiding the more pressing and personal fact of certain death. First know impermanence deeply; then you can reasonably connect with your undying nature.

A great many people die very poorly, in extreme mental torture because throughout their lives they have denied and avoided the fact of their vulnerability and impermanence. People rail and wail and curse the heavens when a family member or loved one dies; and how much more so when they themselves (we) receive a terminal diagnosis or otherwise head toward the end.

Armchair philosophers (you?) keep the issue of death at a distance, as if it is merely a theoretical event. Do you imagine that your death will go as smoothly and comfortably as slipping into a warm bed and falling asleep? It may happen violently. It may happen decades before you expect it to happen. It may happen before you ‘get it all together.’

Your aging may involve great loss or suffering. You may be sick for long periods; you may lose the use of your limbs, or your eyes, or your brain… or all three. You may not be able to digest food for years; or you may not be able to pass urine. Pain and debility may increase in your body for ten, twenty, thirty years—if you are fortunate enough to live long enough. Or you may quickly hurtle toward death, ravaged by cancer or heart disease or some influenza or new viral strain. You may think “If only I could live another year, I would gladly bear the pain,” or you may think “Oh, God, let me die quickly to end the pain.” Or you may have both prayers turning within you at the same time.

So many people and their families are horribly unprepared for death, caught off guard by their helplessness and by the certainty of their end. They can't help themselves, and they can't deal with the reality of their situation. If you actually spend time with those who are dying, you will find that it is rather rare for someone to be able to die with grace and contentment. The rule is generally that the habits built up during life are continued when the pressure rises: more avoidance, more self-distraction, more ominous pressure without knowing what to do with it. More denial, more clamoring for control. Dullness, anxiety, confusion….

And even if a person can muster the courage and presence to be with their situation, in so many cases their family and society won't allow them to die in clarity and dignity. Others around them, caught in the same habitual denial, will often impose small talk and distraction at times when the dying person wants and needs to feel and communicate deeply. Family and friends often bring their noise and busy-mindedness and emotional clinging at a time when the dying person needs space and freedom. And even those who are supposed to be serving the dying—the doctors, nurses, and other health professionals involved—obscure the truth or almost cynically impose bureaucracy and technology as if they are battling a video-game opponent rather than caring for a human being.

If you actually spend time with the dying, this won't be a distant theoretical issue for you—something you can make opinions about from the comfort of your living room. And if you have the illusion that you are already prepared for death, just consider how easily you can let go of things right now: can you give up your opinions freely; can you avoid being triggered by insults; can you be happy in situations that aren't your favorite (like, say, living with the smell of feces and urine on a daily basis)? Is it easy for you to give away money and possessions? Can you gladly live without your favorite food for a year? Are you unmoved by the desire for praise and fame (to be well-thought-of)? Can you give up your favorite activities, no problem? No addictions, no attachments? How openhandedly do you give up friendships or romantic relationships? How harmonious is your mind when you lose your job?

Most of us get upset enough about not getting the last piece of cake! We have a long, long, long way to go to achieve the contentedness that can face death without suffering.

Because all of these and more will have to be given up when you die. So: are you ready for that?

Factor in the shock and terror of it—the dreaded finality—as all the things that you relied on previously are stripped away: no friend or family or loved one can help you; no status can help you; no social or professional standing can help you; no knowledge can help you; no memory can help you; no possession can help you. You lose your strength, your ability to move, your ability to communicate, your power to assimilate food and water, your senses, your brain activity…. Everything you previously depended on will go away. So how will you do?

This is not just Buddhist. Every being faces this.

   Ujwal Chaudhary, Bin Xia, et al.: January 31, 2017
A brain-computer interface records “yes” and “no” answers in patients who lack any voluntary muscle movement.

In 1995, Jean-Dominique Bauby suffered a massive stroke that left him paralyzed and speechless, with only the ability to blink his left eyelid. Using just that eye, he silently dictated his memoir, The Diving Bell and the Butterfly, later adapted into a film.

Bauby suffered from “locked-in syndrome,” in which patients are completely paralyzed except for some eye movement. Some patients eventually lose even the ability to blink, cutting off all contact with the world and raising questions of whether they are still fully conscious and, if so, whether they still wish to live.

Now researchers in Europe say they’ve found an answer, after using a brain-computer interface to communicate with four people completely locked in after losing all voluntary movement to Lou Gehrig’s disease, or amyotrophic lateral sclerosis.

In response to the statement “I love to live” three of the four replied yes. They also said yes when asked “Are you happy?” The fourth patient, a 23-year-old woman, wasn’t asked open-ended questions because her parents feared she was in a fragile emotional state.

Designed by neuroscientist Niels Birbaumer, now at the Wyss Center for Bio and Neuroengineering in Geneva, the brain-computer interface fits on a person’s head like a swimming cap and measures changes in electrical waves emanating from the brain and also blood flow using a technique known as near-infrared spectroscopy.

To verify the four could communicate, Birbaumer’s team asked patients, over the course of about 10 days of testing, to respond yes or no to statements such as “You were born in Berlin” or “Paris is the capital of Germany” by modulating their thoughts and altering the blood-flow pattern. The answers relayed through the system were consistent about 70 percent of the time, substantially better than chance.
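A quick binomial calculation shows why 70 percent consistency is "substantially better than chance" for yes/no answers. This is only an illustrative sketch, not the study's actual statistics, and the number of test statements (100 here) is an assumption; the article doesn't say how many were used:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of getting k or more yes/no answers right out of n by pure chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 70 of 100 known-answer statements answered correctly:
# chance alone produces this less than 1 time in 10,000
print(p_at_least(70, 100))
```

The tail probability shrinks rapidly with more trials, which is why repeated testing over about 10 days makes even a noisy 70-percent-reliable channel convincing.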


What price do we pay for civilization? For Walter Scheidel, a professor of history and classics at Stanford, civilization has come at the cost of glaring economic inequality since the Stone Age. The sole exception, in his account, is widespread violence: wars, pandemics, civil unrest. Only violent shocks like these have substantially reduced inequality over the millennia.

“It is almost universally true that violence has been necessary to ensure the redistribution of wealth at any point in time,” said Scheidel, summarizing the thesis of The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century, his newly published book. Surveying long stretches of human history, Scheidel said that “the big equalizing moments in history may not have always had the same cause, but they shared one common root: massive and violent disruptions of the established order.”

This idea is connected to Thomas Piketty’s Capital in the Twenty-First Century (2013), a New York Times bestseller Scheidel admires. Piketty found that “inequality does not go down by itself because we have economic development,” Scheidel said. “His book covers only 200 years and argues that only violent intervention can make that happen.”

But Scheidel, who has taught a freshman seminar on long-term inequality, wanted to know if this insight can be applied to all of history. He enlisted the help of Andrew Granato, a senior majoring in economics, to compile a bibliography of more than 1,000 titles. The result is a sweeping narrative about the link between inequality and peace that harkens back to the beginning of human civilization.

Formulating such a narrative is no simple task. The Great Leveler primarily relies on the published works of other historians—a challenge, in Scheidel’s view, of trying “to synthesize highly fragmented and specialized scholarship and create a single narrative.” As an expert on ancient Rome, however, Scheidel is well aware that pre-modern sources are limited and some are invalid. His familiarity with scant ancient sources prepared him to grapple with an abundance of more reliable modern records. “Looking at the distant past would have been more difficult for a modernist economist or historian,” said Scheidel, for whom it is “generally easier to deal with modern evidence because it is more familiar and thoroughly studied.”

A grim view

Scheidel acknowledges his pessimism about resolving inequality. “Reversing the trend toward greater concentrations of income, in the United States and across the world, might be, in fact, nearly impossible,” he said. Among the wide variety of catastrophes that level societies, Scheidel identifies what he calls “four horsemen”: mass mobilization or state warfare, transformative revolution, state collapse and plague.

A textbook example of mass mobilization is World War II, a conflict that embroiled many developed countries and, key for Scheidel, “uniformly hugely reduced inequality.” As with Europe and Japan, he said, “in the U.S. there were massive tax increases, state intervention in the economy to support the war effort and increase output, which triggered a redistribution of resources, benefiting workers and harming the interests of the top 1 percent.”

Another “horseman” was the outbreak of the bubonic plague in 14th-century Eurasia. While war wreaks havoc on everything, a pandemic of this magnitude “kills a third of the population, but does not damage the physical infrastructure,” Scheidel said. “As a result, labor becomes scarce, wages grow and the gap between the rich and the poor narrows.”


Public health officials from Nevada are reporting on a case of a woman who died in Reno in September from an incurable infection. Testing showed the superbug that had spread throughout her system could fend off 26 different antibiotics. “It was tested against everything that’s available in the United States [...] and was not effective,” said Dr. Alexander Kallen, a medical officer in the Centers for Disease Control and Prevention’s division of health care quality promotion.

Although this isn’t the first time someone in the US has been infected with pan-resistant bacteria, at this point, it is not common. It is, however, alarming. “I think this is the harbinger of future badness to come,” said Dr. James Johnson, a professor of infectious diseases medicine at the University of Minnesota and a specialist at the Minnesota VA Medical Center.

Other scientists are saying this case is yet another sign that researchers and governments need to take antibiotic resistance seriously. It was reported Thursday in Morbidity and Mortality Weekly Report, a journal published by the CDC. The authors of the report note this case underscores the need for hospitals to ask incoming patients about foreign travel and also about whether they had recently been hospitalized elsewhere.

The case involved a woman who had spent considerable time in India, where multi-drug-resistant bacteria are more common than they are in the US. She had broken her right femur—the big bone in the thigh—while in India a couple of years back. She later developed a bone infection in her femur and her hip and was hospitalized a number of times in India in the two years that followed. Her last admission to a hospital in India was in June of last year.

The unnamed woman—described as a resident of Washoe County who was in her 70s—went into a hospital in Reno for care in mid-August, where it was discovered she was infected with what is called a CRE—carbapenem-resistant Enterobacteriaceae. That’s a general name for bacteria that commonly live in the gut and have developed resistance to the class of antibiotics called carbapenems—an important last line of defense used when other antibiotics fail. CDC Director Dr. Tom Frieden has called CREs “nightmare bacteria” because of the danger they pose for spreading antibiotic resistance. In the woman’s case, the specific bacterium attacking her was Klebsiella pneumoniae, a bug that often causes urinary tract infections.

Testing at the hospital showed resistance to 14 drugs—all the drug options the hospital had, said Lei Chen, a senior epidemiologist with Washoe County Health District and an author of the report. “It was my first time to see a [resistance] pattern in our area,” she said.


   Dr. Bruce Levine: August 30, 2013

Throughout history, societies have existed with far less coercion than ours, and while these societies have had far fewer consumer goods and less of what modernity calls “efficiency,” they also have had far less mental illness. This reality has been buried, not surprisingly, by uncritical champions of modernity and mainstream psychiatry. Coercion—the use of physical, legal, chemical, psychological, financial, and other forces to gain compliance—is intrinsic to our society’s employment, schooling, and parenting. However, coercion results in fear and resentment, which are fuels for miserable marriages, unhappy families, and what we today call mental illness.

Societies with Little Coercion and Little Mental Illness

Shortly after returning from the horrors of World War I and before they wrote Mutiny on the Bounty (1932), Charles Nordhoff and James Norman Hall were given a commission by Harper’s Magazine to write nonfiction travel articles about life in the South Pacific. Their reports about the islands of Paumoto, Society, and the Hervey group were first serialized in Harper’s and then published in the book Faery Lands of the South Seas (1921). Nordhoff and Hall were struck by how little coercion occurred in these island cultures compared to their own society, and they were enchanted by the kind of children that such noncoercive parenting produced:

“There is a fascination in watching these youngsters, brought up without clothes and without restraint... Once they are weaned from their mothers’ breasts—which often does not occur until they have reached an age of two and a half or three—the children of the islands are left practically to shift for themselves; there is food in the house, a place to sleep, and a scrap of clothing if the weather be cool—that is the extent of parental responsibility. The child eats when it pleases, sleeps when and where it will, amuses itself with no other resources than its own. As it grows older certain light duties are expected of it—gathering fruit, lending a hand in fishing, cleaning the ground about the house—but the command to work is casually given and casually obeyed. Punishment is scarcely known... [Yet] the brown youngster flourishes with astonishingly little friction—sweet tempered, cheerful, never bored, and seldom quarrelsome.”

For many indigenous peoples, even the majority rule that most Americans call democracy is problematically coercive, as it results in the minority feeling resentful. Roland Chrisjohn, member of the Oneida Nation of the Confederacy of the Haudenosaunee (Iroquois) and author of The Circle Game, points out that for his people, it is deemed valuable to spend whatever time necessary to achieve consensus so as to prevent such resentment. By the standards of Western civilization, this is highly inefficient. “Achieving consensus could take forever!” exclaimed an attendee of a talk that I heard given by Chrisjohn, who responded, “What else is there more important to do?”

Among indigenous societies, there are many accounts of a lack of mental illness, a minimum of coercion, and wisdom that coercion creates resentment which fractures relationships. The 1916 book The Institutional Care of the Insane of the United States and Canada reports, “Dr. Lillybridge of Virginia, who was employed by the government to superintend the removal of Cherokee Indians in 1827-8-9, and who saw more than 20,000 Indians and inquired much about their diseases, informs us he never saw or heard of a case of insanity among them.” Psychiatrist E. Fuller Torrey, in his 1980 book Schizophrenia and Civilization, states, “Schizophrenia appears to be a disease of civilization.”

In 1973, Torrey conducted research in New Guinea, which he called “an unusually good country in which to do epidemiologic research because census records for even most remote villages are remarkably good.” Examining these records, he found, “There was over a twentyfold difference in schizophrenia prevalence among districts; those with a higher prevalence were, in general, those with the most contact with Western civilization.” In reviewing others’ research, Torrey concluded:

“Between 1828 and 1960, almost all observers who looked for psychosis or schizophrenia in technologically undeveloped areas of the world agreed that it was uncommon… The striking feature… is the remarkable consensus that insanity (in the early studies) and schizophrenia (in later studies) were comparatively uncommon prior to contact with European-American civilization… But around 1950 an interesting thing happened… the idea became current in psychiatric literature that schizophrenia occurs in about the same prevalence in all cultures and is not a disease of civilization.”

I have held a job, Howard!

   John Glenn: May 4th, 1974

Howard, I can’t believe you said I have never held a job. I served twenty-three years in the United States Marine Corps. I served through two wars. I flew 149 missions. My plane was hit by anti-aircraft fire on twelve different occasions. I was in the space program. It wasn't my checkbook; it was my life on the line.

It was not a nine-to-five job where I took time off to take the daily cash receipts to the bank. I ask you to go with me, as I went the other day, to a Veterans Hospital and look those men, with their mangled bodies, in the eye and tell them they didn't hold a job.

You go with me to any gold-star mother and you look her in the eye and tell her that her son did not hold a job.

You go with me to the space program, and go as I have gone to the widows and orphans of Ed White and Gus Grissom and Roger Chaffee, and you look those kids in the eye and tell them that their Dad didn't hold a job.

You go with me on Memorial Day coming up and you stand in Arlington National Cemetery, where I have more friends than I'd like to remember, and you watch those waving flags. You stand there, and you think about this nation, and you tell me that those people didn't have a job.

I'll tell you, Howard Metzenbaum, you should be on your knees every day of your life thanking God that there were some men – some men – who held a job. And they required a dedication to purpose and a love of country and a dedication to duty that was more important than life itself. And their self-sacrifice is what made this country possible.

I have held a job, Howard!

John Glenn’s ending rebuttal statement delivered during a debate with Howard Metzenbaum at the Cleveland City Club. At the time of the debate, Glenn and Metzenbaum were running against each other in the Ohio Democratic Primary for U.S. Senator. In a speech given a few weeks prior to the debate Metzenbaum stated that Glenn had never held a real job.

So many experts and so many predictions. What's their track record?

   Janet Torge & Josh Freed: October 18, 2013

We all count on experts, but should we trust them? Turns out, the more certain their pronouncements, the more likely they are to be wrong. The Trouble with Experts reminds us: we are all addicted to experts. They tell us what to eat, how to vote, raise our kids, fix our homes, buy our wines, interpret political events and, until recently, choose the right stocks. They're all over the media telling us what to think, because there's just too much information for us to sort out ourselves. So we often cede our own opinions to “them” because, well … they're experts, so they know better than us. Or do they?

In the recent stock meltdown, we discovered that some of our most important experts—our financial gurus—didn't know much at all. So what about all the other experts out there? Does having expertise actually mean you make better decisions than regular people? Or are they just part of a new cult of expertise, an ever-growing “expert industry” that's become our latest new religion?

“We all want wise men to give us the secret truth, the real low-down, the inside dope about things—someone who knows more than we mere mortals know,” says writer/director Josh Freed. But the reality is that many so-called experts don't know any more than you or me. In fact, a 20-year study of experts shows they're only right about half the time.

There are similar findings from other “experts on experts” we meet in the film, like Berkeley Psychology Professor Philip Tetlock, Christopher Cerf (co-founder of the Institute of Expertology) and New Yorker science writer David Freedman (who's authored a new book called WRONG). Among their findings—the more famous the forecaster, the more overblown the forecasts, the more wrong they are.

The documentary features some astonishing stories of experts in the wrong. We meet British artist John Myatt, who used house paint and KY jelly to forge the works of great masters. He managed to fool top art critics and museums for 8 years before he was finally caught. Then there are the wine experts who can't even distinguish white wine from red, and political experts whose predictions were only a tiny bit better than random guesses, the equivalent of a chimpanzee throwing darts at a board.

Appearing on TV makes experts even more wrong, say Philip Tetlock and Christopher Cerf. “Show producers don't want us to sit there listening to an expert thinking, ‘I could have said that myself.’ They want certainty, clarity and drama, and they call on ‘experts’ who see things in black and white, or who are happy to exaggerate their positions to sound more certain and entertaining.” Adds Tetlock: “The experts who are most often accurate in our studies are cautious, quiet and somewhat more boring. Try selling that to a TV producer.”

So how do you become an expert anyway?


   Oriol Vinyals et al.: November 3, 2016
Algorithms usually need thousands of examples to learn something. Researchers at Google DeepMind found a way around that.

Most of us can recognize an object after seeing it once or twice. But the algorithms that power computer vision and voice recognition need thousands of examples to become familiar with each new image or word. Researchers at Google DeepMind now have a way around this. They made a few clever tweaks to a deep-learning algorithm that allows it to recognize objects in images and other things from a single example—something known as "one-shot learning." The team demonstrated the trick on a large database of tagged images, as well as on handwriting and language.

The best algorithms can recognize things reliably, but their need for data makes building them time-consuming and expensive. An algorithm trained to spot cars on the road, for instance, needs to ingest many thousands of examples to work reliably in a driverless car. Gathering so much data is often impractical—a robot that needs to navigate an unfamiliar home, for instance, can’t spend countless hours wandering around learning.

Oriol Vinyals, a research scientist at Google DeepMind (a U.K.-based subsidiary of Alphabet focused on artificial intelligence), added a memory component to a deep-learning system. Such systems are large neural networks trained to recognize things by adjusting the sensitivity of many layers of interconnected components roughly analogous to the neurons in a brain, and they normally need to see lots of images to fine-tune the connections between these virtual neurons.

The team demonstrated the capabilities of the system on a database of labeled photographs called ImageNet. The software still needs to analyze several hundred categories of images, but after that it can learn to recognize new objects—say, a dog—from just one picture. It effectively learns to recognize the characteristics in images that make them unique. The algorithm was able to recognize images of dogs with an accuracy close to that of a conventional data-hungry system after seeing just one example.
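The team's matching-networks approach is more involved than can be shown here, but its core mechanic can be reduced to a short sketch: classify a query by comparing its embedding against the embedding of a single labeled example per class. The `embed` function below is a stand-in assumption; in the real system it is a deep network trained end-to-end, and the "images" here are just toy feature vectors.

```python
import numpy as np

def embed(x):
    # Placeholder embedding (an assumption): in practice a trained CNN
    # maps an image to a feature vector. Here we just L2-normalize.
    v = np.asarray(x, dtype=float)
    return v / np.linalg.norm(v)

def one_shot_classify(query, support):
    """support: {label: single_example}. Returns the label whose lone
    example is most similar (cosine) to the query in embedding space."""
    q = embed(query)
    sims = {label: float(q @ embed(x)) for label, x in support.items()}
    return max(sims, key=sims.get)

# One labeled example per class -- the "one shot".
support = {"dog": [0.9, 0.1, 0.2], "cat": [0.1, 0.8, 0.3]}
print(one_shot_classify([0.85, 0.15, 0.25], support))  # -> dog
```

The point of the trick is that no retraining happens when a new class appears: a single stored example is enough, because all the learning went into the embedding.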


The country into which young voters have recently been born finds itself in a state of depravity. Forty-two million Americans, including 14 million children, do not have enough food. Despite gains made under the rapidly disintegrating Affordable Care Act, nearly 30 million Americans do not have health insurance. Neither of these facts results from a lack of food or medicine, as 50 years spent pumping chickens and cows with our copious reserves of penicillin and tetracycline have brought about a glut of environmentally unsustainable farm animals and a world on the precipice of antibiotic-resistant infections.

Black Americans, 50 years after Jim Crow, remain precarious in their claim to citizenship, subject to daily harassment and theft routinely punctuated by outright murder at the hands of the state. Over 2 million Americans are in prison. Those fortunate enough to graduate from college carry a debt load virtually impossible to discharge by any legal avenue, while prospects for employment remain scarce. Those fortunate enough to be employed have not seen their wages keep pace with productivity in 40 years. The economy—insufficiently chastened by the 2008 collapse, in part because the perpetrators of that collapse remain effectively in control of their own regulators—is haunted by the possibility of another shock. Who is doing well? Silicon Valley libertarians, promising to disrupt us back to 19th-century labor relations, cheered on by members of the notionally liberal party.

In September, atmospheric measurements confirmed that industrial output, driven in large part by the United States, has lurched the world past 400 parts per million of atmospheric carbon dioxide. At least 2 degrees Celsius of global warming and its attendant catastrophes are now nearly inevitable.

Remember, too, that even this tenuous arrangement of a nation is maintained only by a grinding, ambient violence: the occupation, torture and incineration of the citizens of no fewer than six sovereign nations at present, a world kept at bay by the 23,000 bombs dropped by the American military annually, and a world that, for all this effort, does not appear content to remain at bay much longer. We are closer than we have been in a generation to the real possibility of nuclear war.


Editor's note:
In this article, Newsweek incorrectly predicted Hillary Clinton as the winner of the 2016 U.S. presidential election.

   Alex Graves & Greg Wayne: October 12, 2016

Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory.
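The paper's full architecture includes write heads, temporal links, and usage-based memory allocation; the sketch below shows only the content-based read the abstract alludes to, in which a key emitted by the controller is softly matched against every memory row. Dimensions and values are illustrative assumptions, not the authors' code; the point is that every step is smooth, so the whole read is trainable by gradient descent.

```python
import numpy as np

def content_read(memory, key, beta=5.0):
    """memory: (N, W) matrix of N slots; key: (W,) lookup vector;
    beta: sharpness of the focus. Returns a blended read vector."""
    # Cosine similarity between the key and each memory row.
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)
    w /= w.sum()              # softmax read weights over the slots
    return w @ memory         # differentiable weighted average of rows

M = np.array([[1.0, 0.0],     # slot 0
              [0.0, 1.0],     # slot 1
              [0.7, 0.7]])    # slot 2
r = content_read(M, np.array([1.0, 0.1]))
print(r)  # dominated by slot 0, whose content best matches the key
```

Because the read is a weighted average rather than a hard lookup, gradients flow back through the weights to whatever network produced the key, which is what lets the DNC learn to use its memory from data.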

Human civilization has always been a virtual reality

   October 2, 2016 / June 28, 2016

Consciousness itself is fundamental to all our virtual realities. Consciousness is the medium through which all our cultures, religions, civilizations, thoughts, and reality tunnels play out. So although what we have created can be regarded as virtual realities, consciousness itself doesn't necessarily fit that criterion. Consciousness is real; ‘Canada’ and ‘Canadian’ are imaginary, virtual. Consciousness plays within the framework of ‘Canada’ and ‘Canadian,’ but it isn't those constructs on a fundamental level. Consciousness is not virtual reality, though it uses virtual realities to operate.

What are we without our virtual realities? We are alive, and indeed life itself. We are the end result [I'd disagree with the notion of ‘end result’ - Ed.] of billions of years of cosmological evolution. And we are consciousness. One of the all-time best comments for underscoring the insidious depth of this virtual-reality-projecting faculty of the mind is found in a kōan popularized by the non-dualist author Adyashanti: “At the end of the day a real Buddhist realizes that there is no such thing as a real Buddhist.” So it is not as Morpheus says: “As long as the Matrix exists the human race will never be free.” We will always be building a Matrix to see, move, and operate through. It is rather that as long as the Matrix exists, remains deeply unacknowledged as a Matrix, and is ever indulged as “true,” the human race will never be free. Nor, consequently, will the global ecosystem likely survive; for as long as it remains a mere supporting character, or worse has no role whatsoever, in our reality tunnels, it cannot be given the prominence it deserves in our decisions.

A trial that could lead to the greatest societal transformation of our time

Finland is about to launch an experiment in which a randomly selected group of 2,000–3,000 citizens already on unemployment benefits will begin to receive a monthly basic income of 560 euros (approx. $600). That basic income will replace their existing benefits. The amount is the same as the current guaranteed minimum level of Finnish social security support. The pilot study, running for two years in 2017-2018, aims to assess whether basic income can help reduce poverty, social exclusion, and bureaucracy, while increasing the employment rate.

The Finnish government introduced its legislative bill for the experiment on 25 August. Originally, the scope of the basic income experiment was much more ambitious. Many experts have criticized the government's experiment for its small sample size and for the setup of the trial, which will be run with just one experimental condition. This means the experiment can shed light on only one question: whether removing the disincentives embedded in social security will encourage those now unemployed to return to the workforce.


   Tim Urban: October 20, 2014

You go to school, study hard, get a degree, and you’re pleased with yourself. But are you wiser? You get a job, achieve things at the job, gain responsibility, get paid more, move to a better company, gain even more responsibility, get paid even more, rent an apartment with a parking spot, stop doing your own laundry, and you buy one of those $9 juices where the stuff settles down to the bottom. But are you happier?

You do all kinds of life things—you buy groceries, read articles, get haircuts, chew things, take out the trash, buy a car, brush your teeth, shit, sneeze, shave, stretch, get drunk, put salt on things, have sex with someone, charge your laptop, jog, empty the dishwasher, walk the dog, buy a couch, close the curtains, button your shirt, wash your hands, zip your bag, set your alarm, fix your hair, order lunch, act friendly to someone, watch a movie, drink apple juice, and put a new paper towel roll on the thing.

But as you do these things day after day and year after year, are you improving as a human in a meaningful way?


Research firm finds businesses led by lower-paid CEOs earn greater shareholder return

The study, carried out by corporate research firm MSCI, found that every $100 (£76) invested in companies with the highest-paid CEOs would have grown to $265 (£202) over 10 years, while the same amount invested in the companies with the lowest-paid CEOs would have grown to $367 (£279) over the same decade.
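As a quick sanity check, those end values imply the following compound annual growth rates; this is a back-of-the-envelope calculation from the reported figures, not part of the MSCI study itself.

```python
def annualized_return(start, end, years):
    """Compound annual growth rate implied by growing `start` to `end`."""
    return (end / start) ** (1 / years) - 1

# Reported figures: $100 invested over 10 years.
high_pay = annualized_return(100, 265, 10)  # companies with highest-paid CEOs
low_pay = annualized_return(100, 367, 10)   # companies with lowest-paid CEOs
print(f"highest-paid CEOs: {high_pay:.1%} per year")  # about 10%
print(f"lowest-paid CEOs:  {low_pay:.1%} per year")   # about 14%
```

The roughly four-percentage-point annual gap, compounded over a decade, is what produces the striking difference in final value.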

New report calculates that earnings did not rise for more than half a billion people between 2005 and 2014

Half a billion people in 25 of the west's richest countries suffered from flat or falling pay packets in the decade covering the financial and economic crisis of 2008-09, according to a report highlighting the impact of the Great Recession on household incomes.

Research by the McKinsey Global Institute found that between 65% and 70% of people in 25 advanced countries saw no increase in their earnings between 2005 and 2014.

The report found there had been a dramatic increase in the number of households affected by flat or falling incomes and that today's younger generation was at risk of ending up poorer than their parents. Only 2% of households, 10 million people, lived through the period from 1993 to 2005—a time of strong growth and falling unemployment—without seeing their incomes rise.

What Is a Neural Network?

It's a technique for building a computer program that learns from data. It is based very loosely on how we think the human brain works. First, a collection of software “neurons” are created and connected together, allowing them to send messages to each other. Next, the network is asked to solve a problem, which it attempts to do over and over, each time strengthening the connections that lead to success and diminishing those that lead to failure. For a more detailed introduction to neural networks, Michael Nielsen's Neural Networks and Deep Learning is a good place to start. For a more technical overview, try Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
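The description above can be turned into a minimal runnable sketch, assuming a tiny two-layer network trained on the XOR problem. "Strengthening the connections that lead to success" takes the concrete form of gradient descent; real systems use frameworks such as TensorFlow rather than hand-written loops like this.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# Two layers of software "neurons": 2 inputs -> 8 hidden -> 1 output.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.2  # learning rate

for _ in range(10_000):
    h = np.tanh(X @ W1 + b1)       # hidden-layer activations
    out = sigmoid(h @ W2 + b2)     # the network's current answers
    # Backpropagation: nudge every connection to reduce the error.
    d_out = out - y                        # gradient at the output
    d_h = (d_out @ W2.T) * (1 - h ** 2)    # gradient at the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # the four answers, in input order
```

After training, the outputs sit near the 0/1 targets: the connections that produced correct answers have been strengthened, exactly the process the paragraph describes.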

   July 4, 2016

At the moment, the prosperity of the wealthiest members of society is deeply dependent on some degree of social harmony. They need their middle managers to show up to work, their secretaries to organize their calendars and their janitors to make meeting rooms look respectable. The further automation progresses, the less the prospect of social disorder—or even a general strike—matters to the world of the rich.


For now, the material interests of wealthy [citizens - Ed.] remain bound up with the maintenance of political order to a considerable degree. Support for democracy remains strong, even among elites (though the level of that support has declined steadily in recent decades). But as inequality gets worse, that shared investment may fade. In an increasingly automated economy, the many are not only poor; from the perspective of the rich, they are also—for the first time in human history—dispensable.

As automation advances, the elites' costs of suppressing democracy will fall, while the costs of tolerating democracy (with its increasing risk of redistribution) will rise. With each passing day, they will have a stronger incentive to rebel against democracy rather than to accept that much of their wealth might be redirected to the increasingly large ranks of the destitute.

Ultimately, then, society faces one of two scenarios: either the political system will find a way to redistribute wealth that preserves a middle class, or it will lose the capacity to redistribute wealth at all because it has become completely controlled by elites. Redistribution can still preserve the middle class, and with it both widespread wealth and democratic stability. But the window in which to act is short, and it is already starting to close.