
The Benefits of Building an Artificial Brain

In the mid-1940s, a few brilliant people drew up the basic blueprints of the computer age. They conceived a general-purpose machine based on a processing unit made up of specialized subunits and registers, which operated on stored instructions and data. Later inventions—transistors, integrated circuits, solid-state memory—would supercharge this concept into the greatest tool ever created by humankind.

So here we are, with machines that can churn through tens of quadrillions of operations per second. We have voice-recognition-enabled assistants in our phones and homes. Computers routinely thrash us in our ancient games. And yet we still don’t have what we want: machines that can communicate easily with us, understand and anticipate our needs deeply and unerringly, and reliably navigate our world.

Now, as Moore’s Law seems to be starting some sort of long goodbye, a couple of themes are dominating discussions of computing’s future. One centers on quantum computers and stupendous feats of decryption, genome analysis, and drug development. The other, more interesting vision is of machines that have something like human cognition. They will be our intellectual partners in solving some of the great medical, technical, and scientific problems confronting humanity. And their thinking may share some of the fantastic and maddening beauty, unpredictability, irrationality, intuition, obsessiveness, and creative ferment of our own.

In this issue, we consider the advent of neuromorphic computing and its prospects for ushering in a new age of truly intelligent machines. It is already a sprawling enterprise, being propelled in part by massive research initiatives in the United States and Europe aimed at plumbing the workings of the human brain. Parallel engineering efforts are now applying some of that knowledge to the creation of software and specialized hardware that “learn”—that is, get more adept—by repeated exposure to computational challenges.

Brute speed and clever algorithms have already produced machines capable of equaling or besting us at activities we’ve long thought of as deeply human: not just poker and Go but also stock picking, language translation, facial recognition, drug discovery and design, and the diagnosis of several specific diseases. Pretty soon, speech recognition, driving, and flying will be on that list, too.

The emergence of special-purpose hardware, such as IBM’s TrueNorth chips and the University of Manchester’s SpiNNaker, will eventually make the list longer. And yet, our intuition (which for now remains uniquely ours) tells us that even then we’ll be no closer to machines that can, through learning, become capable of making their way in our world in an engaging and yet largely independent way.

To produce such a machine we will have to give it common sense. If you act erratically, for example, this machine will recall that you’re going through a divorce and subtly change the way it deals with you. If it’s trying to deliver a package and gets no answer at your door, but hears a small engine whining in your backyard, it will come around to see if there’s a person (or machine) back there willing to accept the package. Such a machine will be able to watch a motion picture, then decide how good it is and write an astute and insightful review of the movie.

But will this machine actually enjoy the movie? And, just as important, will we be able to know if it does? Here we come inevitably to the looming great challenge, and great puzzle, of this coming epoch: machine consciousness. Machines probably won’t need consciousness to outperform us in almost every measurable way. Nevertheless, deep down we will surely regard them with a kind of disdain if they don’t have it.

Trying to create consciousness may turn out to be the way we finally begin to understand this most deeply mysterious and precious of all human attributes. We don’t understand how conscious experience arises or its purpose in human beings—why we delight in the sight of a sunset, why we are stirred by the Eroica symphony, why we fall in love. And yet, consciousness is the most remarkable thing the universe has ever created. If we, too, manage to create it, it would be humankind’s supreme technological achievement, a kind of miracle that would fundamentally alter our relationship with our machines, our image of ourselves, and the future of our civilization.

We Could Build an Artificial Brain Right Now

Brain-inspired computing is having a moment. Artificial neural network algorithms like deep learning, which are very loosely based on the way the human brain operates, now allow digital computers to perform such extraordinary feats as translating language, hunting for subtle patterns in huge amounts of data, and beating the best human players at Go.

But even as engineers continue to push this mighty computing strategy, the energy efficiency of digital computing is fast approaching its limits. Our data centers and supercomputers already draw megawatts—some 2 percent of the electricity consumed in the United States goes to data centers alone. The human brain, by contrast, runs quite well on about 20 watts, which represents the power produced by just a fraction of the food a person eats each day. If we want to keep improving computing, we will need our computers to become more like our brains.

Hence the recent focus on neuromorphic technology, which promises to move computing beyond simple neural networks and toward circuits that operate more like the brain’s neurons and synapses do. The development of such physical brainlike circuitry is actually pretty far along. Work at my lab and others around the world over the past 35 years has led to artificial neural components like synapses and dendrites that respond to and produce electrical signals much like the real thing.

So, what would it take to integrate these building blocks into a brain-scale computer? In 2013, Bo Marr, a former graduate student of mine at Georgia Tech, and I looked at the best engineering and neuroscience knowledge of the time and concluded that it should be possible to build a silicon version of the human cerebral cortex with the transistor technology then in production. What’s more, the resulting machine would take up less than a cubic meter of space and consume less than 100 watts, not too far from the human brain.

That is not to say creating such a computer would be easy. The system we envisioned would still require a few billion dollars to design and build, including some significant packaging innovations to make it compact. There is also the question of how we would program and train the computer. Neuromorphic researchers are still struggling to understand how to make thousands of artificial neurons work together and how to translate brainlike activity into useful engineering applications.

Still, the fact that we can envision such a system means that we may not be far off from smaller-scale chips that could be used in portable and wearable electronics. These gadgets demand low power consumption, and so a highly energy-efficient neuromorphic chip—even if it takes on only a subset of computational tasks, such as signal processing—could be revolutionary. Existing capabilities, like speech recognition, could be extended to handle noisy environments. We could even imagine future smartphones conducting real-time language translation between you and the person you’re talking to. Think of it this way: In the 40 years since the first signal-processing integrated circuits, Moore’s Law has improved energy efficiency by roughly a factor of 1,000. The most brainlike neuromorphic chips could dwarf such improvements, potentially driving down power consumption by another factor of 100 million. That would bring computations that would otherwise need a data center to the palm of your hand.
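To make those factors concrete, here is a back-of-envelope sketch in Python. The 20-watt brain figure and the 1,000x and 100-million-fold efficiency factors are the rough ones cited above; the 1-megawatt data-center workload is an assumed round number for illustration, not a measurement.

```python
# Back-of-envelope power comparison, using the rough figures cited in this piece.
# The 1 MW data-center workload is an illustrative assumption, not a measurement.

BRAIN_POWER_W = 20.0        # human brain: roughly 20 watts
DATACENTER_POWER_W = 1.0e6  # assume a 1-megawatt workload for illustration

MOORE_GAIN = 1_000          # ~1,000x efficiency gain over 40 years of Moore's Law
NEUROMORPHIC_GAIN = 1.0e8   # speculative further factor of 100 million from brainlike chips

ratio = DATACENTER_POWER_W / BRAIN_POWER_W
print(f"1 MW workload vs. 20 W brain: {ratio:,.0f}x more power")

print(f"Neuromorphic gain vs. 40 years of Moore's Law: "
      f"{NEUROMORPHIC_GAIN / MOORE_GAIN:,.0f}x larger")

scaled_w = DATACENTER_POWER_W / NEUROMORPHIC_GAIN
print(f"Same workload after a 1e8x efficiency gain: {scaled_w * 1000:.0f} mW")
# ~10 mW is comfortably within a smartphone or wearable power budget.
```

Crude as it is, the arithmetic shows why even a partial realization of that efficiency gain would move data-center-class workloads into handheld devices.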

U.S. Slips in New Top500 Supercomputer Ranking

In June, we can look forward to two things: the Belmont Stakes and the first of the twice-yearly TOP500 rankings of supercomputers. This month, a well-known gray and black colt named Tapwrit came in first at Belmont, and a well-known gray and black supercomputer named Sunway TaihuLight came in first on June’s TOP500 list, released today in conjunction with the opening session of the ISC High Performance conference in Frankfurt. Neither was a great surprise.

Tapwrit was the second favorite at Belmont, and Sunway TaihuLight was the clear pick for the number-one position on the TOP500 list, having held that first-place ranking since June 2016, when it beat out another Chinese supercomputer, Tianhe-2. The TaihuLight, capable of some 93 petaflops in this year’s benchmark tests, was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi, China. Tianhe-2, capable of almost 34 petaflops, was developed by China’s National University of Defense Technology (NUDT), is deployed at the National Supercomputer Center in Guangzhou, and still holds the number-two position on the list.

More of a surprise, and perhaps more of a disappointment for some, is that the highest-ranking U.S. contender, the Department of Energy’s Titan supercomputer (17.6 petaflops) housed at Oak Ridge National Laboratory, was edged out of the third position by an upgraded Swiss supercomputer called Piz Daint (19.6 petaflops), installed at the Swiss National Supercomputing Center, part of the Swiss Federal Institute of Technology (ETH) in Zurich.

This is the first time since 1996 that a U.S. supercomputer has failed to claim one of the first three slots on the TOP500 list. But before we go too far in lamenting the sunset of U.S. supercomputing prowess, we should pause for a moment to consider that the computer that bumped Titan from the number-three position was built by Cray and is stuffed with Intel processors and NVIDIA GPUs, all products of U.S. companies.

Even the second-ranked Tianhe-2 is based on Intel processors and co-processors. It’s only the TaihuLight that is truly a Chinese machine, being based on the SW26010, a 260-core processor designed by the National High Performance Integrated Circuit Design Center in Shanghai. And U.S. supercomputers hold five of the 10 highest-ranking positions on the new TOP500 list.

Still, national rivalries seem to have locked the United States into a supercomputer arms race with China, with both nations vying to be the first to reach the exascale threshold—that is, to have a computer that can perform 10¹⁸ floating-point operations per second. China hopes to get there by amassing largely conventional hardware and is slated to have a prototype system ready around the end of this year. The United States, on the other hand, is looking to tackle the problems that come with scaling to that level using novel approaches, which require more research before even a prototype machine can be built. Just last week, the U.S. Department of Energy announced that it was awarding Advanced Micro Devices, Cray, Hewlett Packard, IBM, Intel, and NVIDIA US $258 million to support research toward building an exascale supercomputer. Who will get there first is, of course, up for grabs. But one thing’s for sure: It’ll be a horse race worth watching.
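As a footnote to that race: exascale means 10¹⁸ floating-point operations per second, and a quick back-of-envelope calculation using the benchmark figures quoted above shows how far the current leaders are from that mark. The figures below are the approximate ones cited in this article, not fresh benchmark results.

```python
# Distance from the exascale threshold (10^18 FLOPS), using the rough
# benchmark figures quoted in this article.

EXAFLOP = 1e18  # floating-point operations per second

systems_pflops = {
    "Sunway TaihuLight": 93.0,
    "Tianhe-2": 34.0,
    "Piz Daint": 19.6,
    "Titan": 17.6,
}

for name, pflops in systems_pflops.items():
    flops = pflops * 1e15  # 1 petaflop = 10^15 FLOPS
    print(f"{name}: {pflops:g} PFLOPS, {EXAFLOP / flops:.0f}x short of exascale")
```

Even the list leader would need roughly a tenfold jump in sustained performance to cross the exascale line, which is why both nations are still years away from the finish.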

Raspberry Pi Merger With CoderDojo Isn’t All It Seems

This past Friday, the Raspberry Pi Foundation and the CoderDojo Foundation became one. The Raspberry Pi Foundation described it as “a merger that will give many more young people all over the world new opportunities to learn how to be creative with technology.” Maybe. Or maybe not. Before I describe why I’m a bit skeptical, let me first take a moment to explain what these two entities are.

The Raspberry Pi Foundation is a charitable organization created in the U.K. in 2009. Its one-liner mission statement says it works to “put the power of digital making into the hands of people all over the world.” In addition to designing and manufacturing an amazingly popular line of inexpensive single-board computers—the Raspberry Pi—the Foundation has also worked very hard at providing educational resources.

The CoderDojo Foundation is an outgrowth of a volunteer-led, community-based programming club established in Cork, Ireland in 2011. That model was later cloned in many other places and can now be found in 63 countries, where local coding clubs operate under the CoderDojo banner.

So both organizations clearly share a keen interest in having young people learn about computers and coding. Indeed, the Raspberry Pi Foundation had earlier merged with Code Club, yet another U.K. organization dedicated to helping young people learn to program computers. With all this solidarity of purpose, it would seem only natural for such entities to team up, or so you might think. Curmudgeon as I am, though, I’d like to share a different viewpoint.

The issue is that, well, I don’t think that the Raspberry Pi is a particularly good vehicle to teach young folks to code. I know that statement will be considered blasphemy in some circles, but I stand by it.

The problem is that for students just getting exposed to coding, the Raspberry Pi is too complicated to use as a teaching tool and too limited to use as a practical tool. If you want to learn physical computing so that you can build something that interacts with sensors and actuators, better to use an 8-bit Arduino. And if you want to learn how to write software, better to do your coding on a normal laptop.

That’s not to say that the Raspberry Pi isn’t a cool gizmo or that some young hackers won’t benefit from using one to build projects—surely that’s true. It’s just not the right place to start in general. Kids are overwhelmingly used to working in OS X or Windows. Do they really need to switch to Linux to learn to code? Of course not. And that switch just adds a thick layer of complication and expense.

My opinions here are mostly shaped by my (albeit limited) experiences trying to help young folks learn to code, which I’ve been doing during the summer for the past few years as the organizer of a local CoderDojo workshop. I’ve brought in a Raspberry Pi on occasion and shown kids some interesting things you can do with one, for example, turning a Kindle into a cycling computer. But the functionality of the Raspberry Pi doesn’t impress these kids, who just compare it with their smartphones. And the inner workings of the RasPi are as inaccessible to them as the inner workings of their smartphones. So it’s not like you can use a RasPi to help them grasp the basics of digital electronics.

The one experience I had using the Raspberry Pi to teach coding was disastrous. While there were multiple reasons for things not going well, one was that the organizer wanted to have the kids “build their own computers,” which amounted to putting a Raspberry Pi into a case and attaching it to a diminutive keyboard and screen. Yes, the kids figured out how to do that quickly enough, but it left them with a computer that was ill suited for much of anything, especially learning to code.

So I worry that the recent merger just glosses over the fact that teaching kids to code and putting awesome single-board computers into the hands of makers are really two different exercises. I’m sure Eben Upton and lots of professional educators will disagree with me. But as I see things, channeling fledgling coders into using a Raspberry Pi to learn to program computers is counterproductive, despite surface indications that this is what we should be doing. And to my mind, the recent merger only promises to spread the misperception.