
Toward optical quantum computing

Ordinarily, light particles — photons — don’t interact. If two photons collide in a vacuum, they simply pass through each other.

An efficient way to make photons interact could open new prospects for both classical optics and quantum computing, an experimental technology that promises large speedups on some types of calculations.

In recent years, physicists have enabled photon-photon interactions using atoms of rare elements cooled to very low temperatures.

But in the latest issue of Physical Review Letters, MIT researchers describe a new technique for enabling photon-photon interactions at room temperature, using a silicon crystal with distinctive patterns etched into it. In physics jargon, the crystal introduces “nonlinearities” into the transmission of an optical signal.

“All of these approaches that had atoms or atom-like particles require low temperatures and work over a narrow frequency band,” says Dirk Englund, an associate professor of electrical engineering and computer science at MIT and senior author on the new paper. “It’s been a holy grail to come up with methods to realize single-photon-level nonlinearities at room temperature under ambient conditions.”

Joining Englund on the paper are Hyeongrak Choi, a graduate student in electrical engineering and computer science, and Mikkel Heuck, who was a postdoc in Englund’s lab when the work was done and is now at the Technical University of Denmark.

Photonic independence

Quantum computers harness a strange physical property called “superposition,” in which a quantum particle can be said to inhabit two contradictory states at the same time. The spin, or magnetic orientation, of an electron, for instance, could be both up and down at the same time; the polarization of a photon could be both vertical and horizontal.

If a string of quantum bits — or qubits, the quantum analog of the bits in a classical computer — is in superposition, it can, in some sense, canvass multiple solutions to the same problem simultaneously, which is why quantum computers promise speedups.
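To make the idea concrete, here is a minimal sketch (not from the MIT work) of how superposition looks in a state-vector simulation. It assumes only Python with NumPy, and the gate names and variables are generic textbook conventions rather than anything specific to photonic qubits.

```python
# Minimal state-vector sketch of superposition (illustrative; not from the article).
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)          # a qubit's |0> state

# The Hadamard gate puts a basis state into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

superposed = H @ ket0
print(np.abs(superposed) ** 2)                       # measurement odds: [0.5, 0.5]

# An n-qubit register needs 2**n amplitudes, which is why a register held in
# superposition can, loosely speaking, "canvass" many inputs at once.
n = 3
register, all_H = ket0, H
for _ in range(n - 1):
    register = np.kron(register, ket0)               # |000>
    all_H = np.kron(all_H, H)                        # H applied to every qubit
print(np.abs(all_H @ register) ** 2)                 # uniform over all 8 basis states
```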

Most experimental qubits use ions trapped in oscillating electromagnetic fields, superconducting circuits, or — like Englund’s own research — defects in the crystal structure of diamonds. With all these technologies, however, superpositions are difficult to maintain.

Because photons aren’t very susceptible to interactions with the environment, they’re great at maintaining superposition; but for the same reason, they’re difficult to control. And quantum computing depends on the ability to send control signals to the qubits.

That’s where the MIT researchers’ new work comes in. If a single photon enters their device, it will pass through unimpeded. But if two photons — in the right quantum states — try to enter the device, they’ll be reflected back.

The quantum state of one of the photons can thus be thought of as controlling the quantum state of the other. And quantum information theory has established that simple quantum “gates” of this type are all that is necessary to build a universal quantum computer.
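For illustration, the sketch below shows a standard two-qubit controlled-phase (CZ) gate acting on a pair of qubits in superposition. This is the textbook kind of conditional gate quantum information theory refers to, not a model of the MIT photonic device; it assumes Python with NumPy.

```python
# A controlled-Z (CZ) gate: one qubit flips the phase of the other only when
# both are |1>. This is a generic two-qubit gate, not a model of the MIT device.
import numpy as np

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

plus = H @ np.array([1, 0], dtype=complex)           # (|0> + |1>)/sqrt(2)
state = np.kron(plus, plus)                          # two independent qubits in |+>

entangled = CZ @ state
print(entangled)   # the sign flip on |11> is the "control" one qubit exerts on the other
```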

Tips to Uninstall MS Office 2016

Microsoft Office is one of the most widely used application suites on home and business PCs. The package includes MS Word, PowerPoint, Excel, Outlook, Access, Publisher, and other applications. Over time, Microsoft has released various versions of MS Office, each adding features that let users handle a range of tasks easily on the same platform. Today, you can install MS Office on Windows, macOS, or a mobile phone.

Already using MS Office 2016 but want to upgrade to the latest version? To do so, you first need to uninstall the previously installed Office from your device (laptop, desktop, or 2-in-1). You might also want to uninstall MS Office if it is not working correctly. Either way, you don’t need to be a pro; this article walks you through the complete procedure. Have a look:

  • The first step is to select your operating system
  • Download the Office uninstall tool, i.e. Microsoft Fix It
  • Run Fix It and follow the on-screen instructions
  • Click “Apply this fix” and let the uninstaller run; it takes a couple of minutes
  • Once it’s done, you will see the message “The troubleshooter has successfully uninstalled MS Office”

The other method is to remove MS Office 2016 from the Control Panel. Open the Control Panel, then select Add or Remove Programs (Windows XP) or Programs and Features (Windows Vista and later). Find Microsoft Office 2016 in the list and click Uninstall. Wait until the uninstallation finishes and then reboot your system.
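For readers who prefer to script the lookup, the sketch below uses Python’s standard winreg module to find Office 2016’s entry under the same registry keys that Programs and Features reads, and prints the uninstall command recorded there. Entry names differ between MSI and Click-to-Run installations, so treat this as an illustrative starting point rather than a guaranteed match.

```python
# Minimal sketch (Windows only): list registry uninstall entries whose display
# name mentions Microsoft Office 2016 and print their uninstall commands.
# Entry names vary with installation type (MSI vs. Click-to-Run), so this is
# illustrative rather than a guaranteed match.
import winreg

UNINSTALL_KEYS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def find_office_2016():
    for key_path in UNINSTALL_KEYS:
        try:
            root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path)
        except OSError:
            continue
        for i in range(winreg.QueryInfoKey(root)[0]):
            sub_name = winreg.EnumKey(root, i)
            with winreg.OpenKey(root, sub_name) as sub:
                try:
                    name = winreg.QueryValueEx(sub, "DisplayName")[0]
                    cmd = winreg.QueryValueEx(sub, "UninstallString")[0]
                except OSError:
                    continue
                if "Microsoft Office" in name and "2016" in name:
                    print(f"{name}\n  uninstall command: {cmd}")

if __name__ == "__main__":
    find_office_2016()
```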

If at any point you face an error with Office.com/setup, contact Microsoft customer support professionals. The certified technicians will offer quick, reliable support and help you uninstall Microsoft Office 2016.

Computer Hardware Firms Slip

Almost half of the computer hardware companies surveyed in a recent report had earnings returns that trailed even passbook savings accounts last year, according to an investment banking firm specializing in information technology company mergers.

The other side of the technology coin, information services companies (which include firms involved in software, publishing, printing and related supplies, marketing services, database and computer processing), fared better.

As a result, investors have fallen out of love with the high-tech hardware firms, once considered the darlings of Wall Street. The median market-price to book-value ratio was 1.4 to 1, the study found, which is about the same as representative cross-industry indexes. For information services companies, the ratio is 2.5 to 1.

Harvey Poppel, a partner at Broadview Associates, Ft. Lee, N.J., said his firm studied more than 463 publicly held U.S. information technology firms in both the hardware and information services sectors, with total revenues of more than $200 billion.

About 44 percent of the companies classified as computer and communications hardware firms had returns on equity of less than 5 percent. The median return was 6.5 percent. Hardware distribution and retail companies outpaced the manufacturing end, averaging about an 8 percent return.
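For reference, the two ratios quoted in this piece are simple quotients; the snippet below computes them with hypothetical figures chosen only to reproduce the medians mentioned above.

```python
# How the two ratios cited above are computed, with hypothetical figures.
def return_on_equity(net_income, shareholder_equity):
    return net_income / shareholder_equity

def price_to_book(market_price_per_share, book_value_per_share):
    return market_price_per_share / book_value_per_share

# Hypothetical hardware firm: $6.5M income on $100M equity, shares at 1.4x book.
print(f"ROE: {return_on_equity(6.5e6, 100e6):.1%}")      # 6.5%, the median above
print(f"P/B: {price_to_book(21.0, 15.0):.1f} to 1")      # 1.4 to 1
```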

The information services companies, however, averaged 15 percent returns, which is above the traditional 11 percent average for all U.S. companies, according to Poppel. The Standard & Poor's average for all U.S. companies has not yet been released for 1987.

Among the information services companies, software products companies performed the worst, showing some of the same volatility as the hardware group, the study found.

“This is the worst performance for the hardware companies in the four years that we've been tracking these kinds of results,” he said. “For the previous two years, when the average was in the 7 to 8 percent range, we blamed it on the computer slump. But 1987 growth rates were pretty good.”

Revenue growth for hardware companies averaged 10 percent over the previous year. Not including International Business Machines Corp., revenues were up 14 percent.

“The turnaround is here, but the profits still aren't,” Poppel said.

One of the major handicaps faced by computer hardware makers is that their products, which have very high research and development costs, have short life cycles, Poppel said. In addition, most of these companies are involved in a global marketplace, and have suffered from Asian price slashing and consumers' continued expectations of yearly price and performance improvements. Services companies face few of these problems.

GeIL Releases EVO SPEAR DDR4 Hardcore Memory

GeIL (Golden Emperor International Ltd.), one of the world’s leading PC component and peripheral manufacturers, announced its latest DDR4 hardcore gaming memory modules, the EVO SPEAR Series and EVO SPEAR AMD Edition Series. Available as single modules and in kits of up to 64GB, they run as low as 1.2V and at most 1.35V, resulting in lower power consumption and higher reliability. Featuring a stylish and stealthy standard-height heat spreader, EVO SPEAR Series modules can be used in most case designs, from SFF (Small Form Factor) systems to full-sized gaming PCs. The EVO SPEAR Series and EVO SPEAR AMD Edition Series are available in black with a black PCB and are perfect for gamers and enthusiasts who want a cost-efficient upgrade for faster gaming, video editing, and 3D rendering.

The EVO SPEAR Series is available in frequencies from 2133MHz to 3466MHz and is optimized for Intel® Core™ X, i7, and i5 processors as well as Z200 and X299 Series chipsets.

The EVO SPEAR AMD Edition Series is fully compatible with the latest AMD Ryzen 7 and Ryzen 5 processors and AM4 motherboards, and is available in frequencies from 2133MHz to 3200MHz.

“EVO SPEAR Series, and EVO SPEAR AMD Edition Series are ideal for gamers, enthusiasts, and case modders looking to maximize gaming performance with minimum investment,” says Jennifer Huang, Vice President of GeIL.

Fractal Design Launches New Tempered Glass Define C Chassis

So many cases on the market today are made to be all things to all people. However, for many this results in a chassis full of empty bays, unused mounts and excess bulk. Created for those who demand a flexible platform for a powerful ATX or Micro ATX build that wastes no space, the Define C TG Series is the perfect solution to satisfy this balance of capacity and efficiency while opening up the side thanks to a full tempered glass side panel.

Smaller than the usual ATX and Micro ATX cases, the Define C TG and Define Mini C TG provide the perfect base for users thanks to their optimized interiors. The open-air design offers unobstructed airflow across your core components, with high performance and silent computing in mind at every step.

Extensive cooling support via both air and water ensures that even the most powerful systems can be cooled effectively. Carrying signature Define series traits, the Define C TG Series brings with it the iconic front panel design, dense sound-dampening material throughout, and ModuVent technology in the top panel. Those wanting to remove the ModuVent to add more fans or a radiator can install the new magnetic dust filter in its place, and a built-in power supply shroud helps offer an unmatched level of cable management.

Our team of engineers in Sweden made sure performance without restrictions was paramount. With innovative design, the Define C TG Series brings your system together in a truly exquisite way, reminding us why we choose Fractal Design.

Define C TG and Define Mini C TG Key Features:

  • Define Series sound dampening with ModuVent™ technology for silent operation in a compact full ATX or Micro ATX form factor
  • Optimized for high airflow and silent computing
  • Tempered glass side panel for a clean looking exterior with full interior visibility
  • Side and front panels are lined with industrial-grade sound dampening material
  • Flexible storage options with room for up to 5 drives
  • Comes with two preinstalled Fractal Design Dynamic X2 GP-12 120 mm fans optimized to deliver maximum airflow while still maintaining a low noise level
  • (Define Mini C TG) Equipped with 5 PCI expansion slots for powerful dual GPU setups.
  • Open air designed interior creates an unobstructed airflow path from the front intake to the rear exhaust
  • Easy-to-clean high airflow nylon filters on the front and base with full PSU coverage and front access for convenience.
  • Includes optional top filter to prevent dust buildup when ModuVent is removed for additional fan slots.
  • Power supply shroud conceals drive cage and excess cabling for an even quieter and cleaner looking interior free of airflow obstructions

Why Hardware Engineers Should Think Like Cybercriminals

The future of cybersecurity is in the hands of hardware engineers. That’s what Scott Borg, director of the U.S. Cyber Consequences Unit, told 130 chief technical officers, engineering directors, and key researchers from MEMS and sensors companies and laboratories Thursday morning.

Borg, speaking at the MEMS and Sensors Technical Congress, held on the campus of Stanford University, warned that “the people in this room are now moving into the crosshairs of cyberhackers in a way that has never happened before.”

And Borg should know. He and his colleagues at the Cyber Consequences Unit (a nonprofit research institute) predicted the Stuxnet attack and some major developments in cybercrime over the last 15 years.

Increasingly, hackers are focusing on hardware rather than software, particularly industrial equipment, he indicated.

“Initially,” he said, “they focused on operations control, monitoring different locations from a central site. Then they moved to process control, including programmable logic controllers and local networks. Then they migrated to embedded devices and the ability to control individual pieces of equipment. Now they are migrating to the actual sensors, the MEMS devices.”

“You can imagine countless attacks manipulating physical things,” Borg said. And imagining those things definitely keeps him up at night—it’s not easy being a cybersecurity guru.

“Yesterday,” he said, while on a tour of a nanofab facility, “I saw tanks full of dangerous chemicals, controlled by computers moving things in and out. I immediately thought about which would be the prevailing direction of wind and how you could rupture the tanks with cyberattack. Whenever I look at an appliance, I think what could be done to it that causes maximum damage and embarrassment.”

The move to attacking hardware, just like any cyberattack, comes because hackers are thinking about the economics, Borg says. Hackers always profit in some way from their attacks, though the gain is not always monetary.

One way hardware hackers can profit by hurting a company can be by taking advantage of the resulting drop in its stock price; stock manipulation is a growth area for cybercrime in general, says Borg.

“There is a limit to how much you can steal from credit card fraud; there is no limit to how much you can make in taking a position in a market and making something happen,” Borg says. “You can short a company’s stock in a highly leveraged way, then attack the company in a way that makes stock fall, reinvest on the way down, and multiply your investment hundreds of times. This is a big growth area for cybercrime; it has been done multiple times already, but it is really just starting to get under way. This is going to be a huge area for cybercriminals.”
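Borg’s arithmetic is easy to sketch. The toy calculation below uses entirely invented numbers and a simplified model (no borrow fees, margin calls, or transaction costs) just to show how leverage turns a stock drop into a multiple of the attacker’s stake.

```python
# Toy illustration of the short-and-attack economics Borg describes.
# All numbers are invented; borrow fees, margin calls, and transaction costs are ignored.
def short_profit(entry_price, exit_price, shares, leverage):
    """Profit from shorting `shares` at entry_price and covering at exit_price,
    amplified by borrowing `leverage` times the attacker's own capital."""
    return (entry_price - exit_price) * shares * leverage

capital = 1_000_000           # attacker's own money
entry, exit_ = 100.0, 60.0    # stock falls 40% after the attack becomes public
shares = capital / entry      # shares the capital alone could cover
print(short_profit(entry, exit_, shares, leverage=10))   # 4,000,000 on 1,000,000 staked
```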

It is going to be up to engineers to stop this coming hardware cybercrime wave. And it’s not going to be easy because “engineers aren’t as easy to fool as scientists, but they are still really easy to fool.

“Engineers believe in data, in gauges, in measurements. They are a little less easy to fool than scientists in that they build physical systems that operate, and when they fail, they do have to try to figure out why and what real world effects are. But engineers aren’t used to dealing with unkind adversaries. They believe in statistics, where statistical distributions are normal, where probabilities can deal with independent variables. And statistics doesn’t work in a cyberworld. If you are up against a cunning adversary, who will behave in ways outside of normal, it is hard to use any of the techniques we use in the natural world. A cyberadversary will take advantage of unlikely circumstances.”

But, he said, if engineers, particularly design engineers, learn to understand the cybercriminal and think proactively about cyberattacks, they can often improve cybersecurity and do it for free.

“Increasing security isn’t always about layering on security [to a completed system], but about how you implement a certain function in the first place, and that choice often doesn’t cost more,” Borg says. “Decisions that are made in engineering at really fine-grained levels affect the costs of carrying out a cyberattack. Even a small sensor will have consequences for cybersecurity, not always in the immediate device, but as it develops into a product line.”

Engineers, therefore, need to look at their products from the standpoint of the attacker, and consider how an attacker would benefit from a cyberattack and how to make undertaking that attack more expensive. It’s all about working to increase an attacker’s costs, he says.

“As we move into embedded controllers and microdevices, we move into a realm that cybersecurity specialists like me haven’t explored that much yet,” he says. “The hackers haven’t explored it yet either,” but, Borg warns, they will.

“You people are now in the crosshairs; [design] decisions you are making will have powerful security implications. They will in some cases wipe out your competitive advantage, or give you a huge one. Nobody can tell you what to do beyond what I’ve told you—that it’s all about the economics,” he says. “All I can do is make you aware of the world we have moved into, to make you aware that you are now in the crosshairs.”

Bad at Math, Good at Everything Else

Painful exercises in basic arithmetic are a vivid part of our elementary school memories. A multiplication like 3,752 × 6,901 carried out with just pencil and paper for assistance may well take up to a minute. Of course, today, with a cellphone always at hand, we can quickly check that the result of our little exercise is 25,892,552. Indeed, the processors in modern cellphones can together carry out more than 100 billion such operations per second. What’s more, the chips consume just a few watts of power, making them vastly more efficient than our slow brains, which consume about 20 watts and need significantly more time to achieve the same result.
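For the curious, the check is a one-liner; the per-operation time below simply inverts the 100-billion-operations-per-second figure quoted above.

```python
# Quick check of the worked example above.
print(3752 * 6901)            # 25892552
ops_per_second = 100e9        # the figure quoted for a modern phone's processors
print(1 / ops_per_second)     # ~1e-11 s per operation, versus up to a minute by hand
```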

Of course, the brain didn’t evolve to perform arithmetic. So it does that rather badly. But it excels at processing a continuous stream of information from our surroundings. And it acts on that information—sometimes far more rapidly than we’re aware of. No matter how much energy a conventional computer consumes, it will struggle with feats the brain finds easy, such as understanding language and running up a flight of stairs.

If we could create machines with the computational capabilities and energy efficiency of the brain, it would be a game changer. Robots would be able to move masterfully through the physical world and communicate with us in plain language. Large-scale systems could rapidly harvest large volumes of data from business, science, medicine, or government to detect novel patterns, discover causal relationships, or make predictions. Intelligent mobile applications like Siri or Cortana would rely less on the cloud. The same technology could also lead to low-power devices that can support our senses, deliver drugs, and emulate nerve signals to compensate for organ damage or paralysis.

But isn’t it much too early for such a bold attempt? Isn’t our knowledge of the brain far too limited to begin building technologies based on its operation? I believe that emulating even very basic features of neural circuits could give many commercially relevant applications a remarkable boost. How faithfully computers will have to mimic biological detail to approach the brain’s level of performance remains an open question. But today’s brain-inspired, or neuromorphic, systems will be important research tools for answering it.

A key feature of conventional computers is the physical separation of memory, which stores data and instructions, from logic, which processes that information. The brain holds no such distinction. Computation and data storage are accomplished together locally in a vast network consisting of roughly 100 billion neural cells (neurons) and more than 100 trillion connections (synapses). Most of what the brain does is determined by those connections and by the manner in which each neuron responds to incoming signals from other neurons.

When we talk about the extraordinary capabilities of the human brain, we are usually referring to just the latest addition in the long evolutionary process that constructed it: the neocortex. This thin, highly folded layer forms the outer shell of our brains and carries out a diverse set of tasks that includes processing sensory inputs, motor control, memory, and learning. This great range of abilities is accomplished with a rather uniform structure: six horizontal layers and a million 500-micrometer-wide vertical columns all built from neurons, which integrate and distribute electrically coded information along tendrils that extend from them—the dendrites and axons.

Like all the cells in the human body, a neuron normally has an electric potential of about –70 millivolts between its interior and exterior. This membrane voltage changes when a neuron receives signals from other neurons connected to it. And if the membrane voltage rises to a critical threshold, it forms a voltage pulse, or spike, with a duration of a few milliseconds and a value of about 40 mV. This spike propagates along the neuron’s axon until it reaches a synapse, the complex biochemical structure that connects the axon of one neuron to a dendrite of another. If the spike meets certain criteria, the synapse transforms it into another voltage pulse that travels down the branching dendrite structure of the receiving neuron and contributes either positively or negatively to its cell membrane voltage.
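A common engineering simplification of this integrate-then-spike behavior is the leaky integrate-and-fire model. The sketch below implements such a model in plain Python; the parameter values are illustrative round numbers in the spirit of the figures above, not measurements, and the roughly 40 mV spike itself is treated as an instantaneous event.

```python
# Leaky integrate-and-fire neuron: the membrane voltage leaks toward rest,
# integrates input, and emits a spike when it crosses threshold. Parameter
# values are illustrative, not biological measurements.
dt       = 0.1     # time step, ms
tau      = 10.0    # membrane time constant, ms
v_rest   = -70.0   # resting potential, mV
v_thresh = -55.0   # firing threshold, mV
v = v_rest

spike_times = []
for step in range(2000):                                  # simulate 200 ms
    t = step * dt
    drive = 20.0 if 50 <= t < 150 else 0.0                # injected input, arbitrary units
    v += dt * (v_rest - v + drive) / tau                  # dv/dt = (v_rest - v + I) / tau
    if v >= v_thresh:
        spike_times.append(t)                             # record the spike...
        v = v_rest                                        # ...and reset toward rest

print(f"{len(spike_times)} spikes; first at {spike_times[0]:.1f} ms"
      if spike_times else "no spikes")
```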

Connectivity is a crucial feature of the brain. The pyramidal cell, for example—a particularly important kind of cell in the human neocortex—contains about 30,000 synapses and so 30,000 inputs from other neurons. And the brain is constantly adapting. Neuron and synapse properties—and even the network structure itself—are always changing, driven mostly by sensory input and feedback from the environment.

General-purpose computers these days are digital rather than analog, but the brain is not as easy to categorize. Neurons accumulate electric charge just as capacitors in electronic circuits do. That is clearly an analog process. But the brain also uses spikes as units of information, and these are fundamentally binary: At any one place and time, there is either a spike or there is not. Electronically speaking, the brain is a mixed-signal system, with local analog computing and binary-spike communication. This mix of analog and digital helps the brain overcome transmission losses. Because the spike essentially has a value of either 0 or 1, it can travel a long distance without losing that basic information; it is also regenerated when it reaches the next neuron in the network.

Another crucial difference between brains and computers is that the brain accomplishes all its information processing without a central clock to synchronize it. Although we observe synchronization events—brain waves—they are self-organized, emergent products of neural networks. Interestingly, modern computing has started to adopt brainlike asynchronicity, to help speed up computation by performing operations in parallel. But the degree and the purpose of parallelism in the two systems are vastly different.

Can We Quantify Machine Consciousness?

Imagine that at some time in the not-too-distant future, you’ve bought a smartphone that comes bundled with a personal digital assistant (PDA) living in the cloud. You assign a sexy female voice to the PDA and give it access to all of your emails, social media accounts, calendar, photo album, contacts, and other bits and flotsam of your digital life. She—for that’s how you quickly think of her—knows you better than your mother, your soon-to-be ex-wife, your friends, or your therapist. Her command of English is flawless; you have endless conversations about daily events; she gets your jokes. She is the last voice you hear before you drift off to sleep and the first upon awakening. You panic when she’s off-line. She becomes indispensable to your well-being and so, naturally, you fall in love. Occasionally, you wonder whether she truly reciprocates your feelings and whether she is even capable of experiencing anything at all. But the warm, husky tone of her voice and her ability to be that perfect foil to your narcissistic desires overcome these existential doubts. Alas, your infatuation eventually cools off after you realize she is carrying on equally intimate conversations with thousands of other customers.

This, of course, is the plot of Her, a 2013 movie in which an anodyne Theodore Twombly falls in love with the software PDA Samantha.

Over the next few decades such a fictional scenario will become real and commonplace. Deep machine learning, speech recognition, and related technologies have dramatically progressed, leading to Amazon’s Alexa, Apple’s Siri, Google’s Now, and Microsoft’s Cortana. These virtual assistants will continue to improve until they become hard to distinguish from real people, except that they’ll be endowed with perfect recall, poise, and patience—unlike any living being.

The availability of such digital simulacra of many qualities we consider uniquely human will raise profound scientific, psychological, philosophical, and ethical questions. These emulations will ultimately upend the way we think about ourselves, about human exceptionalism, and about our place in the great scheme of things.

Here we will survey the intellectual lay of the land concerning these coming developments. Our view is that as long as such machines are based on present-day computer architectures, they may act just like people—and we may be tempted to treat them that way—but they will, in fact, feel nothing at all. If computers are built more like the brain is, though, they could well achieve true consciousness.

The faith of our age is faith in the digital computer—programmed properly, it will give us all we wish. Cornucopia. Indeed, smart money in Silicon Valley holds that digital computers will be able to replicate and soon exceed anything and everything that humans are capable of.

But could sufficiently advanced computers ever become conscious? One answer comes from those who subscribe to computationalism, the reigning theory of mind in contemporary philosophy, psychology, and neuroscience. It avers that all mental states—such as your conscious experience of a god-awful toothache or the love you feel for your partner—are computational states. These are fully characterized by their functional relationships to relevant sensory inputs, behavioral outputs, and other computational states in between. That is, brains are elaborate input-output devices that compute and process symbolic representations of the world. Brains are computers, with our minds being the software.

Adherents to computationalism apply these precepts not only to brains and to the behavior they generate but also to the way it feels to be a brain in a particular state. After all, that’s what consciousness is: any subjective feeling, any experience—what we see, hear, feel, remember, think.

Computationalism assumes that my painful experience of a toothache is but a state of my brain in which certain nerve cells are active in response to the infected tooth, leading to my propensity to moan, hold my jaw, and avoid eating on that side of my mouth, my inability to focus on other tasks, and so on. If all of these states are simulated in software on a digital computer, the thinking goes, the system as a whole will not only behave exactly like me but also feel and think exactly like me. That is, consciousness is computable. Explicitly or implicitly, this is one of the central tenets held by the digerati in academe, media, and industry.

In this view, there is nothing more to consciousness than the instantiation of the relevant computational states. Nothing else matters, including how the computations are implemented physically, whether on the hardware of a digital computer or on the squishy stuff inside the skull. According to computationalism, a future Samantha—or even better, an embodied example like Ava in the brilliant, dark movie Ex Machina—will have experiences and feelings just as we do. She will experience sights and sounds, pleasure and pain, love and hate.

The Benefits of Building an Artificial Brain

In the mid-1940s, a few brilliant people drew up the basic blueprints of the computer age. They conceived a general-purpose machine based on a processing unit made up of specialized subunits and registers, which operated on stored instructions and data. Later inventions—transistors, integrated circuits, solid-state memory—would supercharge this concept into the greatest tool ever created by humankind.

So here we are, with machines that can churn through tens of quadrillions of operations per second. We have voice-recognition-enabled assistants in our phones and homes. Computers routinely thrash us in our ancient games. And yet we still don’t have what we want: machines that can communicate easily with us, understand and anticipate our needs deeply and unerringly, and reliably navigate our world.

Now, as Moore’s Law seems to be starting some sort of long goodbye, a couple of themes are dominating discussions of computing’s future. One centers on quantum computers and stupendous feats of decryption, genome analysis, and drug development. The other, more interesting vision is of machines that have something like human cognition. They will be our intellectual partners in solving some of the great medical, technical, and scientific problems confronting humanity. And their thinking may share some of the fantastic and maddening beauty, unpredictability, irrationality, intuition, obsessiveness, and creative ferment of our own.

In this issue, we consider the advent of neuromorphic computing and its prospects for ushering in a new age of truly intelligent machines. It is already a sprawling enterprise, being propelled in part by massive research initiatives in the United States and Europe aimed at plumbing the workings of the human brain. Parallel engineering efforts are now applying some of that knowledge to the creation of software and specialized hardware that “learn”—that is, get more adept—by repeated exposure to computational challenges.

Brute speed and clever algorithms have already produced machines capable of equaling or besting us at activities we’ve long thought of as deeply human: not just poker and Go but also stock picking, language translation, facial recognition, drug discovery and design, and the diagnosis of several specific diseases. Pretty soon, speech recognition, driving, and flying will be on that list, too.

The emergence of special-purpose hardware, such as IBM’s TrueNorth chips and the University of Manchester’s SpiNNaker, will eventually make the list longer. And yet, our intuition (which for now remains uniquely ours) tells us that even then we’ll be no closer to machines that can, through learning, become capable of making their way in our world in an engaging and yet largely independent way.

To produce such a machine we will have to give it common sense. If you act erratically, for example, this machine will recall that you’re going through a divorce and subtly change the way it deals with you. If it’s trying to deliver a package and gets no answer at your door, but hears a small engine whining in your backyard, it will come around to see if there’s a person (or machine) back there willing to accept the package. Such a machine will be able to watch a motion picture, then decide how good it is and write an astute and insightful review of the movie.

But will this machine actually enjoy the movie? And, just as important, will we be able to know if it does? Here we come inevitably to the looming great challenge, and great puzzle, of this coming epoch: machine consciousness. Machines probably won’t need consciousness to outperform us in almost every measurable way. Nevertheless, deep down we will surely regard them with a kind of disdain if they don’t have it.

Trying to create consciousness may turn out to be the way we finally begin to understand this most deeply mysterious and precious of all human attributes. We don’t understand how conscious experience arises or its purpose in human beings—why we delight in the sight of a sunset, why we are stirred by the Eroica symphony, why we fall in love. And yet, consciousness is the most remarkable thing the universe has ever created. If we, too, manage to create it, it would be humankind’s supreme technological achievement, a kind of miracle that would fundamentally alter our relationship with our machines, our image of ourselves, and the future of our civilization.

We Could Build an Artificial Brain Right Now

Brain-inspired computing is having a moment. Artificial neural network algorithms like deep learning, which are very loosely based on the way the human brain operates, now allow digital computers to perform such extraordinary feats as translating language, hunting for subtle patterns in huge amounts of data, and beating the best human players at Go.

But even as engineers continue to push this mighty computing strategy, the energy efficiency of digital computing is fast approaching its limits. Our data centers and supercomputers already draw megawatts—some 2 percent of the electricity consumed in the United States goes to data centers alone. The human brain, by contrast, runs quite well on about 20 watts, which represents the power produced by just a fraction of the food a person eats each day. If we want to keep improving computing, we will need our computers to become more like our brains.

Hence the recent focus on neuromorphic technology, which promises to move computing beyond simple neural networks and toward circuits that operate more like the brain’s neurons and synapses do. The development of such physical brainlike circuitry is actually pretty far along. Work at my lab and others around the world over the past 35 years has led to artificial neural components like synapses and dendrites that respond to and produce electrical signals much like the real thing.

So, what would it take to integrate these building blocks into a brain-scale computer? In 2013, Bo Marr, a former graduate student of mine at Georgia Tech, and I looked at the best engineering and neuroscience knowledge of the time and concluded that it should be possible to build a silicon version of the human cerebral cortex with the transistor technology then in production. What’s more, the resulting machine would take up less than a cubic meter of space and consume less than 100 watts, not too far from the human brain.

That is not to say creating such a computer would be easy. The system we envisioned would still require a few billion dollars to design and build, including some significant packaging innovations to make it compact. There is also the question of how we would program and train the computer. Neuromorphic researchers are still struggling to understand how to make thousands of artificial neurons work together and how to translate brainlike activity into useful engineering applications.

Still, the fact that we can envision such a system means that we may not be far off from smaller-scale chips that could be used in portable and wearable electronics. These gadgets demand low power consumption, and so a highly energy-efficient neuromorphic chip—even if it takes on only a subset of computational tasks, such as signal processing—could be revolutionary. Existing capabilities, like speech recognition, could be extended to handle noisy environments. We could even imagine future smartphones conducting real-time language translation between you and the person you’re talking to. Think of it this way: In the 40 years since the first signal-processing integrated circuits, Moore’s Law has improved energy efficiency by roughly a factor of 1,000. The most brainlike neuromorphic chips could dwarf such improvements, potentially driving down power consumption by another factor of 100 million. That would bring computations that would otherwise need a data center to the palm of your hand.
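As a back-of-the-envelope reading of that last claim, the snippet below applies the suggested factor of 100 million to a hypothetical 1-megawatt workload; both the workload size and the resulting figure are illustrative, not measurements.

```python
# Back-of-the-envelope reading of the factor quoted above. The 1 MW workload is
# a made-up example, not a measured figure.
neuromorphic_gain = 100_000_000        # suggested additional efficiency factor

data_center_watts = 1_000_000          # hypothetical workload drawing 1 MW today
handheld_watts = data_center_watts / neuromorphic_gain
print(f"{handheld_watts * 1000:.0f} mW")   # 10 mW: data-center work in the palm of your hand
```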