
U.S. Slips in New Top500 Supercomputer Ranking

In June, we can look forward to two things: the Belmont Stakes and the first of the twice-yearly TOP500 rankings of supercomputers. This month, a well-known gray and black colt named Tapwrit came in first at Belmont, and a well-known gray and black supercomputer named Sunway TaihuLight came in first on June’s TOP500 list, released today in conjunction with the opening session of the ISC High Performance conference in Frankfurt. Neither was a great surprise.

Tapwrit was the second favorite at Belmont, and Sunway TaihuLight was the clear pick for the number-one position on the TOP500 list, having enjoyed that first-place ranking since June 2016, when it beat out another Chinese supercomputer, Tianhe-2. The TaihuLight, capable of some 93 petaflops in this year’s benchmark tests, was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi, China. Tianhe-2, capable of almost 34 petaflops, was developed by China’s National University of Defense Technology (NUDT), is deployed at the National Supercomputer Center in Guangzhou, and still enjoys the number-two position on the list.

More of a surprise, and perhaps more of a disappointment for some, is that the highest-ranking U.S. contender, the Department of Energy’s Titan supercomputer (17.6 petaflops) housed at Oak Ridge National Laboratory, was edged out of the third position by an upgraded Swiss supercomputer called Piz Daint (19.6 petaflops), installed at the Swiss National Supercomputing Center, part of the Swiss Federal Institute of Technology (ETH) in Zurich.

Not since 1996 has the United States failed to place a supercomputer in one of the first three slots on the TOP500 list. But before we go too far in lamenting the sunset of U.S. supercomputing prowess, we should pause for a moment to consider that the computer that bumped Titan from the number-three position was built by Cray and is stuffed with Intel processors and NVIDIA GPUs, all the creations of U.S. companies.

Even the second-ranking Tianhe-2 is based on Intel processors and co-processors. It’s only the TaihuLight that is truly a Chinese machine, being based on the SW26010, a 260-core processor designed by the National High Performance Integrated Circuit Design Center in Shanghai. And U.S. supercomputers hold five of the 10 highest-ranking positions on the new TOP500 list.

Still, national rivalries seem to have locked the United States into a supercomputer arms race with China, with both nations vying to be the first to reach the exascale threshold—that is, to have a computer that can perform 10¹⁸ floating-point operations per second. China hopes to do so by amassing largely conventional hardware and is slated to have a prototype system ready around the end of this year. The United States, on the other hand, is looking to tackle the problems that come with scaling to that level using novel approaches, which require more research before even a prototype machine can be built. Just last week, the U.S. Department of Energy announced that it was awarding Advanced Micro Devices, Cray, Hewlett Packard, IBM, Intel, and NVIDIA US $258 million to support research toward building an exascale supercomputer. Who will get there first is, of course, up for grabs. But one thing’s for sure: It’ll be a horse race worth watching.

Raspberry Pi Merger With CoderDojo Isn’t All It Seems

This past Friday, the Raspberry Pi Foundation and the CoderDojo Foundation became one. The Raspberry Pi Foundation described it as “a merger that will give many more young people all over the world new opportunities to learn how to be creative with technology.” Maybe. Or maybe not. Before I describe why I’m a bit skeptical, let me first take a moment to explain more about what these two entities are.

The Raspberry Pi Foundation is a charitable organization created in the U.K. in 2009. Its one-liner mission statement says it works to “put the power of digital making into the hands of people all over the world.” In addition to designing and manufacturing an amazingly popular line of inexpensive single-board computers—the Raspberry Pi—the Foundation has also worked very hard at providing educational resources.

The CoderDojo Foundation is an outgrowth of a volunteer-led, community-based programming club established in Cork, Ireland in 2011. That model was later cloned in many other places and can now be found in 63 countries, where local coding clubs operate under the CoderDojo banner.

So both organizations clearly share a keen interest in having young people learn about computers and coding. Indeed, the Raspberry Pi Foundation had earlier merged with Code Club, yet another U.K. organization dedicated to helping young people learn to program computers. With all this solidarity of purpose, it would seem only natural for such entities to team up, or so you might think. Curmudgeon as I am, though, I’d like to share a different viewpoint.

The issue is that, well, I don’t think that the Raspberry Pi is a particularly good vehicle to teach young folks to code. I know that statement will be considered blasphemy in some circles, but I stand by it.

The problem is that for students just getting exposed to coding, the Raspberry Pi is too complicated to use as a teaching tool and too limited to use as a practical tool. If you want to learn physical computing so that you can build something that interacts with sensors and actuators, better to use an 8-bit Arduino. And if you want to learn how to write software, better to do your coding on a normal laptop.

That’s not to say that the Raspberry Pi isn’t a cool gizmo or that some young hackers won’t benefit from using one to build projects—surely that’s true. It’s just not the right place to start in general. Kids are overwhelmingly used to working in macOS or Windows. Do they really need to switch to Linux to learn to code? Of course not. And that just adds a thick layer of complication and expense.

My opinions here are mostly shaped by my (albeit limited) experiences trying to help young folks learn to code, which I’ve been doing during the summer for the past few years as the organizer of a local CoderDojo workshop. I’ve brought in a Raspberry Pi on occasion and shown kids some interesting things you can do with one, for example, turning a Kindle into a cycling computer. But the functionality of the Raspberry Pi doesn’t impress these kids, who just compare it with their smartphones. And the inner workings of the RasPi are as inaccessible to them as the inner workings of their smartphones. So it’s not like you can use a RasPi to help them grasp the basics of digital electronics.

The one experience I had using the Raspberry Pi to teach coding was disastrous. While there were multiple reasons for things not going well, one was that the organizer wanted to have the kids “build their own computers,” which amounted to putting a Raspberry Pi into a case and attaching it to a diminutive keyboard and screen. Yes, kids figured out how to do that quickly enough, but that provided them with a computer that was ill suited for much of anything, especially for learning coding.

So I worry that the recent merger just glosses over the fact that teaching kids to code and putting awesome single-board computers into the hands of makers are really two different exercises. I’m sure Eben Upton and lots of professional educators will disagree with me. But as I see things, channeling fledgling coders into using a Raspberry Pi to learn to program computers is counterproductive, despite surface indications that this is what we should be doing. And to my mind, the recent merger only promises to spread the misperception.

In the Future, Machines Will Borrow Our Brain’s Best Tricks

Steve sits up and takes in the crisp new daylight pouring through the bedroom window. He looks down at his companion, still pretending to sleep. “Okay, Kiri, I’m up.”

She stirs out of bed and begins dressing. “You received 164 messages overnight. I answered all but one.”

In the bathroom, Steve stares at his disheveled self. “Fine, give it to me.”

“Your mother wants to know why you won’t get a real girlfriend.”

He bursts out laughing. “Anything else?”

“Your cholesterol is creeping up again. And there have been 15,712 attempts to hack my mind in the last hour.”

“Good grief! Can you identify the source?”

“It’s distributed. Mostly inducements to purchase a new RF oven. I’m shifting ciphers and restricting network traffic.”

“Okay. Let me know if you start hearing voices.” Steve pauses. “Any good deals?”

“One with remote control is in our price range. It has mostly good reviews.”

“You can buy it.”

Kiri smiles. “I’ll stay in bed and cook dinner with a thought.”

Steve goes to the car and takes his seat.

Car, a creature of habit, pulls out and heads to work without any prodding.

Leaning his head back, Steve watches the world go by. Screw the news. He’ll read it later.

Car deposits Steve in front of his office building and then searches for a parking spot.

Steve walks to the lounge, grabs a roll and some coffee. His coworkers drift in and chat for hours. They try to find some inspiration for a new movie script. AI-generated art is flawless in execution, even in depth of story, but somehow it doesn’t resonate well with humans, much as one generation’s music does not always appeal to the next. AIs simply don’t share the human condition.

But maybe they could if they experienced the world through a body. That’s the whole point of the experiment with Kiri.…

It’s sci-fi now, but by midcentury we could be living in Steve and Kiri’s world. Computing, after about 70 years, is at a momentous juncture. The old approaches, based on CMOS technology and the von Neumann architecture, are reaching their fundamental limits. Meanwhile, massive efforts around the world to understand the workings of the human brain are yielding new insights into one of the greatest scientific mysteries: the biological basis of human cognition.

The dream of a thinking machine—one like Kiri that reacts, plans, and reasons like a human—is as old as the computer age. In 1950, Alan Turing proposed to test whether machines can think, by comparing their conversation with that of humans. He predicted computers would pass his test by the year 2000. Computing pioneers such as John von Neumann also set out to imitate the brain. They had only the simplest notion of neurons, based on the work of neuroscientist Santiago Ramón y Cajal and others in the late 1800s. And the dream proved elusive, full of false starts and blind alleys. Even now, we have little idea how the tangible brain gives rise to the intangible experience of conscious thought.

Today, building a better model of the brain is the goal of major government efforts such as the BRAIN Initiative in the United States and the Human Brain Project in Europe, joined by private efforts such as those of the Allen Institute for Brain Science, in Seattle. Collectively, these initiatives involve hundreds of researchers and billions of dollars.

With systematic data collection and rigorous insights into the brain, a new generation of computer pioneers hopes to create truly thinking machines.

If they succeed, they will transform the human condition, just as the Industrial Revolution did 200 years ago. For nearly all of human history, we had to grow our own food and make things by hand. The Industrial Revolution unleashed vast stores of energy, allowing us to build, farm, travel, and communicate on a whole new scale. The AI revolution will take us one enormous leap further, freeing us from the need to control every detail of operating the machines that underlie modern civilization. And as a consequence of copying the brain, we will come to understand ourselves in a deeper, truer light. Perhaps the first benefits will be in mental health, organizational behavior, or even international relations.

Such machines will also improve our health in general. Imagine a device, whether a robot or your cellphone, that keeps your medical records. Combining this personalized data with a sophisticated model of all the pathways that regulate the human body, it could simulate scenarios and recommend healthy behaviors or medical actions tailored to you. A human doctor can correlate only a few variables at once, but such an app could consider thousands. It would be more effective and more personal than any physician.

Rigetti Launches Full-Stack Quantum Computing Service and Quantum IC Fab

Much of the ongoing quantum computing battle among tech giants such as Google and IBM has focused on developing the hardware necessary to solve problems that are impossible for classical computers. A Berkeley-based startup looks to beat those larger rivals with a one-two combo: a fab designed for speedy creation of better quantum circuits and a quantum computing cloud service that provides early hands-on experience with writing and testing software.

Rigetti Computing recently unveiled its Fab-1 facility, which will enable its engineers to rapidly build new generations of quantum computing hardware based on quantum bits, or qubits. The facility can spit out entirely new designs for 3D-integrated quantum circuits within about two weeks—much faster than the months usually required for academic research teams to design and build new quantum computing chips. It’s not so much a quantum computing chip factory as it is a rapid prototyping facility for experimental designs.

“We’re fairly confident it’s the only dedicated quantum computing fab in the world,” says Andrew Bestwick, director of engineering at Rigetti Computing. “By the standards of industry, it’s still quite small and the volume is low, but it’s designed for extremely high-quality manufacturing of these quantum circuits that emphasizes speed and flexibility.”

But Rigetti is not betting on faster hardware innovation alone. It has also announced its Forest 1.0 service, which enables developers to begin writing quantum software applications and simulating them on a 30-qubit quantum virtual machine. Forest 1.0 is based on Quil—a custom instruction language for hybrid quantum/classical computing—and open-source Python tools intended for building and running Quil programs.
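To give a flavor of what that looks like, here is a minimal sketch using pyQuil, Rigetti’s open-source Python library for Quil (the class and method names follow the early pyQuil API and may differ between versions, so treat them as illustrative): it builds a small entangling program, prints the underlying Quil instructions, and runs it on the quantum virtual machine.

# A minimal sketch, assuming the early pyQuil API (names may differ by version):
# build a tiny Quil program, inspect its instruction text, and run it on the
# hosted quantum virtual machine (QVM).
from pyquil.quil import Program
from pyquil.gates import H, CNOT
from pyquil.api import QVMConnection

# Put qubit 0 into superposition, then entangle it with qubit 1 (a Bell state).
bell = Program(H(0), CNOT(0, 1))
print(bell)   # prints the raw Quil instructions: "H 0" and "CNOT 0 1"

# Simulate the program and measure both qubits over repeated trials.
# The two bits should agree on (nearly) every run: [0, 0] or [1, 1].
qvm = QVMConnection()
print(qvm.run_and_measure(bell, [0, 1], trials=10))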

By signing up for the service, both quantum computing researchers and scientists in other fields will get the chance to begin practicing how to write and test applications that will run on future quantum computers. And it’s likely that Rigetti hopes such researchers from various academic labs or companies could end up becoming official customers.

“We’re a full stack quantum computing company,” says Madhav Thattai, Rigetti’s chief strategy officer. “That means we do everything from design and fabrication of quantum chips to packaging the architecture needed to control the chips, and then building the software so that people can write algorithms and program the system.”

Much still has to be done before quantum computing becomes a practical tool for researchers and companies. Rigetti’s approach to universal quantum computing uses silicon-based superconducting qubits that can take advantage of semiconductor manufacturing techniques common in today’s computer industry. That means engineers can more easily produce the larger arrays of qubits necessary to prove that quantum computing can outperform classical computing—a benchmark that has yet to be reached.

Google researchers hope to demonstrate such “quantum supremacy” over classical computing with a 49-qubit chip by the end of 2017. If they succeed, it would be an “incredibly exciting scientific achievement,” Bestwick says. Rigetti Computing is currently working on scaling up from 8-qubit chips.

But even that huge step forward in demonstrating the advantages of quantum computing would not result in a quantum computer that is a practical problem-solving tool. Many researchers believe that practical quantum computing requires systems to correct the quantum errors that can arise in fragile qubits. Error correction will almost certainly be necessary to achieve the future promise of 100-million-qubit systems that could perform tasks that are currently impractical, such as cracking modern cryptography keys.

Though practical quantum computing may seem a far-off prospect, Rigetti Computing is complementing its long-term strategy with a near-term strategy that can serve clients long before more capable quantum computers arrive. The quantum computing cloud service is one example of that. The startup also believes a hybrid system that combines classical computing architecture with quantum computing chips can solve many practical problems in the short term, especially in the fields of machine learning and chemistry. What’s more, says Rigetti, such hybrid classical/quantum computers can perform well even without error correction.

Global Race Toward Exascale Will Drive Supercomputing

For the first time in 21 years, the United States no longer claimed even the bronze medal. With this week’s release of the latest TOP500 supercomputer ranking, the top three fastest supercomputers in the world are now run by China (with both first- and second-place finishers) and Switzerland. And while the supercomputer horse race is spectacle enough unto itself, a new report on the supercomputer industry highlights broader trends behind both the latest and the last few years of TOP500 rankings.

The report, commissioned last year by RIKEN, Japan’s national science agency, outlines a worldwide race toward exascale computers in which the U.S. sees R&D spending and supercomputer talent pools shrink, Europe jumps into the breach with increased funding, and China pushes hard to become the new global leader, despite a still-small user and industry base ready to use the world’s most powerful supercomputers.

Steve Conway, report co-author and senior vice president of research at Hyperion, says the industry trend in high-performance computing is toward laying groundwork for pervasive AI and big data applications like autonomous cars and machine learning. And unlike more specialized supercomputer applications from years past, the workloads of tomorrow’s supercomputers will likely be mainstream and even consumer-facing applications.

“Ten years ago the rationale for spending on supercomputers was primarily two things: national security and scientific leadership, and I think there are a lot of people who still think that supercomputers are limited to problems like will a proton go left or right,” he says. “But in fact, there’s been strong recognition [of the connections] between supercomputing leadership and industrial leadership.”

“With the rise of big data, high-performance computing has moved to the forefront of research in things like autonomous vehicle design, precision medicine, deep learning, and AI,” Conway says. “And you don’t have to ask supercomputing companies if this is true. Ask Google and Baidu. There’s a reason why Facebook has already bought 26 supercomputers.”

As the 72-page Hyperion report notes, “IDC believes that countries that fail to fund development of these future leadership-class supercomputers run a high risk of falling behind other highly developed countries in scientific innovation, with later harmful consequences for their national economies.” (The authors wrote the report in 2016, when they were part of the industry research group IDC; this year they spun off to form the research firm Hyperion.)

Conway says that solutions to problems plaguing HPC systems today will be found in consumer electronics and industry applications of the future. So while operating massively parallel computers with millions of cores may today be a problem facing only the world’s fastest and second-fastest supercomputers—China’s Sunway TaihuLight and Tianhe-2, running on 10.6 million and 3.1 million cores, respectively—that fact won’t hold true forever. And because China is the only country tackling this problem now, it is more likely to develop the technology first, technology that the world will want when cloud computing with millions of cores approaches the mainstream.

The same logic applies to optimizing the ultra-fast data rates that today’s top HPC systems use and minimizing the megawatt electricity budgets they consume. And as the world’s supercomputers approach the exascale, that is, the 1 exaflop or 1000 petaflop mark, new challenges will no doubt arise too.

So, for instance, the report says that rapid shutdown and power-up of cores not in use will be one trick supercomputer designers use to trim back some of their systems’ massive power budgets. And high storage density—in the 100-petabyte range—will become paramount to house the big datasets that these supercomputers consume.

“You could build an exascale system today,” Conway says. “But it would take well over 100 megawatts, which nobody’s going to supply, because that’s over a 100 million dollar electricity bill. So it has to get the electricity usage under control. Everybody’s trying to get it in the 20 to 30 megawatts range. And it has to be dense. Much denser than any computing today. It’s got to fit inside some kind of building. You don’t want the building to be 10 miles long. And also the denser the machine, the faster the machine is going to be too.”
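Conway’s electricity arithmetic is easy to sanity-check. The rough sketch below assumes a flat rate of 10 cents per kilowatt-hour (an illustrative figure, not one from the report) and shows why a 100-megawatt machine implies a bill on the order of $100 million a year, while the 20-to-30-megawatt target brings that down to tens of millions.

# Rough annual electricity cost of running a supercomputer at constant load,
# assuming an (illustrative) industrial rate of $0.10 per kilowatt-hour.
HOURS_PER_YEAR = 24 * 365      # 8,760 hours
DOLLARS_PER_KWH = 0.10         # assumed rate, not a figure from the Hyperion report

def annual_cost_millions(megawatts):
    """Annual electricity bill, in millions of dollars, for a given constant draw."""
    kilowatt_hours = megawatts * 1_000 * HOURS_PER_YEAR
    return kilowatt_hours * DOLLARS_PER_KWH / 1e6

for mw in (100, 30, 20):
    print(f"{mw:>3} MW -> about ${annual_cost_millions(mw):.0f} million per year")
# 100 MW -> about $88 million per year (the "over 100 million dollar" ballpark)
#  30 MW -> about $26 million per year
#  20 MW -> about $18 million per year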

Conway predicts that these and other challenges will be surmounted, and the first exaflop supercomputers will appear on the Top500 list around 2021, while exaflop supercomputing could become commonplace by 2023.

A Search Engine for the Brain Is in Sight

The human brain is smaller than you might expect: One of them, dripping with formaldehyde, fits in a single gloved hand of a lab supervisor here at the Jülich Research Center, in Germany.

Soon, this rubbery organ will be frozen solid, coated in glue, and then sliced into several thousand wispy slivers, each just 60 micrometers thick. A custom apparatus will scan those sections using 3D polarized light imaging (3D-PLI) to measure the spatial orientation of nerve fibers at the micrometer level. The scans will be gathered into a colorful 3D digital reconstruction depicting the direction of individual nerve fibers on larger scales—roughly 40 gigabytes of data for a single slice and up to a few petabytes for the entire brain. And this brain is just one of several to be scanned.

Neuroscientists hope that by combining and exploring data gathered with this and other new instruments they’ll be able to answer fundamental questions about the brain. The quest is one of the final frontiers—and one of the greatest challenges—in science.

Imagine being able to explore the brain the way you explore a website. You might search for the corpus callosum—the stalk that connects the brain’s two hemispheres—and then flip through individual nerve fibers in it. Next, you might view networks of cells as they light up during a verbal memory test, or scroll through protein receptors embedded in the tissue.

Even Ordinary Computer Users Could Access Secret Quantum Computing

You may not need a quantum computer of your own to securely use quantum computing in the future. For the first time, researchers have shown how even ordinary classical computer users could remotely access quantum computing resources online while keeping their quantum computations securely hidden from the quantum computer itself.

Tech giants such as Google and IBM are racing to build universal quantum computers that could someday analyze millions of possible solutions much faster than today’s most powerful classical supercomputers. Such companies have also begun offering online access to their early quantum processors as a glimpse of how anyone could tap the power of cloud-based quantum computing. Until recently, most researchers believed that there was no way for remote users to securely hide their quantum computations from prying eyes unless they too possessed quantum computers. That assumption is now being challenged by researchers in Singapore and Australia through a new paper published in the 11 July issue of the journal Physical Review X.

“Frankly, I think we are all quite surprised that this is possible,” says Joseph Fitzsimons, a theoretical physicist at the Centre for Quantum Technologies at the National University of Singapore and principal investigator on the study. “There had been a number of results showing that it was unlikely for a classical user to be able to hide [delegated quantum computations] perfectly, and I think many of us in the field had interpreted this as evidence that nothing useful could be hidden.”

The technique for helping classical computer users hide their quantum computations relies upon a particular approach known as measurement-based quantum computing. Quantum computing’s main promise lies in leveraging quantum bits (qubits) of information that can exist as both 1s and 0s simultaneously—unlike classical computing bits, which exist as either 1 or 0. That means qubits can simultaneously represent and process many more states of information than classical computing bits.

In measurement-based quantum computing, a quantum computer puts all its qubits into a particular state of quantum entanglement so that any changes to a single qubit affect all the qubits. Next, qubits are individually measured one by one in a certain order that specifies the program being run on the quantum computer. A remote user can provide step-by-step instructions for each qubit’s measurement that encode both the input data and the program being run. Crucially, each measurement depends on the outcome of previous measurements.

Fitzsimons and his colleagues figured out how to exploit this step-wise approach to quantum computing and achieve a new form of “blind quantum computation” security. They showed how remote users relying on classical computers can hide the meaning behind each step of the measurement sequence from the quantum computer performing the computation. That means the owner of the quantum computer cannot tell the role of each measurement step and which qubits were used for inputs, operations, or outputs.
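The flavor of that hiding is easiest to see in earlier blind-quantum-computation schemes, in which the client does have a small quantum device. The toy sketch below illustrates that general idea, not the classical-client protocol in the new paper: for each step the client sends only a masked measurement angle, which looks uniformly random to the server, and keeps the secrets needed to interpret the result.

# Toy sketch of how blind quantum computation masks measurement instructions
# (the flavor of earlier BFK-style protocols, not the classical-client scheme).
# The client wants the server to measure at angle phi but sends
#     delta = phi + theta + r*pi  (mod 2*pi)
# where theta and r are secrets. From the server's side, delta is uniformly
# random, so the measurement sequence reveals nothing about the computation.
import math
import random

def mask_angle(phi):
    """Return (delta, theta, r): the masked instruction plus the client's secrets."""
    theta = random.choice([k * math.pi / 4 for k in range(8)])  # secret rotation
    r = random.randint(0, 1)                                    # secret outcome flip
    delta = (phi + theta + r * math.pi) % (2 * math.pi)
    return delta, theta, r

def unmask_outcome(outcome_bit, r):
    """Client flips the server's reported outcome back using its secret bit r."""
    return outcome_bit ^ r

phi = math.pi / 4                      # the "real" angle encoding this step
delta, theta, r = mask_angle(phi)
print(f"server sees delta = {delta:.3f}; phi = {phi:.3f} stays hidden")

reported = random.randint(0, 1)        # stand-in for the server's reported result
print("client's recovered outcome:", unmask_outcome(reported, r))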

The Real Future of Quantum Computing?

Instead of creating quantum computers based on qubits that can each adopt only two possible states, scientists have now developed a microchip that can generate “qudits” that can each assume 10 or more states, potentially opening up a new way of creating incredibly powerful quantum computers, a new study finds.

Classical computers switch transistors either on or off to symbolize data as ones and zeroes. In contrast, quantum computers use quantum bits, or qubits, which, because of the bizarre nature of quantum physics, can be in a state of superposition where they simultaneously act as both 1 and 0.

The superpositions that qubits can adopt let them each help perform two calculations at once. If two qubits are quantum-mechanically linked, or entangled, they can help perform four calculations simultaneously; three qubits, eight calculations; and so on. As a result, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the known universe, solving certain problems much faster than classical computers. However, superpositions are extraordinarily fragile, making it difficult to work with multiple qubits.
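Those scaling claims are easy to make concrete. A rough sketch, using the commonly cited estimate of about 10⁸⁰ atoms in the observable universe:

# How the joint state space of entangled qubits scales: n qubits span 2**n
# simultaneous amplitudes. Compare with a commonly cited rough estimate of
# the number of atoms in the observable universe (~10**80).
ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80

for n in (2, 3, 10, 300):
    print(f"{n:>3} qubits -> {float(2**n):.3g} simultaneous amplitudes")

ratio = 2**300 / ATOMS_IN_OBSERVABLE_UNIVERSE
print(f"2**300 exceeds the atom count by a factor of about {ratio:.1e}")
# 2**300 is roughly 2e90, about ten billion times the number of atoms.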

Most attempts at building practical quantum computers rely on particles that serve as qubits. However, scientists have long known that they could in principle use qudits, which have more than two states available simultaneously. A quantum computer with two 32-state qudits, for example, would be able to perform as many operations as 10 qubits while skipping the challenges inherent in working with 10 qubits together.

Now scientists have for the first time created a microchip that can generate two entangled qudits, each with 10 states, for 100 dimensions total, more than six entangled qubits could generate. “We have now achieved the compact and easy generation of high-dimensional quantum states,” says study co-lead author Michael Kues, a quantum optics researcher at Canada’s National Institute of Scientific Research (INRS, its French acronym), in Varennes, Quebec.
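The dimension counting behind both comparisons is straightforward: n qudits with d levels each span d^n dimensions, while k qubits span 2^k. A quick check of the numbers:

# State-space dimensions: n qudits with d levels span d**n dimensions,
# while k qubits span 2**k.

# Two 32-state qudits match ten qubits:
print(32**2, "vs", 2**10)     # 1024 vs 1024

# Two entangled 10-state qudits (the new chip) beat six entangled qubits:
print(10**2, "vs", 2**6)      # 100 vs 64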

The researchers developed a photonic chip fabricated using techniques similar to ones used for integrated circuits. A laser fires pulses of light into a micro-ring resonator, a 270-micrometer-diameter circle etched onto silica glass, which in turn emits entangled pairs of photons. Each photon is in a superposition of 10 possible wavelengths or colors.

“For example, a high-dimensional photon can be red and yellow and green and blue, although the photons used here were in the infrared wavelength range,” Kues says. Specifically, one photon from each pair spanned wavelengths from 1534 to 1550 nanometers, while the other spanned from 1550 to 1566 nanometers.

Western Digital WD1402A UART

Gordon Bell is famous for launching the PDP series of minicomputers at Digital Equipment Corp. in the 1960s. These ushered in the era of networked and interactive computing that would come to full flower with the introduction of the personal computer in the 1970s. But while minicomputers as a distinct class now belong to the history books, Bell also invented a lesser known but no less significant piece of technology that’s still in action all over the world: The universal asynchronous receiver/transmitter, or UART.

UARTs are used to let two digital devices communicate with each other by sending bits one at a time over a serial interface without bothering the device’s primary processor with the details.
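Concretely, sending bits one at a time means wrapping each byte in a start bit and a stop bit and shifting it out at an agreed baud rate. The sketch below mimics the classic 8-N-1 framing in software; a real UART does this in hardware at both ends of the line.

# Software sketch of UART-style 8-N-1 framing: one start bit (0), eight data
# bits sent least-significant-bit first, one stop bit (1). The line idles high,
# so the falling edge of the start bit tells the receiver when to begin
# sampling at the agreed baud rate.

def frame_byte(byte):
    """Return the sequence of line levels used to send one byte."""
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data_bits + [1]                      # start + data + stop

def deframe(bits):
    """Recover the byte from a 10-bit frame (no parity, no error checking)."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

frame = frame_byte(ord("A"))   # 0x41
print(frame)                   # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(chr(deframe(frame)))     # A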

Today, more sophisticated serial setups are available, such as the ubiquitous USB standard, but for a time UARTs ruled supreme as the way to, for example, connect modems to PCs. And the simple UART still has its place, not least as the communication method of last resort with a lot of modern network equipment.

The UART was invented because of Bell’s own need to connect a Teletype to a PDP-1, a task that required converting parallel signals into serial signals. He cooked up a circuit that used some 50 discrete components. The idea proved popular and Western Digital, a small company making calculator chips, offered to create a single-chip version of the UART. Western Digital founder Al Phillips still remembers when his vice president of engineering showed him the Rubylith sheets with the design, ready for fabrication. “I looked at it for a minute and spotted an open circuit,” Phillips says. “The VP got hysterical.” Western Digital introduced the WD1402A around 1971, and other versions soon followed.

Low-Cost Pliable Materials Transform Glove Into Sign-to-Text Machine

Researchers have made a low-cost smart glove that can translate the American Sign Language alphabet into text and send the messages via Bluetooth to a smartphone or computer. The glove can also be used to control a virtual hand.

While it could aid the deaf community, its developers say the smart glove could prove really valuable for virtual and augmented reality, remote surgery, and defense uses like controlling bomb-defusing robots.

This isn’t the first gesture-tracking glove. There are companies pursuing similar devices that recognize gestures for computer control, à la the 2002 film Minority Report. Some researchers have also specifically developed gloves that convert sign language into text or audible speech.

What’s different about the new glove is its use of extremely low-cost, pliable materials, says developer Darren Lipomi, a nanoengineering professor at the University of California, San Diego. The components in the system, reported in the journal PLOS ONE, cost less than US $100 in total, Lipomi says. And unlike other gesture-recognizing gloves, which use MEMS sensors made of brittle materials, the soft, stretchable materials in Lipomi’s glove should make it more robust.

The key components of the new glove are flexible strain sensors made of a rubbery polymer. Lipomi and his team make the sensors by cutting narrow strips from a super-thin film of the polymer and coating them with conductive carbon paint.

Then they use a stretchy glue to attach nine sensors on the knuckles of an athletic leather glove, two on each finger and one on the thumb. Thin, stainless steel threads connect each sensor to a circuit board attached at the wrist. The board also has an accelerometer and a Bluetooth transmitter.
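The decoding step is simple enough to sketch: threshold each of the nine strain readings into knuckle bent or straight, then look the resulting nine-element pattern up in a table of letters before sending the text over Bluetooth. The threshold and the two table entries below are invented for illustration, not values from the paper.

# Illustrative sign-to-text decoding for a nine-sensor glove. The threshold
# and the letter table are hypothetical placeholders, not values from the paper.
BENT_THRESHOLD = 0.5   # assumed normalized strain reading separating bent/straight

LETTER_TABLE = {
    (1, 1, 1, 1, 1, 1, 1, 1, 0): "A",   # hypothetical knuckle pattern
    (0, 0, 0, 0, 0, 0, 0, 0, 0): "B",   # hypothetical knuckle pattern
}

def decode(readings):
    """Map nine normalized strain readings to a letter, or '?' if unrecognized."""
    key = tuple(int(r > BENT_THRESHOLD) for r in readings)
    return LETTER_TABLE.get(key, "?")

print(decode([0.9, 0.8, 0.9, 0.7, 0.9, 0.8, 0.9, 0.8, 0.1]))   # -> A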