Monthly Archives: June 2017

Fractal Design Launches New Tempered Glass Define C Chassis

So many cases on the market today are made to be all things to all people. For many builders, however, that results in a chassis full of empty bays, unused mounts, and excess bulk. Created for those who demand a flexible platform for a powerful ATX or Micro ATX build that wastes no space, the Define C TG Series strikes that balance of capacity and efficiency while opening up the side of the case with a full tempered glass panel.

Smaller than the typical ATX or Micro ATX case, the Define C TG and Define Mini C TG provide an ideal base for builders thanks to their optimized interiors. The open layout offers unobstructed airflow across your core components, with high performance and silent computing in mind at every step.

Extensive cooling support via both air and water is offered to make sure even the most powerful systems can be cooled effectively. Carrying signature Define Series traits, the Define C TG Series brings with it the iconic front panel design, dense sound-dampening material throughout, and ModuVent technology in the top panel. Those removing the ModuVent to add more fans or a radiator can install the new magnetic dust filter in its place, and a built-in power supply shroud offers an unmatched level of cable management.

Our team of engineers in Sweden made sure performance without restrictions was paramount. With innovative design, the Define C TG Series brings your system together in a truly exquisite way, reminding us why we choose Fractal Design.

Define C TG and Define Mini C TG Key Features:

  • Define Series sound dampening with ModuVent™ technology for silent operation in a compact full ATX or Micro ATX form factor
  • Optimized for high airflow and silent computing
  • Tempered glass side panel for a clean looking exterior with full interior visibility
  • Side and front panels are lined with industrial-grade sound dampening material
  • Flexible storage options with room for up to 5 drives
  • Comes with two preinstalled Fractal Design Dynamic X2 GP-12 120 mm fans optimized to deliver maximum airflow while still maintaining a low noise level
  • (Define Mini C TG) Equipped with 5 PCI expansion slots for powerful dual GPU setups
  • Open air designed interior creates an unobstructed airflow path from the front intake to the rear exhaust
  • Easy-to-clean high airflow nylon filters on the front and base with full PSU coverage and front access for convenience
  • Includes optional top filter to prevent dust buildup when ModuVent is removed for additional fan slots
  • Power supply shroud conceals drive cage and excess cabling for an even quieter and cleaner looking interior free of airflow obstructions

Why Hardware Engineers Should Think Like Cybercriminals

The future of cybersecurity is in the hands of hardware engineers. That’s what Scott Borg, director of the U.S. Cyber Consequences Unit, told 130 chief technical officers, engineering directors, and key researchers from MEMS and sensors companies and laboratories Thursday morning.

Borg, speaking at the MEMS and Sensors Technical Congress, held on the campus of Stanford University, warned that “the people in this room are now moving into the crosshairs of cyberhackers in a way that has never happened before.”

And Borg should know. He and his colleagues at the Cyber Consequences Unit (a nonprofit research institute) predicted the Stuxnet attack and some major developments in cybercrime over the last 15 years.

Increasingly, hackers are focusing on hardware rather than software, particularly industrial equipment, he indicated.

“Initially,” he said, “they focused on operations control, monitoring different locations from a central site. Then they moved to process control, including programmable logic controllers and local networks. Then they migrated to embedded devices and the ability to control individual pieces of equipment. Now they are migrating to the actual sensors, the MEMS devices.”

“You can imagine countless attacks manipulating physical things,” Borg said. And imagining those things definitely keeps him up at night—it’s not easy being a cybersecurity guru.

“Yesterday,” he said, while on a tour of a nanofab facility, “I saw tanks full of dangerous chemicals, controlled by computers moving things in and out. I immediately thought about which would be the prevailing direction of wind and how you could rupture the tanks with a cyberattack. Whenever I look at an appliance, I think what could be done to it that causes maximum damage and embarrassment.”

The move to attacking hardware, just like any cyberattack, comes because hackers are thinking about the economics, Borg says. Hackers always profit in some way from their attacks, though the gain is not always monetary.

One way hardware hackers can profit from hurting a company is by taking advantage of the resulting drop in its stock price; stock manipulation is a growth area for cybercrime in general, says Borg.

“There is a limit to how much you can steal from credit card fraud; there is no limit to how much you can make in taking a position in a market and making something happen,” Borg says. “You can short a company’s stock in a highly leveraged way, then attack the company in a way that makes stock fall, reinvest on the way down, and multiply your investment hundreds of times. This is a big growth area for cybercrime; it has been done multiple times already, but it is really just starting to get under way. This is going to be a huge area for cybercriminals.”

It is going to be up to engineers to stop this coming hardware cybercrime wave. And it’s not going to be easy because “engineers aren’t as easy to fool as scientists, but they are still really easy to fool.

“Engineers believe in data, in gauges, in measurements. They are a little less easy to fool than scientists in that they build physical systems that operate, and when they fail, they do have to try to figure out why and what real world effects are. But engineers aren’t used to dealing with unkind adversaries. They believe in statistics, where statistical distributions are normal, where probabilities can deal with independent variables. And statistics doesn’t work in a cyberworld. If you are up against a cunning adversary, who will behave in ways outside of normal, it is hard to use any of the techniques we use in the natural world. A cyberadversary will take advantage of unlikely circumstances.”

But, he said, if engineers, particularly design engineers, learn to understand the cybercriminal and think proactively about cyberattacks, they can often improve cybersecurity and do it for free.

“Increasing security isn’t always about layering on security [to a completed system], but about how you implement a certain function in the first place, and that choice often doesn’t cost more,” Borg says. “Decisions that are made in engineering at really fine-grained levels affect the costs of carrying out a cyberattack. Even a small sensor will have consequences for cybersecurity, not always in the immediate device, but as it develops into a product line.”

Engineers, therefore, need to look at their products from the standpoint of the attacker, and consider how an attacker would benefit from a cyberattack and how to make undertaking that attack more expensive. It’s all about working to increase an attacker’s costs, he says.

“As we move into embedded controllers and microdevices, we move into a realm that cybersecurity specialists like me haven’t explored that much yet,” he says. “The hackers haven’t explored it yet either,” but, Borg warns, they will.

“You people are now in the crosshairs; [design] decisions you are making will have powerful security implications. They will in some cases wipe out your competitive advantage, or give you a huge one. Nobody can tell you what to do beyond what I’ve told you—that it’s all about the economics,” he says. “All I can do is make you aware of the world we have moved into, to make you aware that you are now in the crosshairs.”

Bad at Math, Good at Everything Else

Painful exercises in basic arithmetic are a vivid part of our elementary school memories. A multiplication like 3,752 × 6,901 carried out with just pencil and paper for assistance may well take up to a minute. Of course, today, with a cellphone always at hand, we can quickly check that the result of our little exercise is 25,892,552. Indeed, the processors in modern cellphones can together carry out more than 100 billion such operations per second. What’s more, the chips consume just a few watts of power, making them vastly more efficient than our slow brains, which consume about 20 watts and need significantly more time to achieve the same result.
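
For readers who want to try the comparison themselves, here is a minimal Python sketch that checks the pencil-and-paper result and times a simple multiply loop. The throughput it reports depends entirely on the machine it runs on and, being interpreted code, falls far short of what the underlying silicon can actually do.

```python
import time

# Verify the pencil-and-paper example from the text.
assert 3752 * 6901 == 25_892_552

# Rough, machine-dependent estimate of multiply throughput in pure Python.
# (Real silicon does far better: the text cites more than 100 billion such
# operations per second for a modern phone; an interpreted loop mostly
# measures the language, not the chip.)
n = 1_000_000
start = time.perf_counter()
for _ in range(n):
    3752 * 6901
elapsed = time.perf_counter() - start
print(f"{n / elapsed:,.0f} multiplications per second in pure Python")
```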

Of course, the brain didn’t evolve to perform arithmetic. So it does that rather badly. But it excels at processing a continuous stream of information from our surroundings. And it acts on that information—sometimes far more rapidly than we’re aware of. No matter how much energy a conventional computer consumes, it will struggle with feats the brain finds easy, such as understanding language and running up a flight of stairs.

If we could create machines with the computational capabilities and energy efficiency of the brain, it would be a game changer. Robots would be able to move masterfully through the physical world and communicate with us in plain language. Large-scale systems could rapidly harvest large volumes of data from business, science, medicine, or government to detect novel patterns, discover causal relationships, or make predictions. Intelligent mobile applications like Siri or Cortana would rely less on the cloud. The same technology could also lead to low-power devices that can support our senses, deliver drugs, and emulate nerve signals to compensate for organ damage or paralysis.

But isn’t it much too early for such a bold attempt? Isn’t our knowledge of the brain far too limited to begin building technologies based on its operation? I believe that emulating even very basic features of neural circuits could give many commercially relevant applications a remarkable boost. How faithfully computers will have to mimic biological detail to approach the brain’s level of performance remains an open question. But today’s brain-inspired, or neuromorphic, systems will be important research tools for answering it.

A key feature of conventional computers is the physical separation of memory, which stores data and instructions, from logic, which processes that information. The brain holds no such distinction. Computation and data storage are accomplished together locally in a vast network consisting of roughly 100 billion neural cells (neurons) and more than 100 trillion connections (synapses). Most of what the brain does is determined by those connections and by the manner in which each neuron responds to incoming signals from other neurons.

When we talk about the extraordinary capabilities of the human brain, we are usually referring to just the latest addition in the long evolutionary process that constructed it: the neocortex. This thin, highly folded layer forms the outer shell of our brains and carries out a diverse set of tasks that includes processing sensory inputs, motor control, memory, and learning. This great range of abilities is accomplished with a rather uniform structure: six horizontal layers and a million 500-micrometer-wide vertical columns all built from neurons, which integrate and distribute electrically coded information along tendrils that extend from them—the dendrites and axons.

Like all the cells in the human body, a neuron normally has an electric potential of about –70 millivolts between its interior and exterior. This membrane voltage changes when a neuron receives signals from other neurons connected to it. And if the membrane voltage rises to a critical threshold, it forms a voltage pulse, or spike, with a duration of a few milliseconds and a value of about 40 mV. This spike propagates along the neuron’s axon until it reaches a synapse, the complex biochemical structure that connects the axon of one neuron to a dendrite of another. If the spike meets certain criteria, the synapse transforms it into another voltage pulse that travels down the branching dendrite structure of the receiving neuron and contributes either positively or negatively to its cell membrane voltage.
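
The integrate-to-threshold behavior described above is often approximated with a deliberately simplified model, the leaky integrate-and-fire neuron. The sketch below follows that textbook model using the rough figures quoted here (a resting potential near –70 mV and a spike peaking around +40 mV); the time constant, threshold, and input drive are illustrative assumptions rather than measured biological values.

```python
import numpy as np

def simulate_lif(i_input, dt_ms=0.1, tau_ms=20.0, v_rest=-70.0,
                 v_threshold=-55.0, v_spike=40.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    v_rest, is pushed up by the input drive, and emits a spike (reported as
    v_spike, then reset) whenever it crosses v_threshold."""
    v = v_rest
    trace, spike_times_ms = [], []
    for step, drive in enumerate(i_input):
        # Leak toward rest plus contribution from the input (arbitrary units).
        v += (-(v - v_rest) + drive) / tau_ms * dt_ms
        if v >= v_threshold:
            trace.append(v_spike)          # brief spike, peaking near +40 mV
            spike_times_ms.append(step * dt_ms)
            v = v_rest                     # reset after firing
        else:
            trace.append(v)
    return np.array(trace), spike_times_ms

# A constant drive strong enough to cross threshold produces regular spiking.
voltage_trace, spikes = simulate_lif(np.full(2000, 20.0))
print(f"{len(spikes)} spikes in {2000 * 0.1:.0f} ms of simulated time")
```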

Connectivity is a crucial feature of the brain. The pyramidal cell, for example—a particularly important kind of cell in the human neocortex—contains about 30,000 synapses and so 30,000 inputs from other neurons. And the brain is constantly adapting. Neuron and synapse properties—and even the network structure itself—are always changing, driven mostly by sensory input and feedback from the environment.

General-purpose computers these days are digital rather than analog, but the brain is not as easy to categorize. Neurons accumulate electric charge just as capacitors in electronic circuits do. That is clearly an analog process. But the brain also uses spikes as units of information, and these are fundamentally binary: At any one place and time, there is either a spike or there is not. Electronically speaking, the brain is a mixed-signal system, with local analog computing and binary-spike communication. This mix of analog and digital helps the brain overcome transmission losses. Because the spike essentially has a value of either 0 or 1, it can travel a long distance without losing that basic information; it is also regenerated when it reaches the next neuron in the network.
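
A toy simulation makes the point about transmission losses concrete: pass a value through a chain of lossy, noisy stages, once as a raw analog level and once with each stage regenerating an all-or-nothing “spike.” The number of stages, loss factor, and noise level below are arbitrary illustrative assumptions, not a model of any real neural pathway.

```python
import random

random.seed(0)

STAGES = 50        # number of relay stages (hypothetical)
LOSS = 0.9         # each stage passes on only 90% of an analog signal
NOISE = 0.02       # additive noise per stage
THRESHOLD = 0.5    # a regenerating stage restores anything above this to 1.0

analog = 1.0
regenerated = 1.0
for _ in range(STAGES):
    analog = analog * LOSS + random.uniform(-NOISE, NOISE)
    noisy = regenerated * LOSS + random.uniform(-NOISE, NOISE)
    regenerated = 1.0 if noisy > THRESHOLD else 0.0  # all-or-nothing "spike"

print(f"analog value after {STAGES} stages:      {analog:.4f}")   # fades toward zero
print(f"regenerated value after {STAGES} stages: {regenerated:.1f}")  # still 1.0
```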

Another crucial difference between brains and computers is that the brain accomplishes all its information processing without a central clock to synchronize it. Although we observe synchronization events—brain waves—they are self-organized, emergent products of neural networks. Interestingly, modern computing has started to adopt brainlike asynchronicity, to help speed up computation by performing operations in parallel. But the degree and the purpose of parallelism in the two systems are vastly different.

Can We Quantify Machine Consciousness?

Imagine that at some time in the not-too-distant future, you’ve bought a smartphone that comes bundled with a personal digital assistant (PDA) living in the cloud. You assign a sexy female voice to the PDA and give it access to all of your emails, social media accounts, calendar, photo album, contacts, and other bits and flotsam of your digital life. She—for that’s how you quickly think of her—knows you better than your mother, your soon-to-be ex-wife, your friends, or your therapist. Her command of English is flawless; you have endless conversations about daily events; she gets your jokes. She is the last voice you hear before you drift off to sleep and the first upon awakening. You panic when she’s off-line. She becomes indispensable to your well-being and so, naturally, you fall in love. Occasionally, you wonder whether she truly reciprocates your feelings and whether she is even capable of experiencing anything at all. But the warm, husky tone of her voice and her ability to be that perfect foil to your narcissistic desires overcome these existential doubts. Alas, your infatuation eventually cools off after you realize she is carrying on equally intimate conversations with thousands of other customers.

This, of course, is the plot of Her, a 2013 movie in which an anodyne Theodore Twombly falls in love with the software PDA Samantha.

Over the next few decades such a fictional scenario will become real and commonplace. Deep machine learning, speech recognition, and related technologies have dramatically progressed, leading to Amazon’s Alexa, Apple’s Siri, Google’s Now, and Microsoft’s Cortana. These virtual assistants will continue to improve until they become hard to distinguish from real people, except that they’ll be endowed with perfect recall, poise, and patience—unlike any living being.

The availability of such digital simulacra of many qualities we consider uniquely human will raise profound scientific, psychological, philosophical, and ethical questions. These emulations will ultimately upend the way we think about ourselves, about human exceptionalism, and about our place in the great scheme of things.

Here we will survey the intellectual lay of the land concerning these coming developments. Our view is that as long as such machines are based on present-day computer architectures, they may act just like people—and we may be tempted to treat them that way—but they will, in fact, feel nothing at all. If computers are built more like the brain is, though, they could well achieve true consciousness.

The faith of our age is faith in the digital computer—programmed properly, it will give us all we wish. Cornucopia. Indeed, smart money in Silicon Valley holds that digital computers will be able to replicate and soon exceed anything and everything that humans are capable of.

But could sufficiently advanced computers ever become conscious? One answer comes from those who subscribe to computationalism, the reigning theory of mind in contemporary philosophy, psychology, and neuroscience. It avers that all mental states—such as your conscious experience of a god-awful toothache or the love you feel for your partner—are computational states. These are fully characterized by their functional relationships to relevant sensory inputs, behavioral outputs, and other computational states in between. That is, brains are elaborate input-output devices that compute and process symbolic representations of the world. Brains are computers, with our minds being the software.

Adherents to computationalism apply these precepts not only to brains and to the behavior they generate but also to the way it feels to be a brain in a particular state. After all, that’s what consciousness is: any subjective feeling, any experience—what we see, hear, feel, remember, think.

Computationalism assumes that my painful experience of a toothache is but a state of my brain in which certain nerve cells are active in response to the infected tooth, leading to my propensity to moan, hold my jaw, and avoid eating on that side of my mouth, to my inability to focus on other tasks, and so on. If all of these states are simulated in software on a digital computer, the thinking goes, the system as a whole will not only behave exactly like me but also feel and think exactly like me. That is, consciousness is computable. Explicitly or implicitly, this is one of the central tenets held by the digerati in academe, media, and industry.

In this view, there is nothing more to consciousness than the instantiation of the relevant computational states. Nothing else matters, including how the computations are implemented physically, whether on the hardware of a digital computer or on the squishy stuff inside the skull. According to computationalism, a future Samantha—or even better, an embodied example like Ava in the brilliant, dark movie Ex Machina—will have experiences and feelings just as we do. She will experience sights and sounds, pleasure and pain, love and hate.