The company has been showing off a prototype computer designed to emulate the way the brain makes calculations. It's based on a new architecture that could define how future computers work.
The brain can be seen as an extremely power-efficient biological computer. It takes in vast amounts of data about sights, sounds and smells, all of which it must process in parallel and in real time, without slowing down.
HPE's ultimate goal is to create computer chips that can compute quickly and make decisions based on probabilities and associations, much like how the brain operates. The chips will use learning models and algorithms to deliver approximate results that can be used in decision-making.
HPE is testing its brain-like computing model through a prototype system with circuit boards and memory chips. The computer, shown for the first time at the recent Discover conference in Las Vegas, is designed to operate the way the brain's neurons and synapses do.
HPE's researchers are keen on bringing the parallelism of brain activity to wired circuitry.
"We're mimicking that architecture of parallel computation using our memristor technology and a specially designed architecture," said Cat Graves, scientific researcher at Hewlett Packard Labs.
Memristors are a new type of storage and memory that could help future AI systems understand data and make more use of it. That's different from today's SSDs and DRAM, which just store data. As with synapses, learning and retention in memristor circuits are determined by the characteristics of the current and data flowing through them.
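The synapse analogy can be sketched in a few lines of code. The model below is a toy, not any real device: it assumes a memristor whose conductance drifts in proportion to the charge that has flowed through it, so repeated pulses strengthen the connection much like repeated activity strengthens a synapse. All constants are illustrative.

```python
class Memristor:
    """Toy memristor model (illustrative constants, not a real device):
    conductance drifts with the charge that has flowed through it,
    a loose analogy to synaptic strengthening."""

    def __init__(self, g_min=1e-6, g_max=1e-3, g=5e-4, k=1e-2):
        self.g_min, self.g_max, self.g, self.k = g_min, g_max, g, k

    def apply_voltage(self, v, dt):
        # Current through the device at its present conductance.
        i = self.g * v
        # Conductance update proportional to charge flow (i * dt),
        # clamped to the device's physical range.
        self.g = min(self.g_max, max(self.g_min, self.g + self.k * i * dt))
        return i

m = Memristor()
for _ in range(100):
    m.apply_voltage(1.0, 1e-3)  # repeated positive pulses "potentiate"

print(m.g > 5e-4)  # True: the device has "learned" from the current flow
```

The point of the sketch is that storage and learning happen in the same element: reading and writing are both just current flowing through the device.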
In brains, data is stored in specific neurons or cells, and calculations for tasks like image or speech recognition take place in those cells.
"This has the potential to be incredibly more power efficient, save a lot of time, reduce computing complexity and not be clogging up the bandwidth," Graves said.
In traditional computing, data must travel from storage through memory to the CPU for processing, which wastes valuable computing resources. HPE's architecture is the opposite: computation takes place in the cells where data is stored, as in brains, and connections are established between cells, much like synapses.
With such a structure, calculations can be highly parallel, Graves said. These calculations, called vector-matrix multiplications, lie at the heart of computationally intensive algorithms and applications like image filtering, speech recognition and deep learning.
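The idea can be illustrated with a short NumPy sketch. This is not HPE's actual design, and the grid size is an assumption: a memristor crossbar stores a weight matrix as conductances, input voltages are applied to the rows, and Kirchhoff's current law sums each column in parallel, so the entire vector-matrix multiplication happens in one analog step.

```python
import numpy as np

# Illustrative crossbar sketch (sizes and values are assumptions,
# not HPE's actual configuration).
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-3, size=(128, 64))  # conductances (siemens)
v = rng.uniform(0.0, 0.5, size=128)          # row input voltages (volts)

# Column output currents: every column sums its products at once,
# which is exactly a vector-matrix multiplication, i = G^T v.
i = G.T @ v

# One analog "step" here covers 128 * 64 = 8192 multiply-accumulates.
print(i.shape)
```

In the digital simulation above the multiply-accumulates still run sequentially, of course; the crossbar's appeal is that the physics performs all of them simultaneously.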
The researchers can switch grid setups on the test bed to figure out which configurations work best for different kinds of algorithms.
In one specific memristor configuration, the researchers managed 8,000 calculations in a single clock cycle.
"In a real chip it'll be faster because all the hardware we have on these boards will be integrated into the chip itself," Graves said.
This new chip won't replace general-purpose processors like CPUs or GPUs. Computation in a neuromorphic chip is approximate, based on probabilities, and may not be entirely accurate.
"It's not particularly accurate to the kind of precision level you care about for some kind of applications," Graves said. "For your bank transactions, I wouldn't want my approximate math to be used there."
The chip may act as a co-processor that can bring intelligence to a computer on tasks like image or speech recognition.
HPE's approach is different from that of companies like Qualcomm, which has a software-based approach, and IBM, which relies on a different chip architecture.
It's early days of research for HPE, but Graves is excited for the future of intelligent computers that can process data much like human brains. It'll take a while for HPE to build a chip that can mimic brain activity, however.
"While we don't know all the answers yet, we're starting to glean a few useful insights," Graves said.