Nvidia on Tuesday announced a new mobile and graphics processor road map, ripping up chip plans it announced one year ago.
Nvidia added a new graphics processor called Pascal, which is a "processor that's essentially a supercomputer," said Jen-Hsun Huang, CEO of Nvidia, during a keynote on Tuesday at the company's GPU Technology Conference, which was webcast from San Jose, California.
The processor, due to ship commercially in 2016, will succeed the current GPU architecture, called Maxwell, now used in the latest PC graphics cards. Missing from the road-map slide shown at the conference, however, was a GPU code-named Volta, which was announced last year as Maxwell's successor and was to feature 3D memory stacking and unified memory.
Nvidia did not explain why it killed plans for release of the original Volta, but the company has retained the code name.
"Volta will be our next GPU after Pascal," Nvidia spokesman Ken Brown wrote in an email message. "Pascal switched places with [Volta]."
Pascal was moved up because Nvidia had a "unique opportunity to create a new GPU architecture right after Maxwell with three key new technologies," Brown said. Those technologies are 3D memory stacking, shared CPU and GPU memory, and NVLink, an interconnect that provides faster throughput.
Nvidia also introduced a mobile processor code-named Erista, which is due out next year. It will succeed the upcoming Tegra K1 chip and be based on the Maxwell graphics processor. Erista's introduction has led to a change of plans for the release of a chip code-named Parker, which was on last year's road map and originally due to succeed the Tegra K1. The K1 chip will be in mobile devices later this year.
"Erista was moved ahead of Parker. We'll provide further updates later," Brown said.
The NVLink interconnect coming in Pascal is a major advance in throughput and will scale faster than traditional PCI-Express technology, Huang said. It can deliver five times the throughput of the PCI-Express pipes in use today, and it will also give the GPU faster access to CPU memory, Huang said.
Bandwidth is becoming a problem as more data gets fed into all types of computing devices, Huang said. Faster connections are needed inside computers to process all the data, and NVLink could help solve that problem.
3D memory stacking is a big step in improving memory bandwidth and building power-efficient chips, Huang said.
Graphics cards are getting too big, and stacking memory chips instead of placing them next to each other is a more efficient use of space, Huang said. Increasing the clock speeds of memory chips would boost performance, too, but that could lead to more power consumption by graphics cards.
The stacked memory chips will be linked by vertical connections called through-silicon vias (TSVs), a technique also used in the emerging Hybrid Memory Cube technology, which stacks DDR3 memory. The TSV links improve throughput for memory transfers.
The unified memory technology in Pascal will make GPU and CPU memories into shared resources. That will increase the amount of memory available in a system. Nvidia already offers CUDA 6 parallel programming tools with memory management features to make GPU memory as readily accessible as CPU memory.
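In CUDA 6, this shared view of memory is exposed through the `cudaMallocManaged` API, which lets both the CPU and GPU touch the same allocation without explicit copies. A minimal sketch (the kernel name and sizes here are illustrative, not from Nvidia's materials):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: doubles each element of the array in place.
__global__ void doubleElements(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1024;
    float *data;

    // cudaMallocManaged returns a pointer usable from both CPU and GPU;
    // the CUDA runtime migrates the data as needed, so the usual
    // cudaMemcpy host-to-device and device-to-host calls disappear.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;  // CPU writes directly

    doubleElements<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();  // wait for the GPU before the CPU reads back

    printf("data[0] = %f\n", data[0]);
    cudaFree(data);
    return 0;
}
```

With pre-Pascal hardware, the runtime performs this migration in software; Pascal's hardware-backed unified memory is meant to make the same programming model faster and to let the pooled CPU and GPU memories count as one larger resource.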