Five years ago, Nvidia was best known as a maker of chips to power videogame graphics in PCs. Then researchers found its graphics chips were also good at powering deep learning, the software technique behind recent enthusiasm for artificial intelligence.
The discovery made Nvidia into the preferred seller of shovels for the AI gold rush that’s propelling dreams of self-driving cars, delivery drones and software that plays doctor. The company’s stock-market value has risen 10-fold in three years, to more than $100 billion.
That’s made Nvidia, and the market it more-or-less stumbled into, an attractive target. Longtime chip kingpin Intel and a stampede of startups are building chips to power smart machines. Further competition comes from large tech companies designing their own AI chips. Google’s voice recognition and image search now run on in-house chips dubbed “tensor processing units,” while the face-unlock feature in Apple’s new iPhone is powered by a home-grown chip with a “neural engine.”
Nvidia’s latest countermove is counterintuitive. This week the company released as open source the designs for a chip module it made to power deep learning in cars, robots, and smaller connected devices such as cameras. That module, called the DLA, for deep learning accelerator, is somewhat analogous to Apple’s neural engine. Nvidia plans to start shipping it next year in a chip built into a new version of its Drive PX computer for self-driving cars, which Toyota plans to use in its autonomous-vehicle program.
Why give away this valuable intellectual property for free? Deepu Talla, Nvidia’s vice president for autonomous machines, says he wants to help AI chips reach more markets than Nvidia can accommodate itself. While his unit works to put the DLA in cars, robots, and drones, he expects others to build chips that put it into diverse markets ranging from security cameras to kitchen gadgets to medical devices. “There are going to be hundreds of billions of internet of things devices in the future,” says Talla. “We cannot address all the markets out there.”
One risk of helping other companies build new businesses is that they’ll start encroaching on your own. Talla says that doesn’t concern him, because greater use of AI will mean more demand for Nvidia’s other hardware, such as the powerful graphics chips used to train deep-learning software before it is deployed. “There’s no good deed that goes unpunished but net-net it’s a great thing because this will increase the adoption of AI,” says Talla. “We think we can rise higher.”
Mi Zhang, a professor at Michigan State University, calls open sourcing the DLA design a “very smart move.” He guesses that while researchers, startups, and even large companies will be tempted by Nvidia’s designs, they mostly won’t change them radically. That means they are likely to maintain compatibility with Nvidia’s software tools and other hardware, boosting the company’s influence.
Zhang says it makes sense that devices beyond cars and robots have much to gain from new forms of AI chips. He points to a recent project in his research group developing hearing aids that used learning algorithms to filter out noise. Deep-learning software was the best at smartly recognizing what to tune out, but the limitations of hearing-aid-scale computer hardware made it too slow to be practical.
Creating a web of companies building on its chip designs would also help Nvidia undermine efforts by rivals to market AI chips and build ecosystems around them. In a tweet this week, one Intel engineer called Nvidia’s open-source tactic a “devastating blow” to startups working on deep-learning chips.
It might also create new challenges for Intel itself. The company has bought two such startups in the past year: Movidius, focused on image processing, and Mobileye, which makes chips and cameras for automated driving.