Nvidia Statement to Los Angeles Times Re: Intel-Nervana 9/22/2017

Artificial intelligence is driving the greatest technology advances known to humankind. From diagnosing skin cancer using a photo to making our roads safer with self-driving cars, AI will automate intelligence and spur a wave of social progress unmatched since the Industrial Revolution.

Deep learning, a groundbreaking AI approach that creates computer software that learns, has an insatiable demand for processing power. AI developers want flexibility in both hardware and software: a programmable processor that delivers maximum throughput and efficiency, and outstanding software and tools that let them quickly train and deploy their networks.
 
Our deep learning platform – comprising both hardware and software – delivers maximum throughput and efficiency to meet the needs of developers and data scientists.
 
Our latest GPU, Tesla V100, is a programmable processor designed specifically for deep learning. Its 21 billion transistors deliver 120 teraflops of deep learning performance, equivalent to the performance of 100 CPUs. It features 640 dedicated AI compute units called Tensor Cores.
 
Our CUDA software, used by more than 500,000 developers, provides incredible versatility for developing neural networks. We also offer additional software and libraries specific to deep learning, and we provide full support for all major deep learning frameworks, including Caffe2, Cognitive Toolkit, MXNet, TensorFlow, Theano, and Torch.
 
Our CUDA GPUs are the most successful parallel processors for deep learning because they deliver a versatile, high-performance solution that allows developers and data scientists to realize AI’s life-changing potential.