The enterprise will almost certainly become both more cloud-centric and more
intelligent as the decade unfolds, but of the two trends, a case can be made
that the intelligent technologies under development today will have the
more far-reaching impact on data operations.
Many of these artificially intelligent, machine-learning capabilities
are being built directly into silicon, placing them at the foundation of all
the virtual, abstracted data architectures layered on top. This is also
leading to an upheaval of sorts in the chip industry, as demand for
greater system autonomy and self-service functionality shifts the focus
away from raw processing power toward more nuanced data handling and
coordinated processing functions.
AMD, for one, is finding that a renewed focus on deep learning and
parallel processing is one of the keys to survival in an increasingly
competitive industry. The company recently teamed up with Google to put
its Radeon GPUs to work supporting neural networks and other advanced
constructs, driving performance and streamlining operations across
hyperscale infrastructure. Starting in 2017, Google plans to deploy the
FirePro S9300 x2 GPU to support the Google Compute Engine and Google Cloud
Machine Learning services, according to Forbes. AMD also recently signed
a similar deal with Chinese ecommerce leader Alibaba.
But just as AMD had to butt heads with market leader Intel in the CPU
space, so too must it now go head-to-head with Nvidia, which is
emerging as a leader in enterprise GPU markets. Digital Trends says the
company recently linked up with IBM to develop deep learning capabilities
that pair the Power8 processor with a range of Nvidia GPUs through
common usage of the NVLink interconnect platform. The system
enables data transfer speeds of up to 80 GB/s, more than double what
today's x86 servers achieve with PCI Express. IBM is looking to implement
the technology in its PowerAI platform, which unites several deep
learning frameworks and libraries, such as Caffe and OpenBLAS, under a
single Ubuntu package.
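To put the "more than double" claim in rough perspective, here is a back-of-the-envelope comparison. The PCI Express figure is our assumption (roughly 16 GB/s per direction for a PCIe 3.0 x16 link, a common x86 server configuration), not a number from the article:

```python
# Back-of-the-envelope bandwidth comparison (figures are approximate).
NVLINK_GBPS = 80.0           # NVLink aggregate, per the article (GB/s)
PCIE3_X16_GBPS = 16.0        # assumed: PCIe 3.0 x16, one direction (GB/s)
PCIE3_X16_BIDIR_GBPS = 2 * PCIE3_X16_GBPS  # assumed bidirectional total

ratio = NVLINK_GBPS / PCIE3_X16_BIDIR_GBPS
print(f"NVLink vs. PCIe 3.0 x16 (bidirectional): {ratio:.1f}x")
```

Even against the more generous bidirectional PCIe total, the NVLink figure comes out at 2.5x, consistent with the "more than double" characterization.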
Meanwhile, other intelligent platforms are emerging on the
field-programmable gate array (FPGA), which allows for more adaptable
hardware constructs because it can be reconfigured after
deployment. Enterprise Tech reports that chip designer Xilinx recently
provided HPC cloud provider Nimbix with a range of analytics,
machine-learning and rich-media capabilities under a "reconfigurable
acceleration stack" that streamlines programming for compute-intensive
workloads. The setup gives users access to a newly reconfigured compiler
that supports various OpenCL frameworks, enabling C and C++ kernels to
span FPGAs, CPUs and GPUs working in tandem. At the same time, a new set
of libraries brings in deep neural network support, as well as a
SQL-based compute kernel.
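For a sense of what such a kernel looks like, below is a minimal OpenCL C vector-add kernel of the general kind a tool flow like this can compile for FPGAs, CPUs or GPUs, alongside its reference semantics in plain Python. The kernel is illustrative only; the actual Xilinx stack, its compiler and its library names are not shown here:

```python
# A minimal OpenCL C kernel of the sort such stacks compile for
# FPGA, CPU or GPU targets (illustrative; not the Xilinx toolchain itself).
VECTOR_ADD_KERNEL = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int i = get_global_id(0);   /* one work-item per array element */
    c[i] = a[i] + b[i];
}
"""

def vadd(a, b):
    """Reference semantics of the kernel, in plain Python."""
    return [x + y for x, y in zip(a, b)]
```

The same C-like kernel source can be retargeted across device types, which is the portability argument behind OpenCL-based FPGA programming.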
Meanwhile, InfiniBand is emerging as a key element in intelligent
systems development as well. Mellanox is set to begin shipping a new
architecture that pushes throughput to 200 Gbps, with an eye toward
accelerating machine learning and other HPC functions. As noted by
Computerworld, the HDR InfiniBand platform will debut early next year
across three products: the ConnectX-6 adapter, the Quantum switch and
the LinkX transceiver. The platform can be implemented across any
combination of CPUs, including Power and ARM devices, with up to 40
ports of 200 Gbps connectivity for a total switching capacity of 16 Tbps.
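The quoted figures are consistent if the switch capacity is counted bidirectionally, a common convention in switch specifications (the bidirectional-counting assumption is ours, not the article's):

```python
# Sanity-check the quoted switch capacity (bidirectional counting assumed).
PORTS = 40
PORT_SPEED_GBPS = 200  # HDR InfiniBand, per port, per direction

per_direction_tbps = PORTS * PORT_SPEED_GBPS / 1000  # one-way aggregate
bidirectional_tbps = 2 * per_direction_tbps          # both directions
print(per_direction_tbps, bidirectional_tbps)
```

Forty 200 Gbps ports yield 8 Tbps in each direction, matching the 16 Tbps total only when both directions are summed.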
Thursday, 17 November 2016
Chip Makers Vie for Machine-Learning Dominance