Intel research shifts focus from hardware and programs to data

Intel pulled back the curtain on Monday on 15 technical papers describing its chip research into transforming computing toward a focus on data moving across the core, edge and endpoints of systems.

Intel described the transition as a move from computing that focuses on hardware and programs to computing that focuses more on data and knowledge. Such a change requires greater energy efficiency and more powerful processing closer to the devices where data is generated, such as image sensors, according to Vivek De, an Intel fellow and the director of Circuit Technology Research for Intel Labs.

The research promises more efficient computation techniques for applications such as robotics, augmented reality, machine vision and video analytics. At endpoints and other locations along the flow of data, there are often limits on compute capacity, memory and power that must be overcome, De said.

Some of the research could eventually be applied directly to the production of new chips, but Intel didn’t share any timeline. “Our research influences what capabilities we choose to include in future products over time,” a spokeswoman said. Intel covered the papers in a company blog post, and they were presented at the 2020 Symposia on VLSI Technology and Circuits.

Intel's technical papers

In one of the 15 technical papers, 11 Intel researchers demonstrated an all-digital binary neural network (BNN) accelerator chip based on a 10nm FinFET (fin field-effect transistor) CMOS (complementary metal-oxide-semiconductor) design. BNNs in power-constrained edge devices have traditionally been analog rather than digital, but analog BNNs make less accurate predictions and are less tolerant of process variations and noise than digital accelerators.
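
To make the idea concrete, here is a minimal Python sketch of the arithmetic a BNN reduces to: with weights and activations constrained to +1/-1, a dot product collapses to XNOR and popcount operations, which is why BNN hardware can be so cheap. This is an illustrative example of the general technique, not Intel's accelerator design.

```python
# Illustrative sketch of the core BNN operation: a dot product over
# +1/-1 values computed with XNOR and a count instead of multipliers.
# Conceptual only; not Intel's circuit.

def binarize(values):
    """Map real-valued inputs to bits: 1 encodes +1, 0 encodes -1."""
    return [1 if v >= 0 else 0 for v in values]

def bnn_dot(activation_bits, weight_bits):
    """Dot product of +1/-1 vectors via XNOR + popcount.

    XNOR is 1 when the two bits agree (+1*+1 or -1*-1, both +1),
    so: dot = (#matches) - (#mismatches) = 2 * matches - n.
    """
    n = len(activation_bits)
    matches = sum(1 for a, w in zip(activation_bits, weight_bits) if a == w)
    return 2 * matches - n

acts = binarize([0.3, -1.2, 0.8, -0.1])   # -> [1, 0, 1, 0]
wts  = binarize([1.0, -0.5, -0.7, 0.2])   # -> [1, 0, 0, 1]
print(bnn_dot(acts, wts))                 # 0: two matches, two mismatches
```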

In this research paper, Intel said it was able to deliver digital energy efficiency approaching that of analog in-memory computing while also providing better scaling for advanced processing. It reported an energy efficiency of 617 trillion operations per second (TOPS) per watt by using compute-near-memory (CNM), dot-product compute and near-threshold-voltage operation.
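
For context, that figure implies an energy cost per operation on the order of femtojoules. A quick back-of-the-envelope check, using only the reported number:

```python
# 617 TOPS/W means 617e12 operations per joule (1 W = 1 J/s).
ops_per_joule = 617e12
energy_per_op = 1 / ops_per_joule
print(f"{energy_per_op:.2e} J per operation")  # ~1.62e-15 J, about 1.6 femtojoules
```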

“The digital BNN design approaches the energy efficiency of analog in-memory techniques while also ensuring deterministic, scalable and precise operation,” the authors wrote. CNM designs increase energy efficiency by interleaving memory sub-arrays with multiply-accumulate (MAC) units, the authors said.
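
The Python sketch below illustrates the CNM idea in the abstract: each memory sub-array computes its own partial multiply-accumulate locally, so only compact partial sums travel across the chip instead of every weight. The chunking scheme here is an assumption for illustration, not the circuit described in the paper.

```python
# Conceptual sketch of compute-near-memory (CNM): each memory
# sub-array performs a local MAC over its own weights, and only the
# small partial sums are moved and reduced.

def cnm_mac(activations, weight_subarrays):
    """Multiply-accumulate with per-sub-array partial sums.

    weight_subarrays: list of (offset, weights) chunks, standing in
    for memory sub-arrays that each sit next to their own MAC logic.
    """
    partial_sums = []
    for offset, weights in weight_subarrays:
        # Local MAC: only this chunk's activations are touched here.
        chunk = activations[offset:offset + len(weights)]
        partial_sums.append(sum(a * w for a, w in zip(chunk, weights)))
    # Only the compact partial sums cross the chip to be reduced.
    return sum(partial_sums)

acts = [1, -1, 1, 1, -1, 1, -1, -1]
subarrays = [(0, [1, 1, -1, 1]), (4, [-1, 1, 1, -1])]
print(cnm_mac(acts, subarrays))  # 2 (partial sums 0 and 2)
```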

Other papers presented included one on doubling local memory capacity for AI, machine learning and deep learning applications, and another on reducing the power needed for deep learning-based video stream analysis.

Discussing the latter paper, De described for Fierce Electronics how a chip for event-driven visual processing might be used with new algorithms to process only visual inputs involving motion.

For example, a surveillance camera and its underlying technology could focus on two people walking across a large parking lot. The aim of the technology is to provide improved image accuracy while alleviating the high compute and memory requirements of visual analytics at the edge.
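
As a rough illustration of how motion-gated processing can work, the sketch below runs an expensive analysis step only on frames where enough pixels have changed. The threshold values and the analyze() stand-in are assumptions for illustration; Intel's algorithms were not detailed.

```python
# Minimal sketch of motion-gated video processing: run the costly
# vision analysis only on frames where pixels changed. Thresholds
# are illustrative assumptions, not values from Intel's paper.
import numpy as np

MOTION_THRESHOLD = 0.02  # fraction of pixels that must change (assumed)

def has_motion(prev_frame, frame, pixel_delta=15):
    """True if enough pixels changed between consecutive grayscale frames."""
    changed = np.abs(frame.astype(int) - prev_frame.astype(int)) > pixel_delta
    return changed.mean() > MOTION_THRESHOLD

def process_stream(frames, analyze):
    """Call the expensive analyze() only when motion is detected."""
    prev = None
    for frame in frames:
        if prev is not None and has_motion(prev, frame):
            analyze(frame)  # heavy vision model runs only on motion
        prev = frame

# Usage: two static frames, then one with a moving region.
f0 = np.zeros((240, 320), dtype=np.uint8)
f1 = f0.copy()
f2 = f0.copy(); f2[50:150, 100:200] = 255
process_stream([f0, f1, f2], analyze=lambda f: print("analyzing frame"))
```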
