Friday, June 22, 2018 – Machine Learning Today and Tomorrow: Technology, Circuits and System View

Abstract: Machine learning has emerged and continues to evolve as an extremely rich and impactful driver of integrated systems. Given the rapid progression in both algorithmic complexity and application platforms, technology and circuit designers will play a critical role, today and in the future, in maximizing the impact that machine-learning systems can have. To become proactive drivers of the advances that will be made in this area, these communities must understand the complex and dynamic landscape of machine-learning algorithms, their uses, and the technological trends driving their progression. This session aims to provide this background, as well as the preparation needed to engage in the future of machine-learning systems.


Schedule

Abstract: Machine learning plays a critical role in extracting meaningful information out of the zettabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance and big data analytics); in other applications, the goal is to take immediate action based on the data (e.g., robotics and autonomous vehicles). In this talk, we will give an overview of various forms of machine learning and discuss their strengths and weaknesses, both from an algorithm design and a hardware complexity perspective.

Vivienne Sze – MIT

Abstract: Machine learning systems are being deployed across billions of edge devices and in datacenter infrastructure distributed across the planet. This talk seeks to address several questions from a system design perspective. What are the key similarities and differences between machine learning applications at the edge and in the cloud? How do we co-design our systems across the computing stack to ensure success at scale? And how do we even measure success? The talk concludes by identifying several open research problems in this space.

David Brooks – Harvard University/Facebook

Abstract: Deep Learning makes it possible to rapidly develop innovative new applications with good performance, thanks to a variety of tools and hardware platforms that will be briefly presented. However, care must be taken to improve performance and to avoid biased answers, as well as solutions that are unacceptable for reasons of cost, safety, privacy, or ethics. This talk will also present emerging trends that use Machine Learning approaches to generate Machine Learning solutions optimized for particular applications. Will the next generation of hardware and software be automatically generated?

Dr. Denis Dutoit – CEA-LETI
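As a rough illustration of the "Machine Learning generating Machine Learning" trend described in the abstract above, the sketch below runs a random hyperparameter search over a toy design space. This is a minimal sketch, not material from the talk: the search space, the trial budget, and the evaluate() scoring function are hypothetical stand-ins for a real train-and-validate loop.

    # Minimal random-search sketch of AutoML-style model generation.
    # Everything here (search space, budget, scoring) is an illustrative assumption.
    import random

    SEARCH_SPACE = {
        "layers":        [2, 3, 4, 5],
        "width":         [32, 64, 128, 256],
        "learning_rate": [1e-2, 1e-3, 1e-4],
    }

    def evaluate(config):
        # Stand-in for training a candidate model and returning its
        # validation score; a real AutoML system would train here.
        return 1.0 / (1 + abs(config["layers"] - 3) + abs(config["width"] - 128) / 64)

    best, best_score = None, float("-inf")
    for _ in range(50):  # fixed trial budget
        config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(config)
        if score > best_score:
            best, best_score = config, score

    print("best configuration found:", best)

More elaborate search strategies (Bayesian optimization, neural architecture search) replace the random sampling step but keep the same generate-evaluate-select loop.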

Abstract: Machine learning algorithms have been successfully deployed in cloud-centric applications and on digital platforms such as FPGAs and GPUs. However, with the growing need for always-on operation, it is attractive to consider solutions that are tightly coupled to the sensor front-end and exploit mixed-signal techniques to reduce the data volume close to its source. In addition, mixed-signal computing can help lower the energy consumption of small-scale machine learning macros that serve as wake-up triggers for more powerful companion algorithms. Motivated by these opportunities, this talk will cover examples of feature extraction for image and audio processing, as well as mixed-signal circuits for convolutional neural networks.

Boris Murmann – Stanford
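The behavioral model below is a minimal sketch, not taken from the talk, of the mixed-signal computing idea in the abstract above: a dot product is evaluated "in analog" and then corrupted by noise and coarse ADC quantization before re-entering the digital domain. The SNR and ADC resolution values are assumptions chosen purely for illustration.

    # Behavioral model of a mixed-signal multiply-accumulate (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def analog_dot(x, w, snr_db=30.0, adc_bits=6):
        y = x @ w                                      # ideal analog summation
        sigma = np.abs(y).mean() / (10 ** (snr_db / 20))
        y_noisy = y + rng.normal(0.0, sigma, y.shape)  # thermal/mismatch noise
        full_scale = np.abs(y_noisy).max() + 1e-12     # coarse ADC at the output
        step = 2 * full_scale / (2 ** adc_bits)
        return np.round(y_noisy / step) * step

    x = rng.standard_normal(64)        # e.g., a flattened image patch
    w = rng.standard_normal((64, 8))   # eight flattened convolutional filters
    print("digital:", (x @ w)[:3])
    print("analog :", analog_dot(x, w)[:3])

Running both paths side by side is a quick way to see how much accuracy a given noise level and ADC resolution would cost a network layer.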

Organizers

B. DeSalvo – CEA-LETI

J. Deguchi – Toshiba Memory

Naresh Shanbhag – UIUC

Abstract: The fundamental energy-latency-accuracy trade-off in decision-making inference systems is dominated by the processor-memory interface (the memory wall). The Deep In-Memory Architecture (DIMA) breaches this wall by reading functions of multiple bits per precharge cycle, embedding row- and column-pitch-matched analog computations in the periphery of the bitcell array, and managing the SNR of these computations. In doing so, DIMA generates an inference result per read cycle instead of a data word, leading to at least an order-of-magnitude reduction in the energy-delay product. This talk will describe DIMA principles, design challenges, and their solutions via examples of recent IC prototypes.

Naresh Shanbhag – UIUC
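A back-of-the-envelope model makes the energy-delay product argument concrete. The numbers below are purely illustrative assumptions, not figures from the talk: a conventional interface fetches one word per cycle, while a DIMA-style read aggregates several rows per precharge cycle at a modestly higher per-cycle energy.

    # Toy energy-delay product (EDP) comparison; all parameters are assumptions.
    N_WORDS     = 256   # operands needed for one dot product
    ROWS_PER_RD = 16    # rows aggregated per DIMA precharge cycle
    E_READ      = 1.0   # energy per conventional word read (normalized)
    E_DIMA_READ = 2.0   # one DIMA cycle costs more than one read...
    T_CYCLE     = 1.0   # ...but replaces ROWS_PER_RD conventional cycles

    conv_energy, conv_delay = N_WORDS * E_READ, N_WORDS * T_CYCLE
    dima_cycles             = N_WORDS / ROWS_PER_RD
    dima_energy, dima_delay = dima_cycles * E_DIMA_READ, dima_cycles * T_CYCLE

    edp_gain = (conv_energy * conv_delay) / (dima_energy * dima_delay)
    print(f"energy-delay product improvement: {edp_gain:.0f}x")  # 128x here

Even with the assumed 2x energy penalty per DIMA cycle, cutting the cycle count by 16x improves the EDP quadratically in delay, which is where order-of-magnitude claims come from.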

Abstract: Resistive random-access memory (RRAM) is a memory technology that promises high-capacity, non-volatile data storage, low operating voltages, fast programming and read times, single-bit alterability, and ease of integration in the Back-End-Of-Line of advanced CMOS logic. These properties could revolutionize the traditional memory hierarchy and facilitate the implementation of in-memory computing architectures and Deep Learning accelerators. To further improve the connectivity between memory arrays and computing logic, combining 3D Sequential Integration (3DSI) of logic with memory arrays is a promising solution. RRAMs are also promising candidates for implementing energy-efficient bio-inspired synapses, creating a path toward online, real-time unsupervised learning and life-long learning abilities.

Barbara de Salvo – CEA-LETI
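As a minimal sketch under textbook crossbar assumptions, rather than anything specific to this talk, the code below shows how an RRAM array can realize a synaptic weight matrix: each weight maps onto a differential pair of conductances, inputs are applied as word-line voltages, and every bit line sums currents into a dot product via Ohm's and Kirchhoff's laws. The conductance range G_MIN/G_MAX is an assumed device parameter.

    # Idealized RRAM crossbar matrix-vector multiply (no device non-idealities).
    import numpy as np

    G_MIN, G_MAX = 1e-6, 1e-4  # assumed conductance range in siemens

    def weights_to_conductance(w):
        # Map weights in [-1, 1] onto a differential conductance pair.
        w = np.clip(w, -1.0, 1.0)
        g_pos = G_MIN + (G_MAX - G_MIN) * np.maximum(w, 0)
        g_neg = G_MIN + (G_MAX - G_MIN) * np.maximum(-w, 0)
        return g_pos, g_neg

    def crossbar_matvec(v_in, w):
        g_pos, g_neg = weights_to_conductance(w)
        i_out = v_in @ g_pos - v_in @ g_neg  # column current = sum of V*G
        return i_out / (G_MAX - G_MIN)       # rescale current to weight units

    rng = np.random.default_rng(1)
    v = rng.uniform(0, 0.2, size=16)         # read voltages on 16 word lines
    w = rng.uniform(-1, 1, size=(16, 4))     # 16x4 synaptic array
    print("crossbar:", crossbar_matvec(v, w)[:2])
    print("ideal   :", (v @ w)[:2])

Real devices add conductance drift, variability, and limited precision; modeling those effects on top of this skeleton is where the synapse engineering the abstract mentions comes in.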

Abstract: Machine learning technology centered on deep learning has progressed dramatically in recent years, and new applications continue to appear at a rapid pace. In this talk, I will introduce recent progress in applying machine learning to real-world problems in autonomous driving, industrial robotics, and life science. I will also discuss the issues encountered in these applications and their future directions.

Daisuke Okanohara – Preferred Networks

Abstract: In this talk, I will present our end-to-end learning approach using deep neural network models for humanoid robot systems. The first topic is a model for multi-modal integration, consisting of a convolutional neural network and a recurrent neural network that together enable a humanoid robot to manipulate various objects, including soft materials. The second topic is a linguistic communication model for robots based on sequence-to-sequence learning with a recurrent neural network; the model achieves immediate and repeatable responses to linguistic directions. Open problems for robot applications will also be discussed.

Tetsuya Ogata – Waseda University/AIST
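The PyTorch skeleton below is a reconstruction under assumed input shapes and joint counts, not the authors' code, of the kind of architecture the first topic describes: a convolutional encoder extracts features from camera frames, and a recurrent network integrates them with joint angles to predict the next motor command.

    # Skeletal visuomotor CNN+RNN model; all dimensions are assumptions.
    import torch
    import torch.nn as nn

    class VisuomotorRNN(nn.Module):
        def __init__(self, n_joints=7, feat_dim=32, hidden=128):
            super().__init__()
            self.encoder = nn.Sequential(             # CNN over 64x64 grayscale frames
                nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim),
            )
            self.rnn = nn.LSTM(feat_dim + n_joints, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_joints)   # next joint-angle command

        def forward(self, frames, joints):
            # frames: (B, T, 1, 64, 64), joints: (B, T, n_joints)
            B, T = frames.shape[:2]
            feats = self.encoder(frames.reshape(B * T, 1, 64, 64)).reshape(B, T, -1)
            out, _ = self.rnn(torch.cat([feats, joints], dim=-1))
            return self.head(out)

    model = VisuomotorRNN()
    pred = model(torch.randn(2, 10, 1, 64, 64), torch.randn(2, 10, 7))
    print(pred.shape)  # torch.Size([2, 10, 7])

Trained end to end on demonstration sequences, such a model learns the vision-to-motion mapping jointly rather than through hand-designed intermediate representations, which is the core of the end-to-end approach the abstract describes.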

Satellite Workshops

The 2018 Silicon Nanoelectronics Workshop will be co-located with the Symposia on Sunday and Monday, June 17-18, 2018, at the Hilton Hawaiian Village.

The 2018 Spintronics Workshop on LSI will be co-located with the Symposia at a date to be announced.