
Monday, June 10, 2019 (Suzaku II)

Opportunities and Challenges at the Intersection of Security and AI

Organizers:
Masanori Hashimoto, Osaka Univ.
Xin Zhang, IBM
Keiichi Maekawa, Renesas Electronics Corp.
Nirmal Ramaswamy, Micron Technology Inc.

As AI has enjoyed rapid progress in recent years, its security implications have also attracted increasing attention. This short course surveys the technology, architecture, and circuit foundations behind AI and security, and provides an outlook on their interactions.

Abstract:
Although artificial intelligence (AI) has been a goal of computing for over fifty years, it is only in the last decade that AI systems, especially those based on machine learning, have been able to beat human beings at complex tasks such as image recognition and playing Go. Applications for AI continue to be developed, and while their success is impressive, they also highlight the need for more computational power, with challenges such as autonomous driving and context-sensitive speech recognition proving more difficult than originally anticipated. These challenges in turn show the need for new circuits and new technologies to enable even more powerful AI. AI systems also highlight the need for more secure computation. With more confidential information online than ever, it is vital both to protect it and to understand what is being done with it. Information can leak from hardware via side channels and from software via hacking. AI can help by identifying security flaws, but it is also subject to manipulation and hidden biases. This talk provides an introduction to the challenges of AI and security, setting the stage for the remainder of the short course.

About Rob Aitken

Rob Aitken is an ARM Fellow and technology lead for ARM Research. He is responsible for the technology direction of ARM research, including identifying disruptive technologies, monitoring the global technology landscape, and coordinating research efforts within and outside of ARM. His research interests include emerging technologies, memory design, design for variability, resilient computing, and statistical design. He has published over 80 technical papers on a wide range of topics, including the impacts of technology scaling, the statistics of memory bit-cell variability, and design for reliability. He holds over 30 US patents. Dr. Aitken joined ARM as part of its acquisition of Artisan Components in 2004. Prior to Artisan, he worked at Agilent and Hewlett-Packard. He holds a Ph.D. from McGill University in Canada, is an IEEE Fellow, and serves on a number of conference and workshop committees, including serving as General Chair for the 2019 Design Automation Conference.

Abstract:
Deep learning is changing not only the technology paradigm in electronics but also society itself through artificial intelligence technologies.
In this lecture, the status of AI and DNN SoCs will first be reviewed from two perspectives: data-center-oriented AI and mobile/embedded AI. This dichotomy clearly shows the possible application areas for emerging future AIs. In particular, mobile and embedded deep-learning hardware, the CNPU, DNPU, and UNPU, will be introduced together with convolutional neural networks (CNNs) and recurrent neural networks (RNNs). In addition, their high efficiency and flexibility, achieved with a “Dynamically Reconfigurable Processor” architecture, will be explained in detail with real chip measurement results.
Secondly, KAIST’s approach to integrating both sides of the brain, the right brain as “approximation and adaptation hardware” and the left brain as a “precise and programmable von Neumann architecture”, will be explained together with a novel design methodology. Deep neural networks and specialized intelligent hardware (mimicking the right brain) capable of statistical processing and learning, and multi-core processors (mimicking the left brain) performing precise computations including software AI, are integrated on the same SoC.

About Hoi-Jun Yoo

Hoi-Jun Yoo is the KAIST ICT Endowed Chair Professor in the School of Electrical Engineering, KAIST. He was a VCSEL pioneer at Bell Communications Research in Red Bank, NJ, USA, and manager of the DRAM design group at Hyundai Electronics, where he designed memories from the 1M DRAM to the 256M SDRAM. Currently, he is a full professor in the Department of Electrical Engineering at KAIST and the director of the System Design Innovation and Application Research Center (SDIA). From 2003 to 2005, he served as the full-time advisor on SoC and next-generation computing to the Minister of the Korean Ministry of Information and Communication.
His current research interests are bio-inspired IC design, network-on-chip, multimedia SoC design, wearable healthcare systems, and high-speed, low-power memory. He has published more than 250 papers and has written or edited five books: “DRAM Design” (1997, Hongneung), “High Performance DRAM” (1999, Hongneung), “Low Power NoC for High Performance SoC Design” (2008, CRC), “Mobile 3D Graphics SoC” (2010, Wiley), and “BioMedical CMOS ICs” (co-edited with Chris Van Hoof, 2010, Springer), as well as many book chapters.
Dr. Yoo received the Order of Service Merit from the Korean government in 2011 for his contribution to the Korean memory industry, the Scientist/Engineer of the Month Award from the Ministry of Education, Science and Technology of Korea in 2010, and the Best Scholarship Award of KAIST in 2011. He also received the Electronic Industrial Association of Korea Award for his contribution to DRAM technology in 1994, the Hynix Development Award in 1995, the Korea Semiconductor Industry Association Award in 2002, and the Best Research of KAIST Award in 2007, and was a co-recipient of the ASP-DAC Design Award in 2001, A-SSCC Outstanding Design Awards in 2005, 2006, 2007, 2010, 2011, and 2014, and DAC/ISSCC Student Design Contest Awards in 2007, 2008, 2010, and 2011. He has served as a member of the executive committees of ISSCC, the Symposium on VLSI Circuits, and A-SSCC. He also served as an IEEE SSCS Distinguished Lecturer (’10-’11) and as TPC chair of ISSCC 2015, ISWC 2010, and A-SSCC 2008. He is an IEEE Fellow.

Abstract:
AI capabilities have been increasing rapidly, driven by deep learning. As AI functionality has improved, the demand for even greater capabilities has grown. Significant improvements in performance and efficiency are required to enable this growth to continue. Further improvements require architectures and hardware designed expressly for AI; reliance on conventional architectures and scaling is insufficient. AI-optimized accelerators will increasingly be needed to improve system capabilities and power/performance. To enable practical accelerator integration, systems must be architected to easily incorporate heterogeneous components. The networking and memory aspects of these systems must evolve to ensure high utilization and efficiency. In this presentation, I will describe these trends, some exemplary innovations that address them, and areas of future research towards architectures and hardware for the AI era.

About Jeffrey L. Burns

Jeffrey L. Burns received his B.S. in Engineering from UCLA, and his M.S. and Ph.D. in Electrical Engineering from U.C. Berkeley. In 1988 he joined the IBM T.J. Watson Research Center and worked in layout automation and processor design. In 1996 he joined the IBM Austin Research Lab where he worked on the first 1 GHz PowerPC and managed the Exploratory VLSI Design group. In 2003 he returned to Watson to co-lead IBM Research’s annual study into the future of IT. He then managed a program exploring a streaming-oriented supercomputer. From mid 2005 until mid 2009 he managed the VLSI Design department. From mid 2009 he was Director, VLSI Systems, and then Director, Systems Architecture and Design, managing activities in VLSI design, design automation, and microprocessor and systems architecture. In early 2018 he became Director, AI Compute, leading a multi-disciplinary research team developing new AI accelerator architectures, chips, and technologies. He is also Director of the IBM Research AI Hardware Center.

Abstract:
Memory has proven to be a major bottleneck in the development of energy-efficient chips for artificial intelligence (AI) edge devices. Recent nonvolatile memory devices not only serve as memory macros, but also enable the development of nonvolatile logic (nvLogic) and computing-in-memory (CIM) for AI edge chips. In this talk, we will review recent trends in nonvolatile memory. Then, we will examine some of the challenges, circuit-device interactions, and recent progress involved in the further development of nonvolatile-memory-based nvLogic and CIM for AI edge chips.

About Meng-Fan Chang

Since 2010, Dr. Chang has authored or co-authored more than 45 top conference papers (including 18 at ISSCC, 15 at the VLSI Symposia, 9 at IEDM, and 5 at DAC). He has served as an associate editor for IEEE TVLSI and IEEE TCAD, and as a guest editor of the IEEE JSSC. He has served on the technical program committees of ISSCC, IEDM (executive committee and MT chair), DAC (subcommittee chair), ISCAS (track co-chair), A-SSCC, and numerous other international conferences. He has been a Distinguished Lecturer (DL) for the IEEE Solid-State Circuits Society (SSCS) and the Circuits and Systems Society (CASS), a technical committee member (Chair-Elect of NG-TC) of CASS, and a member of the administrative committee (AdCom) of the IEEE Nanotechnology Council. He also served as the Program Director of the Micro-Electronics Program of the Ministry of Science and Technology (MOST) in Taiwan during 2018-2020, and as Associate Executive Director for Taiwan’s National Program of Intelligent Electronics (NPIE) and the NPIE bridge program during 2011-2018. He is the recipient of several prestigious national-level awards in Taiwan, including the Outstanding Research Award of MOST-Taiwan (2018), the Outstanding Electrical Engineering Professor Award (2017), the Academia Sinica Junior Research Investigators Award (2012), and the Ta-You Wu Memorial Award (2011). He is an IEEE Fellow.

Abstract:
Resistive random-access memory (RRAM) devices are two-terminal elements with an inherent memory effect, driven by internal ion redistribution within a solid-state switching medium. As a memory device, RRAM is currently being commercialized for embedded memory and stand-alone data storage applications. RRAM arrays are also extensively studied for future in-memory computing and neuromorphic computing applications due to their ability to simultaneously store weights and process information at the same physical locations. In this talk, we will discuss recent progress in RRAM devices and RRAM-based in-memory and neuromorphic computing systems, from material- and device-level understanding to system-level implementations. Prototype circuits based on RRAM networks can already perform tasks such as feature extraction, data clustering, and image analysis. Hybrid RRAM/CMOS integration efforts and approaches towards a general in-memory computing system will also be discussed.
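The ability to store weights and compute at the same physical location can be illustrated with a back-of-the-envelope model (an editorial sketch, not material from the talk): in a crossbar of programmable conductances, the current summed on each column is, by Ohm's and Kirchhoff's laws, the dot product of the row voltages with that column's conductances, so a vector-matrix multiply happens in one analog step. All values below (array size, conductance range, noise level) are invented for illustration:

```python
import numpy as np

# Illustrative sketch: an RRAM crossbar computes y = v @ G in one analog step.
# Each weight is stored as a device conductance G[i, j] (siemens); applying
# row voltages v (volts) yields column currents by Kirchhoff's current law.
rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductance matrix, 4 rows x 3 columns
v = np.array([0.1, 0.2, 0.0, 0.1])        # read voltages applied to the rows

# Ideal column currents: I_j = sum_i v_i * G[i, j]
i_ideal = v @ G

# Real devices vary; model, say, 5% Gaussian conductance variation
G_noisy = G * (1 + 0.05 * rng.standard_normal(G.shape))
i_noisy = v @ G_noisy

print(i_ideal)
print(np.abs(i_noisy - i_ideal) / np.abs(i_ideal))  # relative error per column
```

The noisy variant hints at why device variability and hybrid RRAM/CMOS calibration matter for system-level accuracy.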

About Wei Lu

Wei Lu is a Professor in the Electrical Engineering and Computer Science department at the University of Michigan and Director of the Lurie Nanofabrication Facility. He received his B.S. in physics from Tsinghua University, Beijing, China, in 1996, and his Ph.D. in physics from Rice University, Houston, TX, in 2003. From 2003 to 2005, he was a postdoctoral research fellow at Harvard University, Cambridge, MA. He joined the faculty of the University of Michigan in 2005. His research interests include resistive random-access memory (RRAM), memristor-based logic circuits, neuromorphic computing systems, aggressively scaled transistor devices, and electrical transport in low-dimensional systems. To date, Prof. Lu has published over 100 journal articles with 20,000 citations and an h-index of 62. He is a recipient of the NSF CAREER award, a co-founder of Crossbar, Inc., and an IEEE Fellow.

Abstract:
Hardware security in mobile and embedded systems is drawing much attention in the context of the rapid growth of the Internet of Things. Because “things” deployed everywhere are more easily accessible, their security threats and vulnerabilities are more critical than those of PCs and servers kept in a room. In particular, the threat of side-channel attacks is non-trivial because they can be mounted with relatively low-cost equipment in a non-destructive manner. Over the last few decades, a variety of side-channel attacks have been reported and countered, and recently such attacks have been applied to general-purpose embedded systems, including AI systems. This talk will start with an overview of research on side-channel attacks, and then introduce state-of-the-art side-channel attacks and countermeasures, including a novel reactive countermeasure that makes it possible to prevent all microprobe-based side-channel attacks.
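To give a flavor of why such attacks are cheap to mount, here is a toy simulation of the classic correlation power analysis (CPA) idea (an editorial sketch, not material from the talk): traces leak the Hamming weight of a key-dependent intermediate plus noise, and correlating each key guess's predicted leakage against the measured traces reveals the key. Real attacks target a nonlinear intermediate such as the AES S-box output; for brevity this toy targets the plaintext-XOR-key value directly, and the key, noise level, and trace count are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)
hw = np.array([bin(x).count("1") for x in range(256)])  # Hamming weight table

SECRET = 0x3C                                   # the key byte to recover
n_traces = 2000
pts = rng.integers(0, 256, n_traces)            # known random plaintext bytes

# Simulated power traces: Hamming weight of (plaintext XOR key) plus noise.
traces = hw[pts ^ SECRET] + rng.normal(0, 1.0, n_traces)

# CPA: for each key guess, correlate its predicted leakage with the traces;
# the correct guess produces the highest correlation.
corr = np.array([np.corrcoef(hw[pts ^ k], traces)[0, 1] for k in range(256)])
best = int(np.argmax(corr))
print(hex(best))
```

With only a few thousand noisy measurements the key byte stands out clearly, which is why unprotected implementations are considered vulnerable even to low-cost adversaries.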

About Naofumi Homma

Naofumi Homma received his Ph.D. degree in information sciences from Tohoku University, Sendai, Japan, in 2001. Since 2016, he has been a Professor at the Research Institute of Electrical Communication, Tohoku University. In 2009-2010 and 2016-2017, he was a visiting professor at Telecom ParisTech in Paris, France. His research interests include computer arithmetic, VLSI design methodology, and hardware security. He has received a number of awards, including the Best Symposium Paper Award at the 2013 IEEE International Symposium on Electromagnetic Compatibility (EMC 2013), the Best Paper Award at the 2014 IACR International Conference on Cryptographic Hardware and Embedded Systems (CHES 2014), and the JSPS Prize in 2018. He served as a Program Co-Chair of the 2017 IACR International Conference on Cryptographic Hardware and Embedded Systems (CHES 2017).

Abstract:
Symmetric-key block ciphers and high-entropy key/ID generation constitute critical components of content protection and data authentication. While AES has emerged as the de facto block cipher in secure systems, equivalent geo-specific ciphers like SMS4 (China) and Camellia (Japan) are increasingly being used in IPsec, WAPI, and TLS. With products servicing global markets, there is a need to support multiple ciphers while meeting tight area/energy constraints. In this presentation, we will describe a unified design that leverages polynomial isomorphism to accelerate AES/SMS4/Camellia using a shared GF((2^4)^2) datapath, enabling 29% area savings over separate implementations. Physically Unclonable Functions (PUFs) and True Random Number Generators (TRNGs) are foundational security primitives underpinning the root of trust across a wide range of computing platforms. Contradictory design strategies for harvesting static and dynamic entropy typically necessitate independent PUF and TRNG circuits, adding to design cost and time-to-market. The second part of this presentation will describe a unified static and dynamic entropy generator leveraging a common entropy source for simultaneous PUF and TRNG operation. We will describe a variety of self-calibration techniques used for run-time segregation of array bitcells into PUF and TRNG candidates, along with entropy extraction techniques to maximize TRNG entropy while stabilizing PUF bits to minimize bit errors across a wide range of operating conditions.
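The static/dynamic entropy distinction can be made concrete with a small simulation of the segregation idea (an editorial sketch, not the presented design; the cell count, evaluation count, stability threshold, and bias distribution are all invented): each bitcell is evaluated many times at enrollment, cells that almost never flip become stable PUF candidates, and cells that flip often are kept as TRNG entropy sources.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_evals = 1024, 100

# Each bitcell has a bias p = Pr(cell resolves to 1); device mismatch makes
# most cells strongly biased (stable) and some nearly 50/50 (noisy).
bias = rng.beta(0.3, 0.3, n_cells)

# Enrollment: evaluate every cell n_evals times and count the ones.
readings = rng.random((n_evals, n_cells)) < bias
ones = readings.sum(axis=0)

# Segregation: cells that flip in at most 5 of 100 reads are PUF candidates
# (static entropy); the rest serve as TRNG sources (dynamic entropy).
stable = (ones <= 5) | (ones >= n_evals - 5)
puf_bits = (ones[stable] >= n_evals // 2).astype(int)  # majority value per cell
trng_cells = ~stable

print(int(stable.sum()), "PUF cells;", int(trng_cells.sum()), "TRNG cells")
```

The real design performs this classification at run time with on-chip calibration circuits rather than offline statistics, but the partitioning principle is the same.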

About Sanu Mathew

Sanu Mathew is a Senior Principal Engineer with the Circuits Research Labs at Intel Corporation, Hillsboro, Oregon, where he heads the security arithmetic circuits research group, responsible for developing special-purpose hardware accelerators for cryptography and security. He received his Ph.D. degree in Electrical and Computer Engineering from State University of New York at Buffalo in 1999. He holds 55 issued patents, has 25 patents pending and has published over 76 conference/journal papers. He is a Fellow of the IEEE.

Abstract:
As the importance of information security increases daily, physical-layer security is becoming as important as upper-layer security. In recent years, owing to technological advancements such as higher-accuracy and lower-cost measuring instruments, faster computers, and increasing storage capacity of digital devices, advanced attacks that were previously difficult to implement have become practical. Furthermore, such threats now extend beyond military applications and diplomatic domains to general commercial products. Hence, in this talk, I would like to introduce the issue of information security degradation through electromagnetic fields, a physical security attack that cannot be easily detected. I will also introduce the mechanism of security degradation by electromagnetic fields, corresponding countermeasures, and standardization trends.

About Yuichi Hayashi

Yuichi Hayashi received the M.S. and Ph.D. degrees in information sciences from Tohoku University, Sendai, Japan, in 2005 and 2009, respectively. He is currently a Professor at the Nara Institute of Science and Technology. His research interests include electromagnetic compatibility and information security. He is the Chair of the EM Information Leakage Subcommittee of IEEE EMC Technical Committee 5. Prof. Hayashi has been recognized through many awards and honors, including the IEEE International Symposium on Electromagnetic Compatibility Best Symposium Paper Award (2013) and the Workshop on Cryptographic Hardware and Embedded Systems Best Paper Award (2014).