Workshop 6


3D Image Sensor

Organizer: Dongjae Shin (Samsung Advanced Institute of Technology)

In recent years, 3D image sensors based on various architectures have been developed and presented, driven by ever-increasing demands across a range of applications. From autonomous driving to augmented/virtual reality (AR/VR), 3D image sensors not only add new functionality to existing products but also open up new applications and products. CMOS technology remains at the core of 3D image sensors, as with 2D image sensors, but with distinct technological differences in architecture and materials/processes. In 3D sensor architectures, active illumination technology is as important as sensing technology. The combination of active illumination and the associated sensing presents various architectures that have not been adopted or required by 2D sensors, and it greatly expands the technology demands to realize such systems. Regarding materials and processes, laser/LED-based active illumination inevitably introduces exotic III-V challenges. Single-photon avalanche diodes (SPADs), acting as receiver pixels, can present process challenges, and at both the transmitter and receiver, 3D stacking presents its own unique set of challenges. Technical discussions are required to introduce, assemble, and integrate exotic materials and processes into the framework of existing CMOS technology. Owing to this technological diversity, emerging approaches to 3D image sensors are largely outside the scope of the 2D image sensor community, requiring comprehensive and dedicated technical discussions at the industry and academic levels. This 3D image sensor workshop invites experts from related industries and academia to discuss the challenges and the different proposed solutions that enable 3D sensing.

This workshop invites three groups of experts from the communities of users, developers, and academia. The user group, covering autonomous vehicles and AR/VR, discusses application-specific sensor utilization and deployment scenarios, technology requirements, and the vision for application evolution. The developer group, covering Indirect Time-of-Flight (iToF), Direct Time-of-Flight (dToF), and Frequency-Modulated Continuous-Wave (FMCW) sensing, discusses the architecture, up-to-date status, and future evolution of each technology. The academic group discusses overall technology trends and the long-term vision for key technologies. This workshop features a line-up of speakers representing key technologies in the field of 3D image sensors, aiming to share the latest developments across the community and spark debate on pathfinding for a field that faces significant technology and device-architecture diversity.

About Dongjae Shin

Dongjae Shin is a principal researcher at the Samsung Advanced Institute of Technology (SAIT), where he is currently working on silicon photonics to leverage Samsung's silicon infrastructure for emerging applications, including 3D image sensors and optical interconnects. Since 2002, he has been with Samsung for silicon photonics and optical communication R&D. Prior to Samsung, he was with Bell Labs, NJ, for optical cross-connect R&D. He has two decades of industrial R&D experience, with one book, 75+ papers, and 125+ patents on LiDAR, DRAM I/O, WDM-PON, VLC, and near-field optics. Dongjae received the B.S., M.S., and Ph.D. degrees in physics from the Korea Advanced Institute of Science and Technology (KAIST), Korea, in 1995, 1997, and 2001, respectively. He has been an IEEE Senior Member since 2012.

Presentations

1. Role and Function of LiDAR Sensor for Autonomous Driving: Beyond Autonomous Driving Level 3

Abstract:
With the evolution of electronic components for autonomous driving, automobiles can now realize HDP (Highway Driving Pilot) functions that go beyond driver-assistance functions such as HDA (Highway Driving Assist). This requires securing lane-keeping performance that allows hands-off operation and improving short-distance cut-in performance. Automotive lidar sensors play an important role in obtaining the data necessary for recognizing and discriminating the objects relevant to autonomous driving. In particular, since higher sensing performance is required to implement autonomous driving at level 4 or above, the coverage and sensing distance determined by the number and mounting locations of the lidars become all the more important. To realize level 4 or higher autonomous driving in the coming years, lidar must gain freedom in mounting location through miniaturization and, at the same time, achieve cost reduction across all areas, including the manufacturing process, materials, and assembly. Through these advances, we expect the number of lidar installations per vehicle to increase, enabling high-quality cocoon sensing. To this end, technological innovation at the device level, such as integration and high-efficiency technologies, and silicon photonics technology at the semiconductor level, including transmitting and receiving devices, is required.

2. 3D Sensing Technologies for Immersive Experiences in the Metaverse

Abstract:
Virtual Reality (VR) and Augmented Reality (AR) are fast-evolving fields that are expected to change the ways humans interact with each other and access information in the coming years. Market forecasts estimate that the volumes of VR systems, and later AR systems, will grow exponentially in the coming decade and beyond. The immersive experiences at the center of these applications require accurate sensing of the environment while consuming ultralow power and occupying a tiny footprint. This talk will begin with an overview of VR and AR systems and the special challenges they present to the imaging architect. Next, we will provide an overview of four 3D sensing modalities: structured light, stereo imaging, indirect Time-of-Flight (iToF), and direct Time-of-Flight (dToF). We will describe the basic operating principles of each modality, with an emphasis on how these principles affect the fit of these technologies for future VR and AR systems. We will conclude with a deeper discussion of the device- and package-level challenges.
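For orientation, the core depth relation behind each of these modalities can be summarized in a few lines. The sketch below is illustrative background rather than material from the talk; all function names and numerical values are assumptions.

import math

C = 299_792_458.0  # speed of light (m/s)

def depth_stereo(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo / structured light: depth by triangulation, z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_itof(phase_rad: float, f_mod_hz: float) -> float:
    """Indirect ToF: depth from the phase shift of a modulated wave,
    z = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def depth_dtof(round_trip_s: float) -> float:
    """Direct ToF: depth from the photon round-trip time, z = c * t / 2."""
    return C * round_trip_s / 2.0

# Illustrative numbers only:
print(depth_stereo(focal_px=800, baseline_m=0.05, disparity_px=20))  # ~2.0 m
print(depth_itof(phase_rad=1.0, f_mod_hz=100e6))                     # ~0.24 m
print(depth_dtof(round_trip_s=20e-9))                                # ~3.0 m

Note that the iToF relation is only unambiguous within c / (2 * f_mod), one of the trade-offs that shapes the choice of modality.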


3. Integrated LiDAR Sensors for L4 Autonomous Vehicles

Abstract:
Currently, long-range direct time-of-flight lidars for fully autonomous (L4-L5) vehicles are optomechanical systems that contain custom discrete sensor components and use one or more axes of mechanical scanning. Solid-state 3D lidar sensing has recently been introduced into mobile consumer electronics and some L2-L3 driver-assistance systems. The small size, low cost, and mass production of these sensors were enabled by the integration of sub-ns time-resolved photon-counting arrays with integrated digital aggregation and readout ICs. With large markets, these sensors are expected to evolve rapidly, as we have seen in the past with consumer sensor products such as CMOS cameras. At what point could the integrated sensors be used for L4 autonomous vehicles, and what performance gaps need to be addressed? Through our development of the Waymo Driver, Waymo has developed a deep understanding of the requirements for the lidar component of our sensor system. We use a short-range, wide-field-of-view lidar for maneuvering in dense city environments and a long-range lidar for high-speed driving. The requirements of both systems are currently beyond the capabilities of existing digital photon-counting sensors. We will discuss these gaps and the path we see to transitioning to integrated photon-counting sensors as they evolve, beginning with the short-range perimeter lidar system.

4. Indirect-ToF System for Non-Mobile Application

Abstract:
Recently, indirect time-of-flight (i-ToF) has come into wide use in standalone, high-resolution sensing systems for AR/VR and robotics, in applications such as gesture detection, space modeling, and autonomous driving. For these devices, the most important requirements are weight reduction through miniaturization, reduced heat and longer operating time through low power consumption, and minimization of motion artifacts. In this presentation, we introduce the development direction of i-ToF through pixel technology, design technology, and system-implementation technology. The pixel is the most important factor determining operating distance and resolution: the pixel pitch is shrinking from 7 um to 3.5 um, and a QE of 38% at 940 nm has been achieved to improve performance. In the circuit design, motion artifacts were minimized by raising the readout frame rate, and user convenience was expanded by embedding an interference-compensation algorithm for multiple devices. The high power consumption caused by the increased computation was addressed by embedding the ISP inside the sensor; as a result, power was reduced by about 70% and depth output at 60 fps became possible. 3D information will continue to be essential for discovering new device types and applications. To realize this, the sensor will take on many of the technical areas involved, and through system optimization it will serve users in more practical applications.
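As background for the pixel and readout discussion, the sketch below shows the standard four-phase i-ToF demodulation that turns per-pixel correlation samples into depth. It is a generic illustration under assumed values, not this sensor's actual pipeline.

import math

C = 299_792_458.0  # speed of light (m/s)

def itof_depth(q0: float, q90: float, q180: float, q270: float,
               f_mod_hz: float = 100e6) -> float:
    """Depth from four correlation samples at 0/90/180/270 degrees:
    phase = atan2(Q90 - Q270, Q0 - Q180), depth = c * phase / (4 * pi * f_mod).
    Only unambiguous within c / (2 * f_mod), i.e. about 1.5 m at 100 MHz."""
    phase = math.atan2(q90 - q270, q0 - q180) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod_hz)

# Made-up samples for a target at ~0.37 m (a quarter of the unambiguous range):
print(itof_depth(q0=500, q90=800, q180=500, q270=200))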

5. Time of Flight 3D-Sensing Architectures

Abstract:
This talk will introduce the basic concepts of both direct and indirect time-of-flight before detailing the single-photon avalanche diode and fast-photodiode pixels that underpin these techniques, respectively. Pixel details will include a review of operation, key performance indicators, and the underlying CMOS technologies that enable the state of the art (TCAD, BSI, 3D wafer stacking…). Moving to the system level, range-extraction techniques (histogramming, phase binning) will be presented along with examples of full system integration and full ranging performance, based on STMicroelectronics' SPAD-based multi-zone dToF products and our recently presented fast-photodiode-based, high-resolution iToF solution.
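To make the range-extraction step concrete, the following sketch illustrates histogram-based dToF ranging in the generic sense mentioned above. It is not the STMicroelectronics implementation; the bin width, photon counts, and timings are assumptions.

import numpy as np

C = 299_792_458.0      # speed of light (m/s)
BIN_WIDTH_S = 250e-12  # assumed TDC bin width (250 ps)

def dtof_range(timestamps_s: np.ndarray, n_bins: int = 400) -> float:
    """Accumulate photon arrival times into a histogram over repeated laser
    shots and return the range of the peak bin, range = c * t / 2."""
    hist, edges = np.histogram(timestamps_s, bins=n_bins,
                               range=(0.0, n_bins * BIN_WIDTH_S))
    peak = int(np.argmax(hist))
    t_peak = 0.5 * (edges[peak] + edges[peak + 1])
    return C * t_peak / 2.0

# Made-up data: a return at ~33 ns (about 5 m) on top of uniform ambient counts.
rng = np.random.default_rng(0)
signal = rng.normal(33e-9, 0.3e-9, size=500)    # laser-return timestamps
ambient = rng.uniform(0.0, 100e-9, size=2000)   # background photon timestamps
print(dtof_range(np.concatenate([signal, ambient])))  # ~5 m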

6. Silicon-Based FMCW Imaging for Human-Like Vision

Abstract:
FMCW technology enables a new generation of imaging solutions that can directly capture images of motion to create four-dimensional pictures and movies. Integrating this technology in a compact silicon chip and co-packaging it with a conventional CMOS image sensor can enable a generation of cameras that work like the human eye. This can fuel the next industrial age by bringing AI and robotics technologies together, enabling the deployment of machines in unpredictable and unstructured environments and allowing them to participate in our society and economic growth.
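As a rough illustration of the signal model behind FMCW ranging, the sketch below shows how range and radial velocity are recovered from up- and down-chirp beat frequencies. It is a generic sketch with assumed chirp and beat-frequency values, not the speaker's design.

C = 299_792_458.0  # speed of light (m/s)

def fmcw_range_velocity(f_beat_up_hz: float, f_beat_down_hz: float,
                        chirp_slope_hz_per_s: float,
                        wavelength_m: float = 1550e-9):
    """With a triangular chirp, the range term shifts both beat frequencies
    equally while the Doppler term shifts them in opposite directions:
        f_range   = (f_up + f_down) / 2  ->  R = c * f_range / (2 * slope)
        f_doppler = (f_down - f_up) / 2  ->  v = wavelength * f_doppler / 2
    Sign conventions vary between systems; this is one common choice."""
    f_range = 0.5 * (f_beat_up_hz + f_beat_down_hz)
    f_doppler = 0.5 * (f_beat_down_hz - f_beat_up_hz)
    return (C * f_range / (2.0 * chirp_slope_hz_per_s),
            wavelength_m * f_doppler / 2.0)

# Assumed numbers: 100 MHz/us chirp slope, beat frequencies of 60 and 86 MHz.
print(fmcw_range_velocity(60e6, 86e6, chirp_slope_hz_per_s=100e6 / 1e-6))
# -> roughly (109 m, 10 m/s): range and radial velocity from a single measurement

The per-pixel velocity channel is what makes the output four-dimensional: each point carries motion information alongside its position.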