3D Vision Evolves at CES 2024: An Interview with Orbbec Co-Founder David Chen


Lidar and 3D vision are becoming increasingly important in the automotive field as companies push to adopt advanced driver assistance systems (ADAS) and autonomous driving.

However, there are more use cases coming for 3D vision including in the realm of industrial automation, healthcare, robotics and more. These fields will also require robust vision systems that allow new innovations to emerge.

During this week’s Consumer Electronics Show (CES) 2024, 3D vision is expected to be a big theme. Electronics360 interviewed David Chen, co-founder and head of products at Orbbec, to explore the company’s latest innovations and why 3D vision is becoming a necessity for numerous use cases beyond just automotive.

1. What trends are you expecting to see coming from CES 2024?

David Chen: We are seeing a groundswell in generative AI for easing human-centric operations. However, computer vision for autonomous machines requires extremely reliable vision systems. Orbbec's customers and partners span robotics, healthtech, consumer and retail, the areas emphasized in the CES keynote speeches.

Our team is here to enable partners looking to apply vision and AI solutions to drive innovation.

2. What are the key themes for Orbbec at CES 2024?

Chen: Orbbec is dedicated to all things 3D and to serving our customers as a leading provider of accurate, precise and reliable vision systems for robotics, industrial automation, healthcare and other fields. Please visit our booth to see innovations from our customers and partners, explore some of our latest technology and talk to our experts about your challenges. We are also offering ODM services to customers across these industries.

As a trusted partner, we want to enable innovation by delivering tailored vision systems and components that meet the diverse needs of companies pushing boundaries with robotics, automation and other cutting-edge technologies.

3. What are the latest advances in 3D technology that Orbbec is going to announce/demonstrate?

Chen: We’ll be showcasing a series of products, developed through our collaborations with Microsoft and NVIDIA, that feature exceptional reliability and manufacturing quality. We’ll also be launching our next-generation Gemini 2 XL stereo vision camera, which boasts impressive capabilities. Additionally, we have reference designs and integrated solutions for both developers and enterprise customers, such as the Persee N1. These integrated products make prototype design more convenient for innovators while also serving as ODM/OEM reference models that demonstrate our design and production standards.

4. How important is 3D vision in indoor/outdoor operations to these sectors?

Chen: Regarding the importance of 3D vision technology for indoor and outdoor settings — it comes down to how vision systems handle environmental light and background noise. Our goal is to extract image data and dimensional measurements despite noisy conditions. 3D vision is the most important approach that can reliably achieve this. So, it’s not just a matter of importance — for certain applications, there is no viable alternative.

5. How does the Gemini 2 XL fit in Orbbec’s product portfolio? How does it compete in the robotics market?

Chen: Unlike traditional stereo vision systems, the Gemini 2 XL is a 3D camera powered by deep learning. This allows it to produce depth maps of exceptional completeness and noise resilience, even in challenging lighting conditions. Having access to robust positional data unlocks new potential for existing AI-based recognition, tracking and other core robotics functions.
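The depth maps such a camera produces are typically consumed as 3D geometry. As a rough illustration (not Orbbec's SDK, and with hypothetical intrinsic parameters), a depth map can be back-projected into a point cloud with the standard pinhole camera model:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an N x 3 point cloud
    using the standard pinhole camera model."""
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid zero-depth pixels

# Hypothetical intrinsics and scene; real values come from the
# camera's factory calibration.
depth = np.full((4, 4), 2.0)  # a flat surface 2 m from the sensor
pts = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=2.0, cy=2.0)
```

In practice the vendor SDK handles calibration and filtering; this sketch only shows the geometric step that turns per-pixel depth into the positional data robotics stacks consume.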

6. At CES 2024, Orbbec will showcase its Persee N1, a 3D camera based on the NVIDIA Jetson platform. How will Persee N1 allow users to experiment with 3D vision? How will it create new platforms for these industries?

Chen: Through conversations with developers and innovators, we realized one of the biggest pain points when prototyping new ideas is integrating suitable hardware. Researchers end up wasting countless hours piecing together makeshift platforms rather than evaluating their algorithms.

The Persee N1 tackles this by packaging NVIDIA compute with our high-performance 3D imaging into an off-the-shelf solution. Now engineers can simply load their code and immediately test concepts instead of battling build headaches. We believe this “innovation accelerator” removes a major bottleneck in breathing life into new autonomous designs.

7. In 2023, the 3D cameras, lidar and radar market made even more strides in playing a role in other use cases outside of the automotive field. Do you expect 3D vision to continue to influence how these use cases evolve for smart cities, traffic management, warehouses, healthcare and more?

Chen: I believe 3D vision technology will become increasingly vital across domains over the next few years. To illustrate why, let’s look at the current shift toward multimodal AI. Most of the latest algorithms require both 2D and 3D visual data to function optimally. Now, some approaches attempt to infer depth from planar images before feeding this information to the model. However, our 3D imaging hardware directly captures accurate, holistic scene geometry to supply AI systems.
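The pattern described here, feeding a model measured depth alongside the 2D image rather than an estimate, often amounts to attaching depth as an extra input channel. A minimal sketch, with hypothetical frame sizes and randomly generated data standing in for real camera output:

```python
import numpy as np

# Hypothetical color frame and registered depth map from a 3D camera.
rgb = np.random.rand(48, 64, 3).astype(np.float32)      # H x W x 3, in [0, 1]
depth = np.random.rand(48, 64).astype(np.float32) * 5.0 # H x W, meters

# Normalize measured depth to [0, 1] and append it as a fourth
# channel, so the model receives captured geometry directly
# instead of depth estimated from the 2D image.
depth_norm = depth / depth.max()
rgbd = np.concatenate([rgb, depth_norm[..., None]], axis=-1)
```

Real pipelines also need the depth map registered to the color frame's viewpoint, which camera SDKs typically provide; the sketch assumes that alignment has already been done.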

Rather than estimating, we provide the definitive positional inputs needed for advanced perception, planning and interaction tasks. So, from this perspective alone, as the prominence of AI continues to grow, the applicability of 3D machine vision will expand in parallel, given its crucial role in capturing the environments that autonomous technology operates within.

8. What do you see as the biggest trends happening for 3D cameras in 2024 and beyond?

Chen: If asked about the biggest growth area on the horizon for 3D cameras, I would certainly respond with robotics and industrial automation — especially robotic manipulators and collaborative robots (cobots). The reason is that current advances in AI will massively elevate the intelligence and capabilities of automation. And as these systems take on more dynamic perception and interaction tasks, the sensor acting as their eyes becomes crucial.
