Lidar sensors cruise from self-driving cars to digital twins and the metaverse

Lidar technology has gained attention among a slew of advanced technologies promising to create tomorrow’s self-driving cars. But lidar sensors are also playing a prominent role in efforts to build digital twins and support metaverse use cases.

A core aspect of digital twins lies in updating models of the real world with high fidelity and at high frequency. Lidar complements technologies like stereoscopic cameras for capturing 3D data from the physical world that could be piped into digital twins or metaverse applications.

Core lidar technology has been around for nearly 50 years, but until recently it has been expensive to build and complicated to weave into new workflows. All of that is starting to change with the advent of new approaches, cheaper implementations, and more flexible lidar data workflows.

Lidar sensors show promise in autonomous cars and related systems because they measure the distance to objects at high speed and with high precision. They work much like radar but use light rather than radio waves: a laser signal is reflected off objects, and the light that returns is measured. Traditional lidar measures the signal's time of flight, but newer techniques use other properties of light to cut costs, improve precision, or speed reaction time.
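As a back-of-the-envelope illustration of the time-of-flight principle, the distance to a target follows directly from the round-trip travel time of a reflected pulse. The Python sketch below is a simplified example, not any vendor's implementation; real sensors also contend with noise, multiple returns, and detector timing jitter.

```python
# Simplified time-of-flight ranging: distance = (speed of light * round-trip time) / 2
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # meters per second

def tof_distance_m(round_trip_time_s: float) -> float:
    """Estimate the range to a target from the round-trip time of a laser pulse."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2

# A pulse that returns after roughly 667 nanoseconds corresponds to a target about 100 m away.
print(f"{tof_distance_m(667e-9):.1f} m")  # -> 100.0 m
```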

The business case for lidar sensors

The price of these systems has dropped from tens of thousands of dollars a few years ago to tens of dollars for mass-produced sensors embedded into tablets and smartphones such as the iPhone 12 Pro. Apple has kept mum about its lidar source, but one teardown of the first iPad with the tech found a laser scanner from Lumentum and a sensor from Sony combined into a lidar system.

Smaller lidar form factors will enable terrestrial lidar capture to proliferate across indoor spaces and within hard-to-access locations, from pedestrian pathways to corporate campuses, Here Technologies product management director Mark Yao told VentureBeat. His company is taking advantage of breakthroughs in hardware, software, and AI to create applications that automatically extract features and 3D objects from lidar data to automate digital twin capture.

Mixed opportunity

The biggest advantages of lidar compared with cameras include longer range, high 3D resolution, and better performance in situations with bad visibility. At the same time, lidar suffers from a lack of facial recognition capabilities, higher cost, larger data processing requirements, and lack of awareness about the technology’s benefits.

“As a consequence, lidar is often used in combination with other sensors to enable high-quality and high-reliability sensor fusion, especially in the automotive environment,” ABI Research analyst Dominique Bonte told VentureBeat.

Meanwhile, Tesla’s much-watched ability to offer some self-driving capabilities without lidar has also squeezed the market for the sensors, ABI smart mobility and automotive analyst Maite Bezerra said.

Updating digital twins at scale

But in the world of digital twins, lidar may have much room to run. Lidar’s lack of facial recognition capabilities could be a feature rather than a failing in a world of growing privacy concerns around the technology. Indeed, privacy-conscious data is vital for retail and smart city use cases, Seoul Robotics CEO HanBin Lee told VentureBeat.

For example, retail stores are using 3D data to track metrics like how long checkout takes, how customers move through the store, which products they engage with, and for how long. Mercedes-Benz leverages Seoul Robotics’ technology in some of its showrooms to see how visitors interact with the vehicles on display.

German railway company Deutsche Bahn utilizes Here lidar technology for its new Sensors4Rail railway digital twin. The data makes it possible to track changes to objects that may impact operations, such as buildings, poles, and platform edges.

Hitachi ABB Power Grids senior VP of enterprise solutions Bryan Friehauf also sees a significant role for lidar in precisely monitoring electric utility infrastructure. He believes lidar is crucial for collecting dynamic geographic data, such as vegetation growth, erosion, or impacts from climate change.

In Germany, the city of Cologne uses lidar to complement a citywide digital twin. Other data sources include BIM building data, digital terrain model data, and high-resolution photos integrated into ESRI CityEngine. Various models for simulating noise, air pollution, flooding, traffic, and energy usage can improve planning or emergency response.

Lidar can also aid scientists and engineers in building more accurate and realistic digital twins of coasts and oceans for planning fisheries; assessing the impact of climate change on coasts; and engineering offshore infrastructure, such as windmills and oil rigs.

Ocean exploration is also an area of lidar interest, according to Terradepth cofounder and co-CEO Joe Wolfel. “As humans seek to understand more about Earth’s ocean and in an attempt to halt climate change and damage to 98.5% of Earth’s biosphere, maritime data is becoming increasingly important,” he said.

Bathymetric lidar uses green lasers, which penetrate water far better than the near-infrared light used by most terrestrial lidar, to quickly create precise maps at depths of 25 to 50 meters. And companies like Terradepth are mounting lidar on autonomous underwater vehicles that can survey much deeper.

Many flavors of lidar

Lower-cost and more performant lidar is opening up new opportunities for the tech. “Lidar technology has been following the playbook of the semiconductor industry,” AEye senior VP of AI and software Abhijit Thatte told VentureBeat. “Lidars and their components, such as lasers, scanners, optics, receivers, compute platforms, and housings, are becoming faster, smaller, and cheaper with technological improvements and mass production,” he continued.

Algorithms are also getting better at helping lidars decide where and when to focus, increasing the proportion of relevant data captured at higher resolution, longer range, and higher speed while reducing irrelevant data.

The market has seen various flavors of lidar technology. Gartner VP of semiconductors and electronics Gaurav Gupta told VentureBeat that the initial lidar landscape was dominated by players like Velodyne with mechanical scanning lidars, but the focus has now shifted to solid-state designs.

A lidar system pairs a component for reliably scanning a laser signal over an area with a receiver for precisely measuring the returned light. Beam-steering approaches include microelectromechanical system (MEMS) mirrors, proprietary mirrors, flash illumination, and optical phased arrays (OPAs). There are also various sensing approaches based on time-of-flight and frequency-modulation techniques. Gupta said there is no clear technology winner, and he expects different technologies to gain share depending on how well they align with particular use cases and costs.
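For the frequency-modulated approach, range comes from the beat frequency between an outgoing linear chirp and its echo rather than from a pulse's travel time. Below is a simplified sketch of that relationship, assuming an ideal linear chirp and a stationary target; real frequency-modulated continuous wave sensors must also separate out the Doppler shift from moving targets.

```python
# Simplified FMCW ranging: a linear chirp of bandwidth B over duration T produces a
# beat frequency f_beat = (2 * R / c) * (B / T) for a stationary target at range R,
# so R = c * f_beat * T / (2 * B).
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # meters per second

def fmcw_range_m(beat_frequency_hz: float, chirp_bandwidth_hz: float, chirp_duration_s: float) -> float:
    """Estimate target range from the measured beat frequency of a linear chirp."""
    return SPEED_OF_LIGHT_M_PER_S * beat_frequency_hz * chirp_duration_s / (2 * chirp_bandwidth_hz)

# Example: a 1 GHz chirp over 10 microseconds with a 66.7 MHz beat tone -> roughly 100 m.
print(f"{fmcw_range_m(66.7e6, 1e9, 10e-6):.1f} m")
```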

Examples of vendors supporting the various technologies include:

  • Mechanical scanning lidar (Velodyne Lidar, Valeo, Ocular Robotics, Ouster)
  • MEMS and proprietary mirror lidar (LeddarTech, Innoviz, AEye, SosLab, Luminar)
  • Flash lidar (Argo AI, Sense Photonics, Continental, Ibeo Automotive Systems)
  • Optical phased array (Quanergy Systems, Baraja)
  • Frequency-modulated continuous wave (Aeva Technologies, Insight LiDAR, Aurora, SiLC, Analog Photonics)

Need for simplicity

The wide variety of lidar technologies is great for competition and driving down costs. But it can also complicate lidar application development for digital twins and metaverse use cases. While most vendors support common use cases like autonomous driving, digital twin applications create different data workflows and app development processes.

Here’s Yao said different lidar systems require different processes for ingesting, storing, and accessing data. The differences also span features like point cloud densities, scan rates, alignment, and coordinate systems.
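To give a sense of what aligning coordinate systems involves, the sketch below applies a per-sensor rigid transform so that point clouds from differently mounted lidars land in one shared frame. It is a hypothetical illustration with made-up mounting values, not Here's actual pipeline.

```python
import numpy as np

def to_common_frame(points: np.ndarray, rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud from a sensor's local frame into a shared site frame.

    rotation is the sensor's 3x3 mounting rotation and translation its 3-vector
    position relative to the site origin.
    """
    return points @ rotation.T + translation

# Example: a lidar mounted 2 m above the site origin and rotated 90 degrees about the vertical axis.
yaw = np.pi / 2
rotation = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                     [np.sin(yaw),  np.cos(yaw), 0.0],
                     [0.0,          0.0,         1.0]])
translation = np.array([0.0, 0.0, 2.0])
scan = np.random.rand(1000, 3)                          # stand-in for one sensor's scan
aligned = to_common_frame(scan, rotation, translation)  # ready to merge with other sensors' clouds
```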

This process grows more complicated when teams try to combine lidar data with 2D imagery, AEye’s Thatte said. For example, a camera captures all pixels in a 2D image at the same instant, while a lidar captures its points at slightly different times across a scan. Both the depth and the motion of objects must be accounted for, and things become even trickier in low light and adverse weather conditions.
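One common way to handle those timing differences, often called deskewing or motion compensation, is to shift each lidar return according to how far the sensor platform moved between that return's timestamp and a single reference time, such as the camera's exposure. The sketch below is a simplified, hypothetical illustration that assumes constant velocity over one sweep and ignores rotation.

```python
import numpy as np

def deskew_points(points: np.ndarray, timestamps: np.ndarray,
                  velocity: np.ndarray, reference_time: float) -> np.ndarray:
    """Re-express each lidar point as it would appear at a single reference time.

    points: (N, 3) positions captured over one sweep, in the sensor frame
    timestamps: (N,) capture time of each point, in seconds
    velocity: (3,) platform velocity in meters per second, assumed constant over the sweep
    reference_time: the instant (e.g., the camera exposure) to align all points to
    """
    dt = (timestamps - reference_time)[:, np.newaxis]  # (N, 1) time offset of each point
    return points + velocity * dt                      # translate each point by the platform's motion

# Example: a 100 ms sweep aligned to its end time while driving forward at 10 m/s.
points = np.random.rand(5, 3)
timestamps = np.linspace(0.0, 0.1, 5)
compensated = deskew_points(points, timestamps,
                            velocity=np.array([10.0, 0.0, 0.0]), reference_time=0.1)
```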

Vendors are starting to address these challenges with software development kits and platforms that help unify data from different kinds of lidar and camera sensors. For example, Seoul Robotics has created a 3D data integration platform, SENSR2, that normalizes data from all sensors into a common format.

Lee believes this could help provide a middle ground that makes it easier for lidar manufacturers and the companies building lidar apps. “There are so many players in the lidar sensor game, which is excellent because it means that we can customize solutions depending on factors like budget and level of detail required,” Lee said. “But in order to effectively do this, we need to have 3D perception software that is compatible with all of these different sensors.”

Lighting up digital twins

In the long run, lidar improvements and cost reductions promise big gains outside of traditional automotive apps. ABI sees opportunities for the tech in verticals like smart cities, industrial supply chain, and security.

“Many of the lidar suppliers, including Quanergy and Velodyne, are exploring these new use cases and markets as the driverless agenda keeps getting postponed,” Bonte said. He expects the automotive market for lidar to remain subdued until robotaxi services launch widely. ABI predicts only a small volume of SAE Level 4 and Level 5 self-driving cars in the next four to five years, with its analysts forecasting fewer than 500,000 automotive lidar shipments in 2025, rising to 6.5 million in 2030.

In the meantime, ABI foresees plenty of opportunities for fixed uses of lidar. These include traffic monitoring and management (with resolution and robustness gains over thermal imaging), tracking people density and flow for COVID-19 use cases, and managing industrial robots in manufacturing plants and warehouses at scale. Lidar uses could very well grow right along with the many digital twin use cases that continue to emerge across industries and settings.
