
By Stefani Munoz

As AI and the physical world continue to intersect, machines are becoming more adept at perceiving their environments using lidar, radar and 3D vision.

As AI and the physical world intersect and adoption of autonomous technologies increases, one might question how machines and their currently brittle models could possibly perceive the world the way humans do. With the help of sensor technologies such as those used in self-driving vehicles, including lidar, radar and cameras, machines are beginning to gather real-time data that informs decision-making and helps them adapt to real-world scenarios.

Sensor technologies have become so embedded in our everyday lives that we might underestimate their impact. Take, for example, the thermostat: With just a few adjustments, this basic sensor technology dutifully keeps homes and offices at a desirable temperature without much manual intervention.

In the background, however, thermostats rely on bimetallic mechanical or electrical sensors that use thermal expansion to gauge the temperature, then switch an electrical circuit that turns heating or air conditioning on and off to hold the desired temperature. This is only a small example of the types of sensors that improve our lives.
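The logic behind that switching is simple on/off regulation with a small dead band so the relay doesn't chatter. Here is a minimal Python sketch of the idea; the setpoint, dead band and function names are illustrative, not any vendor's actual firmware.

```python
# Minimal sketch of a thermostat's on/off control with hysteresis.
# The setpoint and dead band are illustrative values.

SETPOINT_C = 21.0   # desired temperature
HYSTERESIS_C = 0.5  # dead band that prevents rapid relay switching

def thermostat_step(current_temp_c: float, heater_on: bool) -> bool:
    """Return the heater state for the next control cycle."""
    if current_temp_c < SETPOINT_C - HYSTERESIS_C:
        return True    # too cold: close the circuit, heat on
    if current_temp_c > SETPOINT_C + HYSTERESIS_C:
        return False   # warm enough: open the circuit, heat off
    return heater_on   # inside the dead band: keep the current state
```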

Lidar, radar sensors for AVs

More recently, automotive manufacturers have been pushing for fully autonomous driving with the help of sensors. Some companies are specifically manufacturing lidar (light detection and ranging) sensors to assist with object detection.

Hughes Aircraft Co. is widely credited with introducing lidar technology in the early 1960s, primarily designed for satellite tracking with the help of laser-focused imaging that allowed engineers to calculate distances.

Today, many companies are adopting direct time-of-flight lidar sensors, which use lasers to emit pulses of light that bounce off surrounding surfaces and obstacles. The sensor measures how long each pulse takes to return, thereby determining the distance between sensor and object. Lidar sensors are also capable of creating a “map” of object surfaces as the light pulses strike them.
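The ranging arithmetic itself is straightforward: the pulse covers the sensor-to-object distance twice, so the distance is the speed of light times the round-trip time, divided by two. A quick sketch:

```python
# Direct time-of-flight ranging: a pulse travels out and back, so
# distance = (speed of light x round-trip time) / 2.

C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to the reflecting surface from a measured echo delay."""
    return C_M_PER_S * round_trip_time_s / 2

# Example: an echo arriving 200 ns after the pulse left
print(tof_distance_m(200e-9))  # ~29.98 m
```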


In real-world scenarios, companies use lidar for a variety of applications to enable machines to perceive the world around them, including warehouse management, advanced driver assistance systems, construction projects, pollution modeling and more. Companies such as Mobileye and Daimler are implementing lidar technology in their self-driving prototypes.

For example, Mobileye’s latest EyeQ Ultra SoC uses four classes of proprietary accelerators, known as XNN, PMA, VMP and MPC, which in turn rely on two sensing subsystems: one camera-only, the other combining radar and lidar. Mobileye claims the EyeQ Ultra will enable Level 4 autonomous driving, defined by the Society of Automotive Engineers as vehicles that can perform all driving functions without manual intervention under specific conditions. If those conditions aren’t met, however, the driver must take control of the vehicle.

Amazon’s autonomous robot, Bert, in action (Source: Amazon)

Another example of lidar adoption in a real-world scenario is Amazon’s autonomous robots: Bert, Kermit and Ernie. Bert uses lidar technology to guide it throughout Amazon warehouses, avoiding obstacles such as other self-driving robots, workers and machinery.
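As a toy illustration only (Amazon's actual navigation stack is proprietary), the core obstacle check can be as simple as scanning the lidar returns inside the robot's travel cone and halting when anything falls within a safety radius:

```python
import math

# Toy illustration (not Amazon's actual navigation code): check a 2D
# lidar scan - a list of (angle, range) returns - for obstacles inside
# the robot's travel cone and safety radius.

SAFETY_RADIUS_M = 0.75  # halt if anything is closer than this

def clear_to_proceed(scan, heading_rad=0.0, cone_rad=math.radians(30)):
    """True if no return inside the safety radius lies in the travel cone."""
    for angle, rng in scan:
        if abs(angle - heading_rad) <= cone_rad and rng < SAFETY_RADIUS_M:
            return False
    return True

# A clear 180-degree scan, then a cart half a metre dead ahead
scan = [(math.radians(a), 3.0) for a in range(-90, 91, 5)]
print(clear_to_proceed(scan))   # True
scan[18] = (0.0, 0.5)
print(clear_to_proceed(scan))   # False -> the robot halts
```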

Manufacturers are also using lidar to improve their logistics chains, relying on autonomous robots to optimize fulfillment and distribution processes.

3D vision for robots

Adoption of lidar technologies for both industrial and automotive use cases has seen only limited success so far, and engineers are realizing that fully autonomous machines are complex systems requiring far more reliable AI and machine-learning algorithms. This is where 3D vision can help advance autonomy.

3D vision is often implemented in factory automation applications such as pick-and-place robots. These machines rely on 3D snapshot sensors that allow the robot to detect an object regardless of its position, meaning it can tell whether an object is lying flat, standing upright or hanging.
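One common way to derive that orientation, sketched below under the assumption that the snapshot sensor returns a point cloud, is to fit the object's principal axis with PCA and compare it against vertical; the threshold angles and labels are illustrative, not any vendor's logic.

```python
import numpy as np

# Hedged sketch: classify an object's pose from a point cloud by fitting
# its longest axis (PCA via SVD) and measuring its tilt from vertical.
# Thresholds and labels are illustrative only.

def classify_orientation(points: np.ndarray) -> str:
    """points: (N, 3) array of x, y, z samples on the object's surface."""
    centered = points - points.mean(axis=0)
    # First right-singular vector = direction of greatest spread
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    tilt = np.degrees(np.arccos(abs(axis[2])))  # angle from the z (up) axis
    return "upright" if tilt < 30 else "lying flat" if tilt > 60 else "tilted"

# Example: a rod-shaped part standing on end
rod = np.column_stack([np.random.randn(500) * 0.01,
                       np.random.randn(500) * 0.01,
                       np.linspace(0, 0.3, 500)])
print(classify_orientation(rod))  # "upright"
```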


Illustrations from Seoul Robotics depicting the capabilities of its SENSR software. (Source: Seoul Robotics)

LMI Technologies, a 3D scanning and inspection developer, created its own version of a 3D sensor, the Gocator 3000, which relies on fringe projection using blue-LED structured light in combination with several 3D measurement tools and decision-making logic. The combined technologies allow the sensor to scan and inspect any object in stop-and-go motion, enabling quality-control inspections and automated assembly.
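To give a sense of how fringe projection recovers shape in general (LMI's actual Gocator pipeline is proprietary), the classic four-step phase-shift method projects four sinusoidal patterns offset by 90 degrees and recovers a per-pixel phase that encodes surface height:

```python
import numpy as np

# Hedged sketch of the generic four-step phase-shift recovery used in
# fringe-projection systems; not LMI's specific implementation. Four
# patterns shifted by 0, 90, 180 and 270 degrees are captured, and the
# wrapped phase at each pixel encodes the surface height.

def wrapped_phase(i1, i2, i3, i4):
    """Per-pixel phase from four captures shifted by quarter periods."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: images of a known phase field round-trip correctly
phi = np.linspace(-np.pi / 2, np.pi / 2, 5)
shots = [np.cos(phi + k * np.pi / 2) for k in range(4)]
print(np.allclose(wrapped_phase(*shots), phi))  # True
```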

3D vision is also being used to process data gathered by lidar sensors to render detailed images of the environments they scan. Seoul Robotics, a 3D computer vision software company, released its first 3D vision software in the U.S. in January 2021. The offering is built on the company’s ML-based SENSR software, which essentially allows 3D sensors to become IoT devices. The South Korean company claims the sensors can analyze and understand 3D lidar data gathered from vehicle-to-everything communications, traffic safety technologies, retail analytics and smart cities.

Geospatial AI

Some observers predict that location-based geospatial AI represents the next big step in ML, allowing machines to gather real-time geographical data to guide decision-making and predictive analysis. Use cases for geospatial AI include logistics, agriculture and infrastructure.

Geospatial AI uses a combination of spatial science, deep learning, data mining and high-performance computing to gather and analyze spatial data collected by networks of machines. Geospatial AI also relies on user data to train algorithms that provide inference and prediction capabilities.

For example, ride-sharing companies such as Uber and Lyft rely on geospatial AI applications to provide estimated arrival times based on information submitted by customers. GPS applications such as Waze and Apple Maps also rely on geospatial AI to provide drivers with the quickest possible route to their destination based on traffic analysis software and user input. Geospatial AI is also being implemented in logistics and supply chain processes to allow manufacturers to obtain timely delivery data.
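Underneath all of these services sits basic spatial arithmetic. As an illustration only (real ETA models use road networks and live traffic, not straight lines), the haversine great-circle distance is the kind of primitive they build on:

```python
import math

# Illustrative sketch, not any ride-sharing company's actual model:
# great-circle (haversine) distance plus a naive straight-line ETA.

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def naive_eta_min(lat1, lon1, lat2, lon2, avg_speed_kmh=30.0):
    """Straight-line ETA; real systems route over roads and traffic."""
    return haversine_km(lat1, lon1, lat2, lon2) / avg_speed_kmh * 60

# Downtown San Francisco to downtown Oakland, as the crow flies
print(round(naive_eta_min(37.7749, -122.4194, 37.8044, -122.2712), 1))  # ~27 min
```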


This article was originally published on EE Times.

Stefani Munoz is associate editor of EE Times. Prior to joining EE Times, Stefani was an editor for TechTarget and covered a host of topics around IT virtualization trends and VMware technologies.

 
