Traditionally, LiDAR systems have been used for reliable ranging and depth perception. However, the high cost of the equipment, pilots, and engineers kept LiDAR technology out of reach for most users and applications. Even today's cutting-edge LiDARs remain rather expensive devices and cannot output the imaging modalities of a traditional video camera.
Two other popular depth-sensing methods – Time of Flight (ToF) and Structured Light Imaging (SLI) – rely on active emission of IR light; hence they require controlled lighting conditions and are mostly restricted to short-range indoor use. Structure from Motion (SfM) – while attractive for its low equipment requirements – produces sparse depth maps and requires continuous motion to work, limiting its range of applications. Finally, conventional stereo cameras have limited on-board image processing capabilities, shifting the burden of application-specific perception to the host system.
By contrast, the Rubedo CVM integrates a powerful NVIDIA Jetson TX1 GPU, which not only performs stereo matching on board – outputting pre-calculated disparities or a dense point cloud to the user – but can also execute an arbitrary image processing pipeline on behalf of a specific application (e.g., detecting utility poles).
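To illustrate what a pre-calculated disparity map gives the user, the standard pinhole stereo relation Z = f·B/d converts disparities into metric depth. The sketch below is a minimal, generic example of that conversion; the focal length, baseline, and disparity values are illustrative assumptions, not the CVM's actual calibration or API:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m, min_disp=1e-6):
    """Convert a disparity map (pixels) to metric depth via Z = f * B / d.

    Pixels with disparity <= min_disp are marked invalid (NaN),
    since zero disparity corresponds to a point at infinity.
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(disparity.shape, np.nan)
    valid = disparity > min_disp
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Illustrative numbers: 700 px focal length, 10 cm stereo baseline.
depth = disparity_to_depth([[70.0, 35.0, 0.0]],
                           focal_px=700.0, baseline_m=0.10)
# First pixel: 700 * 0.10 / 70 ≈ 1.0 m; second ≈ 2.0 m; third invalid (NaN).
```

Because the camera does this work on board, the host receives depth (or the point cloud derived from it) directly rather than raw image pairs.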