AI Autonomous Driving Platform, Synaptree™
Synaptree™ provides an optimized platform for more advanced AI autonomous driving.
INTEGRIT uses cameras and LiDAR to localize robots in GPS-denied indoor environments and
provides algorithms that produce precise positioning data.
An environment such as a large shopping mall demands proven, sophisticated algorithms and
systems that generate precise positioning data for autonomous robot driving while coping with varied lighting,
unpredictable customer movement, sensor occlusion, and texture-less glass or reflective surfaces.
Another Step Forward for AI Autonomous Driving, Synaptree™
In addition to SLAM as implemented with conventional LiDAR, INTEGRIT implements iSLAM, designed to recognize detailed interior spaces and precisely control the robot's position.
iSLAM categorizes indoor structures and optimizes and corrects SLAM through a cloud that virtualizes positioning data.
It maintains a dataset and visualization system that improves autonomous driving performance while updating spatial information in real time.
Beyond existing approaches that rely solely on a robot's own self-driving functions and performance, this strongly connected intelligence across sensors enhances the stability and reliability of indoor self-driving robots.
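As a rough illustration of the kind of cloud-side correction described above, the sketch below distributes accumulated odometry drift over a pose chain once a loop closure is observed. It is a deliberately simplified 1-D stand-in for real pose-graph optimization; the function name and linear-interpolation scheme are illustrative assumptions, not Synaptree's actual algorithm.

```python
def correct_drift(poses, loop_start, loop_end, observed_closure):
    """Linearly distribute accumulated drift over a 1-D pose chain.

    `poses` holds positions from dead reckoning; a loop closure says
    poses[loop_end] should actually equal `observed_closure`. A toy
    stand-in for cloud-based pose-graph correction.
    """
    drift = poses[loop_end] - observed_closure
    n = loop_end - loop_start
    corrected = list(poses)
    for i in range(loop_start, loop_end + 1):
        frac = (i - loop_start) / n  # later poses absorb more drift
        corrected[i] = poses[i] - frac * drift
    return corrected
```

Real systems solve a nonlinear least-squares problem over full 6-DoF poses, but the principle of redistributing error once an external constraint arrives is the same.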
Fusion Sensing & iSLAM
INTEGRIT’s Synaptree VL was designed for self-driving robots operating in mega-scale venues with vast indoor areas, such as indoor shopping malls.
It combines a powerful LiDAR with a 25 m survey range, a structure-from-motion (SfM) process that matches imagery from two RGB-Depth cameras,
newly defined datasets for generating real-time point clouds, and a pipeline that analyzes and learns from high-density datasets in real time.
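The real-time point-cloud generation mentioned above can be sketched with the standard pinhole back-projection used for RGB-Depth cameras. The intrinsics below (fx, fy, cx, cy) are illustrative values, not Synaptree's actual calibration:

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (rows of metres) into 3-D points in
    the camera frame using the pinhole camera model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip invalid or missing depth returns
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```

Clouds from the two cameras and the LiDAR would then be transformed into a common frame with each sensor's extrinsic calibration before fusion.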
Through a separate parallel process, the Synaptree dataset for indoor spatial information and positioning is structured into a virtualized digital map
with metadata that visualizes the robot’s precise current location and trajectory and supports effective estimation of the robot's expected behavior patterns and trajectories.
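A minimal sketch of the trajectory estimation described here is a constant-velocity extrapolation from the robot's recent positions. This is only the simplest possible motion model, assumed for illustration; the platform's actual behavior-pattern estimation is not published:

```python
def predict_trajectory(positions, steps):
    """Extrapolate future (x, y) waypoints with a constant-velocity
    model from the last two observed positions."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = x1 - x0, y1 - y0  # displacement per time step
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, steps + 1)]
```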
The platform efficiently synchronizes the heterogeneous data issued by multiple sensors such as stereo cameras, LiDAR, and ultrasound,
and derives the collected data into meaningful schema data that feeds a real-time database.
Spatial-context, discrimination, and contrast processes over this database correct errors in the source data, provide predictive models, and enable autonomous driving devices
to make optimal decisions, while the collected spatial data evolves into richer metadata through learning.
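Multi-sensor synchronization of the kind described above is commonly done by pairing timestamps across streams within a tolerance. The sketch below shows that idea in its simplest form; the function name and tolerance value are assumptions for illustration, not INTEGRIT's actual scheme:

```python
def sync_streams(lidar, camera, tol=0.05):
    """Pair each LiDAR timestamp with the nearest camera timestamp
    within `tol` seconds; scans with no close match are dropped."""
    pairs = []
    for t in lidar:
        best = min(camera, key=lambda c: abs(c - t))
        if abs(best - t) <= tol:
            pairs.append((t, best))
    return pairs
```

Each matched pair would then be merged into one schema record before insertion into the real-time database.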
Spatial Data Semantics
Based on the attributes of the space, the platform compensates for data that changes in real time and creates a driving route that reflects the user’s requirements.
Metadata and the verification of obstacles and spaces are updated continuously.
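Routing over a semantically labeled map can be sketched as a shortest-path search whose edge weights come from each cell's semantic label, so that, for example, glass or reflective regions are penalized. The grid labels and costs below are illustrative assumptions, not Synaptree's schema:

```python
import heapq

def plan(grid, cost, start, goal):
    """Dijkstra search over a grid of semantic labels; `cost` maps
    each label to a traversal weight."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[grid[nr][nc]]  # pay the cell's semantic cost
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:  # walk predecessors back to the start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

With a high cost on "glass" cells, the planner naturally routes around the texture-less surfaces that degrade sensing.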
INTEGRIT’s cloud-based autonomous driving platform, whose accuracy continues to improve through operation, features an architecture for object learning, error correction, and prediction models, and is protected by patents.
Synaptree Technical Support
Support for component-structured multi-sensor datasets
Provides synchronization and access specifications for various sensor data
User support via SDK (C++/Simulink/Python)
Support for ROS, Linux, and Ubuntu-based ARM platforms
Visualization Interface Support
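To make the dataset specifications above concrete, the record below shows what one synchronized multi-sensor sample might look like. The class and field names are entirely hypothetical, invented for illustration; they are not the published Synaptree specification:

```python
from dataclasses import dataclass

@dataclass
class SensorSample:
    """One synchronized multi-sensor record (hypothetical schema)."""
    stamp: float        # common timestamp, seconds
    lidar_scan: list    # range readings, metres
    rgb_frame: bytes    # encoded camera image
    depth_frame: bytes  # encoded depth map
    ultrasound: float   # single range reading, metres
```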
Datasets for Robotics
Go beyond the conventional method of relying solely on the robot's own sensors for autonomous driving performance.
Overcome the robot's self-driving limitations through strongly connected intelligence.
INTEGRIT opens the road to Robot Data Science.