Philadelphia. It’s known for its Philly cheesesteak, the Liberty Bell, and as the place where the Constitution was signed. Always on the cutting edge, Philadelphia is a land of firsts. You may or may not know this, but one of those firsts was hosting the first general-purpose computer in 1946. Is it any wonder, then, that a company there, founded by leaders in the Computer Vision space, is building robots to navigate GPS-denied environments?

Beyond the Roomba

Consider the Roomba, the autonomous vacuum that sweeps up pet hair, dirt, and other unwanted debris. How does it know where to go? How does it know to go under a table or chair, or around a wall into the next room? How does it know to avoid the dog, the cat, or you? On nearly the smallest scale, this little round machine is a personal version of simultaneous localization and mapping (SLAM). The computational geometry behind this mapping and localization technique, however, extends into a wide variety of fields. Here are a few to get you thinking:

GPS navigation systems
Self-driving cars
Unmanned Aerial Vehicles (UAVs)
Autonomous Underwater Vehicles (AUVs)
Drones
Robots
Virtual Reality (VR)
Augmented Reality (AR)
Monocular cameras
…and more

There’s even a version used in the Life Sciences called RatSLAM. But we’ll visit that in another article. The uses and benefits of this simultaneous localization and mapping technique keep multiplying, even with some of the challenges posed by Audio-Visual and Acoustic SLAM.

What is SLAM?

Essentially, it is the 21st-century version of cartography, or mapping. Except in this case, it not only maps the environment but also locates your place within it. When you want to know where the nearest restaurant is, you simply type in ‘restaurant near me,’ and soon a list appears on your phone, radiating outward from the nearest location. Imagine you’re lost on a hike; you manage to find signal, and soon your GPS is offering directions on which way to move toward civilization.
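The ‘where am I?’ half of that story can be made concrete with a small sketch. Assuming a handful of beacons or landmarks at known positions and range measurements to each (the coordinates and helper name below are invented for illustration, not taken from any particular SLAM library), a device can recover its own position by linearized trilateration:

```python
# Minimal landmark-based localization sketch (one half of SLAM):
# given known landmark positions and measured distances to them,
# recover the receiver's position. Coordinates are illustrative.
import numpy as np

def locate(landmarks, distances):
    """Solve for (x, y) from three or more landmarks with range readings."""
    (x0, y0), d0 = landmarks[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(landmarks[1:], distances[1:]):
        # Subtracting the circle equation of landmark 0 from landmark i
        # cancels the quadratic terms, leaving a linear system in (x, y).
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # e.g. radio beacons
truth = np.array([3.0, 4.0])                        # actual position
distances = [np.hypot(*(np.array(l) - truth)) for l in landmarks]
print(locate(landmarks, distances))                 # recovers ~ [3. 4.]
```

A full SLAM system does this while also building the landmark map itself from sensor data, and it must cope with noisy measurements rather than the exact distances used here.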
This is Simultaneous Localization and Mapping. It locates you, your vehicle, a robot, a drone, an unmanned aerial vehicle, or a self-driving car, and points people and things in the direction they want to go, or should go, to reach safety.

While mapping is at the epicenter of SLAM Computer Vision Engineering, there are other elements within the field as well. But let’s begin with mapping. Topological maps offer a more precise representation of your environment and can therefore help ensure consistency on a global scale. Just as humans do when giving directions, sensor models offer landmark-based approaches, which make it easier to determine your location within the map’s structure, and raw-data approaches, which make no assumptions about landmarks. Landmarks such as WiFi or radio beacons are some of the easiest to locate, but they may not always be correct, which is where the raw-data approach comes in to offer its two cents as a model of the location function.

Four Challenges of SLAM

GPS sensors may not function properly in chaotic environments such as military conflict zones.
Non-static environments, such as pedestrian zones or high-traffic areas with multiple vehicles, make locations difficult to pinpoint.
In Acoustic SLAM, challenges include inactivity, environmental noise, and echo.
Sound localization requires a robot or machine to be equipped with a microphone in order to move in the requested direction.

Five Additional Forms of SLAM

Tactile (sensing by touch)
Radar
Acoustic
Audio-Visual (a function of Human-Robot Interaction)
WiFi (sensing the strength of nearby access points)

Ready to Explore a Robotics and Computer Vision Career?

Whether you’re interested in a slam-dunk career as a SLAM Engineer or looking for your first or next role in Big Data, Web Analytics, Advanced Analytics & Insight, Life Science Analytics, or Data Science, take a look at our current vacancies or get in touch with one of our expert consultants to learn more.
For our West Coast team, contact us at (415) 614-4999 or send an email to sanfraninfo@harnham.com. For our Mid-West and East Coast teams, contact us at (212) 796-6070 or send an email to newyorkinfo@harnham.com.