Autonomous systems and robotics

An autonomous system is a system in which a trigger causes an action without the involvement of a person.  Two well-known examples are tsunami buoys and earthquake sensors.  These systems continuously monitor a set of sensors.  When a sensor registers a reading that exceeds a predefined threshold (the trigger), a signal is sent to a computer to generate warning alerts (the action).  These systems can range from the very simple to the extremely complex.
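To make the trigger/action pattern concrete, here is a minimal sketch of the kind of monitoring loop a tsunami buoy might run.  The threshold value, polling interval, and the sensor and alerting functions are all hypothetical placeholders, not the interface of any real warning system.

```python
# A minimal sketch of the trigger/action loop described above, assuming a
# hypothetical buoy with a water-column pressure sensor.  None of these
# names correspond to a real warning system's interface.
import time

THRESHOLD_METERS = 0.5   # assumed trigger: deviation from normal sea level
POLL_SECONDS = 15        # assumed polling interval


def read_water_column_deviation() -> float:
    """Placeholder for reading the buoy's pressure sensor (meters)."""
    raise NotImplementedError


def send_warning_alert(reading: float) -> None:
    """Placeholder for signaling the warning center."""
    print(f"ALERT: water column deviation of {reading:.2f} m")


def monitor() -> None:
    # The autonomous part: watch continuously and act when a reading
    # exceeds the predefined threshold, with no person in the loop.
    while True:
        reading = read_water_column_deviation()
        if abs(reading) > THRESHOLD_METERS:
            send_warning_alert(reading)
        time.sleep(POLL_SECONDS)
```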

One could argue that the backup sensors on cars are an autonomous system: the trigger is the driver shifting into reverse; in response, the computer turns on the backup lights, activates the camera, turns on the internal screen, and activates the proximity sensors to start beeping.  I think it is a bit of a stretch, but it does provide a simple example.

At the other end of this spectrum are the Mars rovers.  Due to the time delay in communications between Earth and Mars, a rover cannot be directly controlled.  NASA gives a general command for the rover to head to a new destination.  The rover then drives over the terrain independently, making its own decisions to avoid obstacles.  Onboard the rover, the Autonomous Exploration for Gathering Increased Science (AEGIS) software, part of the Onboard Autonomous Science Investigation System (OASIS), automatically analyzes and prioritizes science targets.  This allows more work to be done without human intervention.

Somewhere between the two is the Roomba.  This autonomous vacuum moves around the house cleaning up.  It learns the layout of a room to be more effective in future runs.  When complete, it docks to recharge.  At a set interval, it does it all again.  Now don’t laugh about the Roomba; it is a very commonly hacked robot among people interested in DIY robotics.  Microsoft Robotics Developer Studio has built-in modules specifically for controlling the Roomba, which provides both mobility and a way to control it.  Microsoft has also released additional modules for the Robotics software that adapt the Kinect sensor for sensing the environment, providing vastly better sensing than the traditional range-finder and light sensors.  Microsoft isn’t the only player in this market; Lego Mindstorms markets robotics as a toy for children ages eight and up.  Robotics isn’t just for engineering students in high-end tech universities.
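For a sense of what those hobby projects actually program, here is a generic bump-and-turn loop of the kind a robot vacuum uses.  It is only a sketch: the sensor and drive functions are hypothetical placeholders, not the Roomba’s real serial interface or the Robotics Developer Studio API.

```python
# A generic bump-and-turn loop of the kind DIY Roomba hackers implement.
# read_bumper() and drive() are hypothetical placeholders, not the Roomba's
# real serial protocol or the Robotics Developer Studio API.
import random
import time


def read_bumper() -> bool:
    """Placeholder: True if the front bumper is pressed."""
    raise NotImplementedError


def drive(left_mm_s: int, right_mm_s: int) -> None:
    """Placeholder: set the left and right wheel speeds in mm/s."""
    raise NotImplementedError


def clean(duration_s: float) -> None:
    # Cruise forward; on a bump, back up briefly and spin in a random
    # direction before continuing.  This simple behavior is what lets
    # the robot eventually cover an unknown room.
    end = time.time() + duration_s
    while time.time() < end:
        if read_bumper():
            drive(-100, -100)                 # back away from the obstacle
            time.sleep(0.5)
            turn = random.choice([-1, 1])
            drive(150 * turn, -150 * turn)    # spin in place
            time.sleep(0.7)
        else:
            drive(200, 200)                   # cruise forward
        time.sleep(0.05)
```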

There is enough technology in existence today to make huge leaps in the use of robotics.  The main challenge for robotics is acceptance by the general public.

Watch the videos about the DARPA autonomous car, Urban Search and Rescue robots, the BigDog robotic transport, and Eythor Bender’s demonstration of human exoskeletons.  Combine these and we can envision some major transformations.

Take the computational guts from the autonomous car and put them into a fire engine.  Now the fire engine can be dispatched and navigate to a fire scene independently.  Once on-scene, sensors can detect the heat from the fires, even if the flames are inside the walls and not visible to the human eye.  Robots from the fire engine can be sent into the structure.  Heavier-duty robots can pull hoses and start to extinguish the fire.  Other robots can search the house, assisting survivors out and rescuing those unable to escape.  Communication systems will link all of the robots’ sensors together to generate a complete common operating picture that every robot uses in its decision making.

A similar thing can be done with an ambulance.  Focus just on the stretcher.  Imagine if the stretcher would automatically exit the ambulance and follow the medic with all the necessary equipment.  The stretcher could be equipped to climb up and down steep stairs, make tight turns, and remain stable.  Automating the lifting and bending that EMS workers do when handling patients on stretchers would reduce the number of back injuries caused by improper lifting.  This would keep more EMS workers in better health, which reduces sick leave and workers’ compensation claims.

Robotics in search and rescue could save the greatest number of lives, among both victims and rescuers.  A building collapses and the SAR team responds.  On the scene, the workers set up specialized radio antennas at different points around the building site.  They release dozens, if not hundreds, of spider- or snake-like robots.  Each robot has the ability to autonomously move through the rubble.  They are light enough not to disturb the debris the way a human would.  They are numerous enough to cover the building far more quickly than a human team could.  The combined sensor data of their locations and scans would quickly build a three-dimensional model of the rubble.  They are aware of each other’s locations, so they don’t bunch up in one area or miss another.  Heat, sound, and motion sensors could detect people.  Once this initial scan is done, the SAR team will know where the survivors are and can communicate with them through the robots.  The team will evaluate the 3D model for the best entry paths to reach and rescue the survivors.  If the situation is unstable, larger robots can be sent ahead of the team to provide additional structural support and reduce the risk of collapse.  If a robot can’t communicate with the antennas outside, the robots can form a mesh network and pass information along through one another.
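As a rough sketch of how those location-tagged scans might be fused into one shared model, here is a toy aggregation into a coarse 3D grid.  The coordinate frame, cell size, and scan fields are assumptions made for illustration; a real SAR system would use proper SLAM and mapping software.

```python
# A toy sketch of fusing many robots' location-tagged scans into one shared
# 3D model of the rubble.  The coordinate frame, cell size, and scan fields
# are assumptions for illustration, not a real SAR mapping pipeline.
from collections import defaultdict
from dataclasses import dataclass

CELL_SIZE_M = 0.25  # assumed grid resolution


@dataclass
class Scan:
    robot_id: str
    x: float        # position in a shared site frame, meters
    y: float
    z: float
    occupied: bool  # solid rubble detected at this point
    survivor: bool  # heat/sound/motion sensors indicate a person


def cell(scan: Scan) -> tuple[int, int, int]:
    """Map a scan's position onto a coarse 3D grid cell."""
    return (int(scan.x // CELL_SIZE_M),
            int(scan.y // CELL_SIZE_M),
            int(scan.z // CELL_SIZE_M))


def merge(scans: list[Scan]) -> dict[tuple[int, int, int], dict[str, int]]:
    """Fuse every robot's scans into a single occupancy/survivor grid."""
    grid: dict = defaultdict(lambda: {"occupied": 0, "survivor": 0})
    for s in scans:
        c = cell(s)
        grid[c]["occupied"] += int(s.occupied)
        grid[c]["survivor"] += int(s.survivor)
    return dict(grid)
```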