While working as a Software Engineer at the Socially Intelligent Machines Lab I led the software and hardware integration on a new socially capable robot. The primary requirements were to design a robot that
- is mobile
- is autonomous
- has a robotic manipulator with a suitable workspace
- is socially expressive
Due to time and budget constraints, many of the requirements were fulfilled with commercial off-the-shelf components: a Segway RMP mobile base and a Kinova Jaco robotic arm mounted on a linear actuator to expand its workspace.
Social expression was achieved primarily in two additional areas: aesthetically, through a semi-anthropomorphic chest plate and actuated head unit, and through a low-resolution LED facial display on the front of the head unit.
The development of the eye panel was my favorite part of the project, and required iterating on panel size and material to get proper light diffusion. The end result was a panel and embedded controller that could emote quickly, which proved to greatly alter users’ perception of the robot. I also developed an LED face library for the embedded controller that makes it easy for researchers to add new faces for user studies.
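The core idea of such a face library is to keep expressions as data rather than code, so researchers can register new faces without touching the controller logic. Below is a minimal sketch of that pattern; all names and the bitmap format are hypothetical stand-ins, not the robot's actual API.

```python
# Hypothetical sketch of a data-driven LED face library.
# Each face is a list of byte rows; bit 7 is the leftmost pixel.
FACES = {
    "happy": [
        0b01100110,  # eyes
        0b00000000,
        0b10000001,  # mouth corners
        0b01000010,
        0b00111100,  # smile
    ],
}

def register_face(name, bitmap):
    """Add a new expression without modifying controller code."""
    FACES[name] = bitmap

def render(name):
    """Expand a face's byte rows into strings of on/off pixels,
    as a stand-in for driving the physical LED panel."""
    return ["".join("#" if (row >> (7 - i)) & 1 else "."
                    for i in range(8))
            for row in FACES[name]]
```

On the real controller, `render` would instead push the bitmap to the LED driver, but the registration interface stays the same, which is what makes adding study-specific faces cheap.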
The hardware and software integration included writing ROS wrappers for components with existing drivers, writing drivers from scratch for components that lacked them, and writing a technical manual for users of the platform. The robot is currently patent-pending.
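A ROS wrapper of the kind described above is typically a thin adapter: it translates ROS-convention messages and units into the vendor driver's native API. The sketch below shows only that translation layer, with a stubbed driver standing in for the real SDK; the class names, method names, and travel range are illustrative assumptions, and in the actual node `on_command` would be wired to a ROS subscriber callback.

```python
class VendorLiftStub:
    """Stand-in for a vendor SDK that expects millimetres."""
    def __init__(self):
        self.last_mm = None

    def move_to_mm(self, mm):
        self.last_mm = mm


class LiftWrapper:
    """Adapts ROS-style metres to the driver's millimetres,
    clamping commands to the actuator's travel range."""
    TRAVEL_M = (0.0, 0.4)  # assumed travel limits, for illustration

    def __init__(self, driver):
        self.driver = driver

    def on_command(self, metres):
        # Clamp so out-of-range ROS commands can't fault the hardware.
        lo, hi = self.TRAVEL_M
        clamped = max(lo, min(hi, metres))
        self.driver.move_to_mm(clamped * 1000.0)
```

Keeping the unit conversion and safety clamping in one small class also makes the wrapper testable without the hardware attached, which matters when the goal is a stack that "just works".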
You can see the robot (without its protective breastplate) in the video above, and the open source software repository here. Overall, the project comprises around 300,000 lines of code and is a full-fledged ROS software stack that enables the robot to navigate autonomously with obstacle detection and run in simulation. There was a particular emphasis on making the system easy to run and “just work”.