By Morgan Pope, Research Scientist, Disney Research
Most robotics projects focus on the output: What does this robot do? Is it reliable, is it precise, and can it achieve its goals? But at Disney, our focus is on the story: How does this robot make you feel? Is it emotive, is it relatable, and does it authentically reflect a character people know in its mannerisms, gait, or expressions?
This context changes everything. Take walking, for example—in robotics, it’s generally a priority to maximize the stability of a walking gait, since falling down doesn’t help you move crates or explore terrain. At Disney, however, a stable walking gait is less important than a gait that brings a character to life. Falling down can be wildly entertaining, as long as the falling happens in character!
About a year ago, our team came to a realization: we needed robots that didn’t mind taking the occasional tumble. If we were going to be free to explore fun and evocative performances with our robots, failure had to be an option. And not only that—failure had to be expected, and built into the design. We called our new project “Indestructibles” and set out toward the goal implied by that name.
On March 10, 2023, we were able to show off our latest Indestructibles prototype at SXSW in Austin, Texas. We were nervous. We knew this little character had charmed us, but we couldn’t be sure her personality would come through on such a big stage with a brand-new audience. But from the moment she peeked her head out of her crate, the energy in the crowd let us know they were not only seeing her, but cheering for her. We were thrilled!
Getting to this point took a lot of exploration. At first, we were a little intimidated by the idea of making a robot that would bounce back from a fall. But after a few months of dropping ideas (and robots) on the floor, we found it was a fairly tractable problem. What’s more, we discovered it’s possible to make components out of ordinary materials that can survive large drops and big hits, especially at smaller scales where strength-to-weight ratios are in our favor. Protecting delicate electrical equipment was a bigger challenge, but reducing our points of failure and providing shock absorption in the right places kept us moving forward.
As excited as we were by the durability of our robot, we realized durability alone was not enough.
In conventional robotics, physics is the final judge of what works best. Sizing a strut and positioning the center of mass can be done carefully in a computer, and the result in hardware is likely to match the intention. When the goal is creating a character, human hearts and minds are the final judge, instead. People are much harder to simulate, and the full effect of a performance can only really be felt by being in the same room as the robot. So, the speed of our development was limited by how fast we could translate a new idea from concept to embodied performance.
That meant a new approach to both hardware and software. On the hardware side, we needed a robot that could change and adapt in a matter of days rather than weeks, all while maintaining reliability. And on the software side, we needed elegant interfaces that could allow us to rapidly try out new motions—in search of emotions.
Mechanically, we adopted a modular design strategy built around a single size of actuator, keeping the scale small while using carbon fiber to minimize weight. We also made the decision to tolerate a certain amount of flexibility in the joints, sacrificing rigidity for motor protection and ease of construction. This made it easy to change the proportions of the robot to match different characters. Just as important, it made it easy to add and subtract degrees of freedom. “What if the robot was on roller skates?” became a question that could be answered quickly with a new pair of feet.
We also developed a simple, interactive software interface for the robot. We can move the robot by hand into keyframe poses, then blend those together smoothly to create motions already grounded in the physics of the robot. We kept the onboard code light, avoiding autonomy and restricting sensing to just motor positions, so we could rapidly adapt the software as we iterated on the hardware.
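The post doesn’t describe how the blending works, but the core idea—posing the robot by hand at keyframes and interpolating smoothly between them—can be sketched in a few lines. This is a minimal illustration, not Disney’s actual interface: the joint-angle arrays, timestamps, and the choice of cosine easing are all assumptions for the example.

```python
import numpy as np

def blend_keyframes(poses, times, t):
    """Blend hand-posed keyframes at query time t.

    poses: (K, J) array of joint angles, one row per keyframe
    times: (K,) increasing timestamps, one per keyframe
    Cosine easing gives zero velocity at each keyframe, so the
    motion settles into each pose instead of snapping through it.
    """
    poses = np.asarray(poses, dtype=float)
    times = np.asarray(times, dtype=float)
    t = np.clip(t, times[0], times[-1])
    # Find the segment [times[i], times[i+1]] containing t
    i = min(np.searchsorted(times, t, side="right") - 1, len(times) - 2)
    # Normalized phase within the segment, then ease-in/ease-out weight
    s = (t - times[i]) / (times[i + 1] - times[i])
    w = 0.5 - 0.5 * np.cos(np.pi * s)
    return (1 - w) * poses[i] + w * poses[i + 1]
```

Sampling this function at the control loop rate yields joint targets that are, as the post puts it, already grounded in the physics of the robot, since every keyframe was physically posed on the hardware.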
We also developed ways to pull key poses out of motion-capture data so we could directly integrate the human aspects of a physical performance. One early lesson from this process was how holistic real human movement is. Every part of the body moves sympathetically with every other part, even if it’s subtle. When programming a robot directly, it can be easy to move only the joints producing the main action, and some life is lost as a result.
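The post doesn’t say how key poses are pulled from the capture data. One common heuristic—an assumption here, not the actual pipeline—is that key poses occur where the motion momentarily slows, so you can keep frames at local minima of joint-space speed:

```python
import numpy as np

def extract_key_poses(frames, fps, min_gap_s=0.25):
    """Pick candidate key-pose frames from a motion-capture clip.

    frames: (T, J) array of joint angles over time
    fps: capture frame rate
    Keeps the first and last frames plus frames at local minima of
    joint-space speed, spaced at least min_gap_s seconds apart.
    """
    frames = np.asarray(frames, dtype=float)
    # Joint-space speed between consecutive frames
    speed = np.linalg.norm(np.diff(frames, axis=0), axis=1) * fps
    keys = [0]
    min_gap = int(min_gap_s * fps)
    for i in range(1, len(speed) - 1):
        # Local minimum in speed, far enough from the last key
        if speed[i] < speed[i - 1] and speed[i] <= speed[i + 1]:
            if i - keys[-1] >= min_gap:
                keys.append(i)
    keys.append(len(frames) - 1)
    return keys  # frame indices to use as keyframes
```

Because every joint contributes to the speed measure, a pose selected this way carries the subtle sympathetic motion of the whole body, not just the joints driving the main action.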
Moving from motion capture to the robot isn’t seamless—the robot has a different mass distribution than a person, and a much more limited set of joints. But the effort is worth it when you see our latest prototype wobbling around in a way that makes you feel like you’re looking at a character rather than a robot.
So what’s our biggest takeaway from the past year? There is a huge unexplored space of robotic locomotion that evokes human emotion. By shifting our emphasis toward the way we reach our goal—rather than just the end result—we’ve opened up what feels like a world of possibilities for dynamic and expressive robots.
Which is why we’re so excited about what lies ahead!