MIT researchers have developed a robotic hand that combines soft and rigid elements and can accurately identify objects from a single grasp. The design features a rigid skeleton encased in a soft outer shell, with high-resolution sensors embedded under its transparent skin.
These sensors provide continuous touch sensing along the length of each finger, offering more comprehensive data about the objects the hand grasps.
The robotic hand is made up of a 3D-printed endoskeleton encased in a transparent silicone skin, molded in a slightly curved position to resemble a human hand. This design reduces the number of uncontrolled wrinkles that could affect the performance of the hand. The endoskeleton contains GelSight sensors, each made up of a camera and colored LEDs, which capture images when the hand grasps an object. An algorithm then reconstructs the contours of the object's surface from these images.
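The article does not detail how the algorithm turns LED-lit camera images into surface contours, but GelSight sensors typically rely on photometric stereo: each colored LED lights the gel from a known direction, so each RGB channel records shading under one light, and solving a small linear system per pixel recovers the surface normal. The sketch below illustrates that idea with assumed LED directions and synthetic data; none of the names or numbers come from the MIT work.

```python
import numpy as np

# Assumed (not from the article) LED directions, one per color channel.
# Each row is the unit light vector for the red, green, or blue LED.
L = np.array([
    [0.5,   0.0,  0.87],   # red LED
    [-0.25,  0.43, 0.87],  # green LED
    [-0.25, -0.43, 0.87],  # blue LED
])

def normals_from_rgb(img):
    """Recover per-pixel unit surface normals from an HxWx3 shading image.

    Under a Lambertian model, intensity = L @ n, so inverting L per pixel
    gives the normal; its x/y components trace the object's contours.
    """
    h, w, _ = img.shape
    flat = img.reshape(-1, 3).T                 # 3 x N intensity columns
    n = np.linalg.solve(L, flat)                # unnormalized normals
    n /= np.linalg.norm(n, axis=0, keepdims=True) + 1e-9
    return n.T.reshape(h, w, 3)

# Synthetic check: a flat patch facing the camera has normal (0, 0, 1),
# so its shading in each channel is the z-component of that LED's vector.
flat_surface = np.tile(L[:, 2], (8, 8, 1))
n = normals_from_rgb(flat_surface)
print(np.allclose(n[..., 2], 1.0, atol=1e-6))  # True
```

Integrating the recovered x/y gradients then yields a height map of the grasped surface, which is the contour information the hand's recognition pipeline consumes.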
The system is capable of recognizing objects with 85% accuracy after a single grip. This design allows the hand to lift heavy objects and securely grip flexible objects without crushing them. Potential applications include home care robots for the elderly.
“Having soft and rigid elements is very important in any hand, but so is being able to do great detection over a really large area, especially if we want to consider very complicated manipulation tasks like what our bare hands can do,” said researcher Sandra Liu.
“Our goal with this work was to combine all the things that make our human hands so good into a robotic finger that can do tasks that other robotic fingers currently can’t do.”
The researchers used a machine learning model to identify objects using raw camera image data. In the future, they plan to reduce silicone wear, improve thumb actuation, and potentially add sensors to the palm for better tactile distinction.
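The article does not specify the model, so as a purely illustrative sketch, recognition from raw tactile images can be framed as matching a new grasp against stored reference images; the object names and data below are invented, and a trained classifier would replace this nearest-neighbor lookup in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in tactile images (16x16 intensity maps) for three known objects.
references = {name: rng.random((16, 16)) for name in ("mug", "ball", "pen")}

def classify(image):
    """Return the reference object whose stored tactile image is closest
    to the new grasp image, by Euclidean distance on raw pixels."""
    dists = {name: np.linalg.norm(image - ref)
             for name, ref in references.items()}
    return min(dists, key=dists.get)

# A noisy re-grasp of the mug should still be identified as the mug.
noisy_grasp = references["mug"] + rng.normal(0, 0.05, (16, 16))
print(classify(noisy_grasp))  # mug
```

This captures the single-grip setting from the article: one tactile image per grasp is enough input for the recognition step.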
Come and tell us your opinion on our Facebook, Twitter, and LinkedIn pages, and don’t forget to sign up for our weekly Additive Manufacturing newsletter to get the latest stories delivered straight to your inbox.