Even virtual reality has been used to help medical students learn to perform surgery.

Focusing on this area, researchers from Google Brain, UC Berkeley and the Intel AI Lab joined forces to develop Motion2Vec, an AI model that learns to perform robotic surgery tasks such as suturing, needle passing and insertion, and knot tying, all by training on surgery videos.

To test the results, the researchers deployed the model on a two-armed Da Vinci robot, which passed a needle through a piece of cloth in the lab.

Motion2Vec is a representation learning algorithm trained with semi-supervised learning, following the same approach applied in previous models such as Word2Vec and Grasp2Vec.
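To make the idea of semi-supervised representation learning more concrete, here is a minimal, illustrative sketch of how motion segments could be embedded into a shared vector space with a triplet loss. This is not the authors' code: the network sizes, feature dimensions and the random batch below are hypothetical placeholders, and the actual Motion2Vec model is more elaborate.

```python
# Illustrative sketch only: metric learning of motion embeddings with a
# triplet loss, in the spirit of Motion2Vec. All dimensions and data here
# are hypothetical placeholders, not the published architecture.
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    """Embeds a short window of motion features into a unit-norm vector."""
    def __init__(self, feat_dim=16, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        z = self.net(x)
        return nn.functional.normalize(z, dim=-1)

encoder = MotionEncoder()
triplet = nn.TripletMarginLoss(margin=0.2)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Hypothetical batch: anchor and positive come from the same kind of motion
# segment, negative from a different one. Only a small labeled subset is
# needed to form such triplets; the rest of the data can stay unlabeled.
anchor   = torch.randn(8, 16)
positive = torch.randn(8, 16)
negative = torch.randn(8, 16)

loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
opt.zero_grad()
loss.backward()
opt.step()
print(f"triplet loss: {loss.item():.4f}")
```

The key design choice this sketch captures is that similar motions are pulled together and dissimilar ones pushed apart in the embedding space, so labels are only needed for a small fraction of the demonstration data.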

The researchers also said their work shows how surgical robotics can be taken to a higher level by drawing on expert demonstration videos as a source of knowledge for acquiring new robotic manipulation skills.

Motion2Vec Training

The details of Motion2Vec were published last week in the arXiv preprint repository and then presented at the IEEE International Conference on Robotics and Automation (ICRA).

The algorithm was trained on videos showing 8 human surgeons controlling Da Vinci robots. These videos were obtained from the JIGSAWS dataset, the JHU-ISI Gesture and Skill Assessment Working Set, a project built from videos collected by Johns Hopkins University (JHU) and Intuitive Surgical, Inc. (ISI).

From these JIGSAWS videos, Motion2Vec learned motion-centric representations of manipulation skills through imitation learning, as sketched below.
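As a rough illustration of how learned embeddings can be put to use, the snippet below assigns a gesture label to a new motion segment with a simple nearest-neighbour classifier over a small labeled set. This is a simplified stand-in, not the authors' pipeline, and the embeddings, label count and segment data are random placeholders rather than JIGSAWS data.

```python
# Illustrative only: nearest-neighbour gesture labeling over learned
# motion embeddings. All arrays are random placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(100, 32))    # embeddings of labeled segments
train_gestures   = rng.integers(0, 5, size=100)  # hypothetical gesture ids

clf = KNeighborsClassifier(n_neighbors=5).fit(train_embeddings, train_gestures)

new_segment = rng.normal(size=(1, 32))           # embedding of an unseen segment
print("predicted gesture id:", clf.predict(new_segment)[0])
```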
