Will robots eventually effectively replace surgeons?



Main highlights:

The concept of surgical robots first emerged in the 1970s, when NASA envisioned robots that could be controlled remotely to give medical support to astronauts during spaceflight. The idea failed, primarily because the vast distance between Earth and the crew in space caused delays in the operations. Since the concept could not be fully realised in space, shorter-distance applications on Earth were explored instead.

As with many other innovations, the military was where the earliest surgical robotics advancements were made. The US military created the first remote-control robot prototypes to operate on soldiers in the 1980s. The military understood that telesurgery would make it possible to provide medical care and treatment to soldiers in the field while enhancing safety.

Then, in 1984, UBC Hospital in Vancouver performed the first robot-assisted orthopaedic surgery. The robot, Arthrobot, had the primary duty of handing surgical tools to the surgeon in response to spoken commands. Within a year, Arthrobot had been used in more than 60 arthroscopic procedures.

Researchers from Carnegie Mellon and the University of Minnesota have made a significant advance in robotics and brain-computer interface (BCI) technology by creating a method that enables a person to control a robotic arm with their mind, without the need for surgery or other invasive procedures.

The mind-controlled robot used in this experiment also exhibited strong motor control, demonstrated by its ability to follow a computer cursor as it moved across a screen.

This is undoubtedly a huge step forward for the field: it demonstrates the viability of brain-based computer control in general, which has a wide range of potential applications, not least giving people with paralysis or other movement-related disorders another way to use computers.

To date, successful and highly accurate BCI demonstrations in people have required systems built around brain implants, which detect signals from inside the skull.

These devices are not only expensive and risky to implant, but their long-term effects are not always clear. Because of this limited use, only a select few people have been able to benefit from them.

The system that the CMU and University of Minnesota research team created can cope with the lower signal quality that comes from using sensors placed on the scalp rather than implanted inside the body. Using a combination of new sensors and machine learning techniques, they were able to pick up signals originating deep within the brain, but without the kind of “noise” that usually plagues noninvasive recordings.
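The article does not spell out how the team’s decoder works, but the general idea of turning noisy, noninvasive brain signals into control commands can be sketched as a simple pipeline: filter the raw multichannel signal, extract features, and train a model to map those features to movement. The sketch below is purely illustrative; the sampling rate, channel count, frequency band, synthetic data, and ridge-regression decoder are all assumptions, not the researchers’ actual method.

```python
# Illustrative sketch only: a toy pipeline for decoding noisy, noninvasive
# brain signals into 2-D cursor velocity. Sampling rate, channel count,
# filter band and the ridge decoder are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import Ridge

FS = 250          # assumed sampling rate in Hz
N_CHANNELS = 64   # assumed number of scalp sensors

def bandpass(window, low=8.0, high=30.0, fs=FS, order=4):
    """Suppress slow drift and high-frequency noise outside an assumed 8-30 Hz band."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, window, axis=0)

# Simulated training data: 1-second windows of scalp signal paired with cursor velocity.
rng = np.random.default_rng(0)
n_windows, window_len = 500, FS
raw = rng.standard_normal((n_windows, window_len, N_CHANNELS))
cursor_velocity = rng.standard_normal((n_windows, 2))  # (vx, vy) targets

# Feature extraction: filter each window, then take log band power per channel.
filtered = np.stack([bandpass(w) for w in raw])
features = np.log(np.mean(filtered ** 2, axis=1))  # shape (n_windows, N_CHANNELS)

# A simple learned decoder maps features to cursor velocity.
decoder = Ridge(alpha=1.0).fit(features, cursor_velocity)
print("decoded (vx, vy):", decoder.predict(features[:1])[0])
```

In a real system the machine-learning step is far more sophisticated, but the structure is the same: clean the signal first, then let a trained model do the interpretation.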

Given that clinical studies will shortly begin, this ground-breaking finding may not be far from transforming the lives of actual patients.

Dealing with uncertainty: 

Like self-driving cars, surgical robots must learn to navigate surroundings that seem simple in theory but are incredibly challenging in practice. Real-world roads feature traffic, construction equipment, and pedestrians, all of which the car must learn to avoid because they don’t always appear on Google Maps.

Children’s movies are correct: we are all unique on the inside, even though human bodies look broadly similar to one another. Organs differ in size and shape, scar tissue varies in amount, and blood vessels and nerves sit in different places from person to person.

“There’s so much variance in individual patients,” says Barbara Goff, a gynecologic oncologist and chief surgeon at the University of Washington Medical Center in Seattle. “I believe that could be difficult.” She has been using laparoscopic surgical robots for more than ten years; these are the kind that translate the movements of the surgeon but do not move independently.

The movement of bodies adds another layer of complexity. Some robots already exhibit some autonomy. One of the most well-known examples is a machine with the (possibly a little on-the-nose) name ROBODOC, which is used in hip surgery to remove bone from around the hip socket. However, bone is quite simple to deal with and doesn’t move much once it is fixed into position. According to Aleks Attanasio, a research professional currently employed by Konica Minolta who wrote about surgical robots for the 2021 Annual Review of Control, Robotics, and Autonomous Systems, “Bones don’t bend.” And if they do, the issue becomes more serious.

Unfortunately, locking the rest of the body in place is more difficult. Even before a surgeon goes in and begins moving things around, muscles contract, stomachs gurgle, brains jiggle, and lungs expand and contract. Furthermore, whereas a human surgeon can clearly see and feel what they are doing, how is a robot to determine whether its knife is in the proper location or whether tissues have shifted?


One of the most promising solutions for such dynamic scenarios is to combine cameras with advanced tracking software.

For instance, in early 2022 Johns Hopkins University researchers used a machine dubbed the Smart Tissue Autonomous Robot (STAR for short) to perform the potentially very jiggly task of stitching two ends of a cut intestine back together in an anaesthetized pig.

To create markers the robot can follow, a human operator applies drops of fluorescent glue to the ends of the intestine (a bit like an actor wearing a motion-capture suit in a Hollywood movie). At the same time, a camera system uses a grid of light points projected onto the area to construct a 3-D representation of the tissue. Together, these technologies give the robot the ability to see what is in front of it.
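To give a feel for the marker-following part of such a vision system, here is a rough sketch of detecting bright marker blobs in a single camera frame with OpenCV. The threshold, minimum blob size, and synthetic test frame are assumptions for illustration; STAR’s actual imaging and structured-light reconstruction are far more sophisticated.

```python
# Illustrative sketch only: find bright fluorescent-marker blobs in one frame
# by thresholding and taking contour centroids. Threshold and size limits are
# assumed values, not parameters of the real STAR system.
import cv2
import numpy as np

def find_marker_centroids(frame_bgr, min_area=20, threshold=200):
    """Return (x, y) pixel centroids of bright marker blobs in one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # ignore small specks of noise
        m = cv2.moments(c)
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

# Example on a synthetic frame with two bright "markers".
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(frame, (100, 200), 8, (255, 255, 255), -1)
cv2.circle(frame, (400, 300), 8, (255, 255, 255), -1)
print(find_marker_centroids(frame))  # roughly [(100, 200), (400, 300)]
```

Running the same detection on every frame is what lets the software keep pace with tissue that is constantly moving.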

“What’s truly interesting about our vision system is that it not only allows us to reconstruct what that tissue looks like, but it also does so quickly enough that you can do it in real time,” says Justin Opfermann, an engineering PhD candidate at Hopkins and co-designer of the STAR system. “You can detect and follow anything that moves during the procedure.”

Using this visual data, the robot can then determine the best course of action, offering the human operator several options or checking in with them between sutures.
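That check-in pattern amounts to a human-in-the-loop control loop: the robot proposes the next step from its tracked view of the tissue, and a person approves, adjusts, or takes over. The sketch below illustrates the shape of such a loop in simplified form; the function names, the fixed suture spacing, and the always-approving operator are hypothetical stand-ins, not STAR’s actual interface.

```python
# Illustrative sketch only: a human-in-the-loop suturing loop in which the
# robot proposes the next suture from tracked tissue points and an operator
# approves or intervenes. Names and parameters are hypothetical.
from dataclasses import dataclass

@dataclass
class SuturePlan:
    x_mm: float
    y_mm: float
    depth_mm: float

def propose_next_suture(tracked_points, spacing_mm=3.0):
    """Naively propose the next suture a fixed distance along the incision."""
    last_x, last_y = tracked_points[-1]
    return SuturePlan(x_mm=last_x + spacing_mm, y_mm=last_y, depth_mm=2.0)

def run_sutures(tracked_points, n_sutures, operator_approves):
    """Place sutures, pausing for operator approval before each one."""
    executed = []
    for _ in range(n_sutures):
        plan = propose_next_suture(tracked_points)
        if operator_approves(plan):
            executed.append(plan)  # robot places the suture autonomously
            tracked_points.append((plan.x_mm, plan.y_mm))
        else:
            # Operator handles this suture manually, then autonomy resumes.
            tracked_points.append((tracked_points[-1][0] + 3.0, tracked_points[-1][1]))
    return executed

done = run_sutures([(0.0, 0.0)], n_sutures=5, operator_approves=lambda plan: True)
print(f"{len(done)} of 5 sutures placed autonomously")
```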

In tests, STAR performed admirably on its own, if not flawlessly. Overall, 83 per cent of the sutures could be completed automatically, while in the remaining 17 per cent of cases a human still had to intervene to make corrections.

That 83 per cent can undoubtedly be improved upon, according to Opfermann. Most of the problem, he says, was that the robot occasionally needed a person’s help to identify the correct angle at certain corners. Success rates in more recent, unpublished trials are now in the upper 90s. In the future, a human might simply need to approve the plan before it executes without further human participation.

 
