This New MedTech Innovation Creates Natural Motion Through Prediction
May 29, 2018

As technology improves applications across nearly every industry, one of the greatest gains for us, physically and mentally, will come from better healthcare and health monitoring devices. We already have Fitbit and Garmin to track our steps and hikes, but those devices will seem rudimentary compared to what the future will bring.
Biomedical engineering researchers at North Carolina State University and the University of North Carolina at Chapel Hill recently set out to create a prosthetic that lets its user do everything they could do with their original limb. In other words, the prosthesis needs to read the user's intended movements and translate them into action.
Data in Motion
He (Helen) Huang led the team that created a user-generic musculoskeletal computer model of a forearm, wrist, and hand. The group recorded the neuromuscular signals that six volunteers generated while moving their hands and arms, then fed that data to the computer model so it could learn which signals correspond to movements of the forearm, wrist, and hand.
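To make that "user-generic" idea concrete, here's a minimal sketch (our own illustration, not the team's code): recordings from all six volunteers are pooled, and a single mapping from signal features to joint angles is fit across them. The linear model and every name in it are assumptions for illustration only.

```python
import numpy as np

def fit_generic_model(sessions):
    """sessions: list of (signal_features, joint_angles) pairs, one per volunteer."""
    X = np.vstack([sig for sig, _ in sessions])   # (total_samples, n_features)
    Y = np.vstack([ang for _, ang in sessions])   # (total_samples, n_joints)
    # One least-squares fit shared across all volunteers -> "user-generic" weights.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

# Toy stand-ins for six volunteers' recordings.
rng = np.random.default_rng(0)
sessions = [(rng.normal(size=(200, 8)), rng.normal(size=(200, 3)))
            for _ in range(6)]
W = fit_generic_model(sessions)
print(W.shape)  # (8, 3): signal features -> three joint-angle outputs
```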
Huang explained the relationship between the model and the prosthetic: “The model takes the place of the muscles, joints, and bones, calculating the movements that would take place if the hand and wrist were still whole. It then conveys that data to the prosthetic wrist and hand, which perform the relevant movements in a coordinated way and in real time — more closely resembling fluid, natural motion.”
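As a rough illustration of the pipeline Huang describes, the sketch below reads the neuromuscular signals, lets the model work out the motion the intact limb would make, and passes that to the prosthesis in real time. The class and function names here are hypothetical placeholders, not a real device API.

```python
import time

class MusculoskeletalModel:
    """Stands in for the model of muscles, joints, and bones."""
    def predict_motion(self, sample):
        # The real model would solve for the movement the intact hand and
        # wrist would make; this stub just scales the input into fake
        # joint commands.
        return [0.1 * s for s in sample]

def control_loop(model, read_signals, send_to_prosthesis, hz=100, steps=300):
    period = 1.0 / hz
    for _ in range(steps):
        sample = read_signals()                # raw neuromuscular signals
        motion = model.predict_motion(sample)  # movement the whole limb would make
        send_to_prosthesis(motion)             # coordinated, real-time execution
        time.sleep(period)

# Demo run with fake signal input; a real system would stream from sensors.
control_loop(MusculoskeletalModel(),
             read_signals=lambda: [1.0, 0.5, -0.2],
             send_to_prosthesis=print,
             steps=3)
```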
The first rounds of testing were promising: even with minimal training, the model and prosthesis behaved as expected.
Man or Machine?
Unlike many previous prostheses built by other teams, the model doesn’t rely on machine learning. Machine learning is a convenient way to keep teaching a device to improve its results, but because an amputee’s nervous system still sends signals as if the limb were attached, and those signals change with posture and other conditions, this “pattern recognition” approach ends up consuming a lot of time and computing power.
“Pattern recognition control requires patients to go through a lengthy process of training their prosthesis. This process can be both tedious and time-consuming,” says Huang. “[E]very time you change your posture, your neuromuscular signals for generating the same hand/wrist motion change. So relying solely on machine learning means teaching the device to do the same thing multiple times; once for each different posture, once for when you are sweaty versus when you are not, and so on.”
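A quick back-of-the-envelope illustration, with entirely made-up counts, shows how fast that retraining burden multiplies:

```python
# Hypothetical numbers: a pattern-recognition controller has to see each
# motion under every posture and skin condition separately.
motions = 8          # e.g. grasp, pinch, wrist flexion, ...
postures = 5         # arm at the side, raised, extended, ...
conditions = 2       # dry vs. sweaty skin
reps_per_class = 20  # repetitions needed per combination

total_reps = motions * postures * conditions * reps_per_class
print(total_reps)    # 1600 labeled repetitions before the device is usable
```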
Pushing the Possibilities of Prostheses
The researchers published the findings from their study in IEEE Transactions on Neural Systems and Rehabilitation Engineering. Although they predict it will take several more years to perfect the model, their user-generic approach is a fresh idea with promising early results.
We always love learning about new MedTech developments, and it’s always a nice surprise when they come from outside the usual tech hubs like New York City or San Francisco. This prosthetic innovation, in particular, has the potential to improve quality of life for amputees everywhere!