Integration of Human Vision and Machine Perception to Forecast the User's Desired Mode of Movement Using a Deep Learning Technique
DOI: https://doi.org/10.46243/jst.2024.v9.i4.pp1-9-01

Keywords: deep learning, wearable robots, intent detection, machine perception, human vision

Abstract
Wearable robot control relies on anticipating the user's intended mode of locomotion to provide smooth transitions when traversing different terrains. Although machine perception has recently shown promise for detecting upcoming terrain along the travel path, current methods perceive only the environment and cannot recognize human intent, which is essential for coordinated wearable robot operation. The goal of this research is therefore to create a new system that accurately forecasts the user's mode of movement by combining machine perception, which captures environmental data, with human vision, which reflects user intent. Because the system draws on multimodal visual information, it can detect the user's intended path in a complex setting with varied terrain. Moreover, a fusion algorithm based on dynamic time warping was devised to align the temporal forecasts from the individual modalities and to produce flexible decisions on the timing of locomotion-mode transitions for wearable robot control. The system's performance was verified using experimental data collected from five participants, demonstrating high intent-detection accuracy (nearly 96% on average) and reliable decision-making on locomotion transitions with adjustable lead time. These encouraging results show that combining machine perception and human vision can support locomotion-intent detection for lower-limb wearable robots.
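To illustrate the alignment step that such a fusion algorithm relies on, the sketch below implements a basic dynamic time warping (DTW) routine in Python. The sequence names, the per-frame confidence scores, and the absolute-difference cost are illustrative assumptions for demonstration, not the paper's actual fusion rule.

```python
import numpy as np

def dtw_align(a, b):
    """Basic dynamic time warping between two 1-D sequences.

    Returns the minimal cumulative alignment cost and the warping
    path as a list of (index_in_a, index_in_b) pairs.
    Illustrative sketch only; the paper's fusion rule is not specified here.
    """
    n, m = len(a), len(b)
    # cost[i, j] = minimal cumulative cost to align a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local distance between frames
            cost[i, j] = d + min(cost[i - 1, j],      # step in a only
                                 cost[i, j - 1],      # step in b only
                                 cost[i - 1, j - 1])  # step in both
    # Backtrack from (n, m) to (0, 0) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Hypothetical per-frame confidence scores for one locomotion class
# (e.g. "stair ascent") from the two modalities, sampled at different rates.
human_vision = [0.1, 0.2, 0.6, 0.9, 0.95]
machine_perception = [0.1, 0.5, 0.85, 0.9]
total_cost, path = dtw_align(human_vision, machine_perception)
```

The warping path pairs each frame of one forecast with the best-matching frame of the other, which is what allows a fused decision on transition timing even when the two modalities anticipate the terrain change at different moments.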