Enhancing Fall Risk Assessment: Instrumenting Vision with Deep Learning During Walks

Introduction

Falls are common across many clinical populations, and routine risk assessment typically relies on visual observation of a person's gait. Such observational assessments are usually confined to laboratory settings and standardized walking protocols designed to identify gait deficits that may increase fall risk, yet subtle deficits can be difficult to detect by eye alone. Objective tools such as Inertial Measurement Units (IMUs) are therefore useful for capturing high-resolution, quantitative gait characteristics, enriching the information available for fall risk assessment. However, relying solely on IMU-based gait instrumentation has limitations, as it does not account for participants' behavior or environmental details (e.g., obstacles). Video-based eye-tracking devices could add further insight by recording head and eye movements, revealing how people visually navigate their environment. Manually reviewing such video to assess head and eye movements is, however, time-consuming and subjective, so automated methods are urgently needed but do not yet exist. This paper proposes a deep learning-based object detection algorithm, VARFA, to instrument vision and video data during walking, complementing instrumented gait analysis.

Source of the Paper

This study was conducted by Jason Moore, Robert Catena, Lisa Fournier, Pegah Jamali, Peter McMeekin, Samuel Stuart, Richard Walker, Thomas Salisbury, and Alan Godfrey. The research originates from Northumbria University, Washington State University, and South Tyneside NHS Foundation Trust, with Alan Godfrey as corresponding author. The paper was published in the “Journal of NeuroEngineering and Rehabilitation”, 2024, volume 21, article 106, and is open access under the Creative Commons Attribution 4.0 International License.

Research Method

This study recruited 20 healthy pregnant women as participants, using eye trackers to capture video of the laboratory environment during walking, combined with gait analysis. The proposed VARFA algorithm uses a YOLOv8 model trained on a novel dataset specific to the laboratory environment. Video captured by the eye trackers was labeled automatically, allowing visual attention and environmental details to be evaluated. VARFA achieved a high mean average precision of 0.93 at an IoU threshold of 0.5 (mAP50) and ran at real-time processing speeds, demonstrating its efficiency and effectiveness for real-world applications.
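As a rough illustration of this kind of pipeline (not the authors' released code), the sketch below uses the Ultralytics YOLOv8 Python API to fine-tune a pretrained detector on a custom laboratory dataset and then run it over eye-tracker footage; the dataset file lab_dataset.yaml and the video filename are hypothetical placeholders.

    # Minimal sketch, assuming the ultralytics package is installed and a
    # hypothetical lab_dataset.yaml describes the lab-specific object classes.
    from ultralytics import YOLO

    # Fine-tune a pretrained YOLOv8 checkpoint on the laboratory dataset.
    model = YOLO("yolov8n.pt")
    model.train(data="lab_dataset.yaml", epochs=100, imgsz=640)

    # Validation reports mAP50, the metric quoted in the study.
    metrics = model.val()
    print(f"mAP50: {metrics.box.map50:.2f}")

    # Run frame-by-frame inference on egocentric eye-tracker video
    # (hypothetical filename).
    for result in model.predict(source="walk_video.mp4", stream=True):
        print(result.boxes.cls, result.boxes.xyxy)  # detected classes and boxes

Streaming inference (stream=True) processes the video frame by frame and keeps memory use bounded on long walking recordings, one practical requirement for real-time operation.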

Research Results

VARFA detected and localized stationary objects (e.g., obstacles in the walking path) with a mAP50 of 0.93. Similarly, a U-Net-based walking path segmentation model achieved a strong Intersection over Union (IoU) of 0.82, indicating close alignment between the predicted and actual walking paths.
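For context on the segmentation metric, IoU is the overlap between the predicted and ground-truth masks divided by their union; the toy NumPy sketch below (not the study's data or code) shows the computation on two small binary path masks.

    import numpy as np

    def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
        """Intersection over Union for binary segmentation masks."""
        pred = pred_mask.astype(bool)
        true = true_mask.astype(bool)
        intersection = np.logical_and(pred, true).sum()
        union = np.logical_or(pred, true).sum()
        return float(intersection) / float(union) if union > 0 else 1.0

    # Toy 4x4 "walking path" masks, purely for illustration.
    pred = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 1, 1, 0]])
    true = np.array([[0, 1, 1, 1],
                     [0, 1, 1, 1],
                     [0, 1, 1, 0],
                     [0, 1, 1, 0]])
    print(iou(pred, true))  # 0.8 for this toy pair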

Research Conclusion

Instrumented vision analysis improved the efficiency and accuracy of fall risk assessment by evaluating how visual attention is distributed during navigation (i.e., where and when people attend to information), thereby broadening the scope of instrumentation in this field. Applying VARFA to instrumented vision promises to complement behavioral and environmental data in gait tasks, better informing fall risk assessment.

Research Highlights

The highlights of this study include the development of a novel instrumented method that automates the analysis of visual attention during walking. Validation of the VARFA algorithm in a laboratory environment demonstrated its efficiency in reviewing video and its ability to capture fine detail, which is significant for assessing fall risk and improving recovery approaches. The research also points to new directions for applying instrumented vision in everyday life to assess fall risk in older adults and people with mobility impairments.