Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
S5: MS13 - 1: Bioengineering in Orthopaedics: Current Trends, Challenges, and Clinical Relevance
Time:
Wednesday, 10/Sept/2025:
9:00am - 10:20am

Session Chair: Emiliano Schena
Session Chair: Arianna Carnevale
Location: Room CB26B


External Resource: https://iccb2025.org/programme/mini-symposia
Presentations
9:00am - 9:20am

Automated segmentation of long bones in ultrasound images: comparing segmentation performance of four state-of-the-art clinically used pretrained convolutional neural networks

I. Yang1, R. Buchanan2, K. Nazarpour3, A. H. Simpson4

1University of Western Ontario, Canada; 2University of Waterloo; 3University of Edinburgh; 4Queen Mary University of London

Bone fractures are common and a major cause of disability and death worldwide, and up to 10% of all fractures fail to heal normally (so-called “fracture non-unions”). Ultrasound imaging is safe, cost-effective, compatible with metal implants, and widely used to diagnose soft-tissue injuries. Importantly, it holds significant potential to revolutionize fracture care by enabling early and routine monitoring and assessment of fracture healing. For patients at high risk of non-union, this could mean earlier detection of poorly healing fractures and earlier clinical intervention, preventing an advanced clinical state and prolonged patient suffering, shortening waiting times for treatment, and reducing costs for patients and the healthcare system. However, ultrasound imaging is not routinely used in orthopaedic clinics: barriers include the difficulty of acquiring and interpreting ultrasound images, the lack of standardized scanning guidance, and unclear terminology for bone and for clinically important features of fracture healing. Previous efforts to use ultrasound imaging for musculoskeletal applications have shown large intra-examiner and inter-examiner reliability variance for semi-quantitative and continuous measures, and image processing was slow because segmentation was often manual (or semi-automated). There is therefore a strong need for automated approaches that reduce this variance and automate the segmentation procedure.

We have established terminology and reporting guidelines for bone healing on ultrasound imaging, including expert agreement on key bone-healing features from orthopaedic surgeons, radiographers, and medical physics experts, and we have developed a Python-based, multi-label classification machine learning (ML) algorithm that uses these consensus labels to aid clinical interpretation of ultrasound scans. We assessed the 2D US image label classification performance of four pre-trained convolutional neural networks (CNNs) that have demonstrated good bone segmentation ability on US imaging in recent studies (VGG-16, VGG-19, ResNet50, and DeepLabV3), using a dataset of 734 2D ultrasound images of two superficial and frequently fractured long bones: the tibia (3 longitudinal scan sweeps, 366 images) and the clavicle (3 longitudinal scan sweeps, 368 images), acquired with an ultrasound scanner (L15 HD3, Clarius, Vancouver, BC, Canada).

While all four tested algorithms achieved relatively high precision, recall/sensitivity, and F1 scores, DeepLabV3 demonstrated the best bone segmentation performance on 2D US imaging across all metrics, followed closely by ResNet50, then VGG-19 and VGG-16. Dataset augmentation produced notable improvements in mean class accuracy, particularly for VGG-16 and DeepLabV3, and marginally decreased performance for VGG-19 and ResNet50; overall, augmentation yielded only marginal improvements. This is the first study to compare four state-of-the-art, clinically used CNNs to identify the algorithm with the best bone segmentation performance on US imaging. The results are relevant to interdisciplinary biomedical engineers designing efficient, accurate, and robust automated ML-based segmentation models for clinical orthopaedic applications.
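As an illustrative sketch only (not the authors' pipeline), the per-image metrics used to compare such networks can be computed from predicted and ground-truth binary bone masks; for binary masks the F1 score coincides with the Dice coefficient. The toy masks below are hypothetical data.

import numpy as np

def segmentation_metrics(pred_mask, true_mask):
    """Precision, recall (sensitivity), and F1 for binary bone masks.
    For binary masks, F1 is identical to the Dice coefficient."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    tp = np.logical_and(pred, true).sum()   # bone pixels correctly predicted
    fp = np.logical_and(pred, ~true).sum()  # background predicted as bone
    fn = np.logical_and(~pred, true).sum()  # bone missed by the prediction
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Toy 3x3 masks (hypothetical data): 2 true positives, 1 false positive.
pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
true = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])
print(segmentation_metrics(pred, true))  # precision ~0.67, recall 1.0, F1 0.8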



9:20am - 9:40am

A 3D-printed wearable sensor based on fiber Bragg gratings for shoulder motion monitoring

A. Dimo1,2, U. G. Longo1,2, E. Schena1, D. Lo Presti1

1Università Campus Bio-Medico di Roma, Rome, Italy; 2Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy

INTRODUCTION

Shoulder injuries, particularly those affecting the rotator cuff (RC), are common and result from repetitive movements, excessive strain, or trauma, leading to pain, muscle weakness, and reduced joint mobility. Rehabilitation, especially post-surgical, is essential to restore the range of motion (ROM) and prevent complications. However, recovery assessment is often based on subjective medical evaluations, limiting accuracy. While motion capture (MOCAP) systems offer more objective assessments, they are expensive and not easily accessible for routine clinical monitoring. An accessible system is needed to track shoulder movements during rehabilitation, enhancing therapy programs with real-time feedback.

METHODS

This study was funded by the European Union - Next Generation EU - PNRR M6C2 - Investment 2.1 Enhancement and strengthening of biomedical research in the National Health Service (Project No. PNRR-MAD-2022-12376080 - CUP: F83C22002450001). The study developed a wearable sensor based on fiber Bragg gratings (FBG) and 3D printing to monitor shoulder movements objectively and continuously. Made of thermoplastic polyurethane (TPU), known for its flexibility and durability, the sensor incorporates an embedded FBG sensitive to strain and temperature variations. The TPU substrate design, shaped like a "dog bone," optimizes strain transmission in the sensor's central section.

Two stretchable anchoring bands, integrated during printing, ensure stability and ease of use.

Tests evaluated strain sensitivity and hysteresis error using a tensile testing machine, and temperature sensitivity using a laboratory oven. A preliminary test on a healthy volunteer assessed sensor performance in monitoring shoulder movements at different angles (0°-30°; 0°-60°; 0°-90°) and speeds (0°-90° in 3 s and 6 s).

RESULTS

The sensor exhibited an average strain sensitivity (Sε) of 1.45 nm/mε and a temperature sensitivity (ST) of 0.02 nm/°C, slightly higher than a bare FBG due to TPU's thermal expansion. However, the hysteresis error was high (51%), indicating a nonlinear response under dynamic conditions.

During preliminary tests on a healthy volunteer, the sensor detected shoulder movements in the sagittal plane, with output variations (ΔλB) proportional to ROM (0.22 nm at 30°, 0.46 nm at 60°, 0.77 nm at 90°) and speed (6-second cycles: mean ΔλB of 0.784 nm ± 0.028 nm; 3-second cycles: mean ΔλB of 0.762 nm ± 0.029 nm). These findings demonstrate the sensor's potential for real-time monitoring and clinical applications.
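As a minimal sketch of the underlying relation (an assumed post-processing step, not the authors' pipeline), the reported sensitivities let a measured Bragg-wavelength shift be converted to strain once the temperature contribution is compensated, via ΔλB = Sε·ε + ST·ΔT:

# Sensitivities reported in the abstract.
S_EPS = 1.45   # strain sensitivity, nm per millistrain
S_T = 0.02     # temperature sensitivity, nm/degC

def strain_from_shift(d_lambda_nm, d_temp_c=0.0):
    """Strain (millistrain) from a Bragg-wavelength shift, after removing the
    temperature contribution: d_lambda = S_EPS*strain + S_T*d_temp."""
    return (d_lambda_nm - S_T * d_temp_c) / S_EPS

# Example: the ~0.77 nm mean shift observed at 90 deg, assuming constant
# temperature, corresponds to roughly 0.53 millistrain.
print(round(strain_from_shift(0.77), 2))  # 0.53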

CONCLUSIONS

The developed sensor combines FBG and 3D printing to monitor shoulder movements during rehabilitation, offering high sensitivity and an ergonomic design. Preliminary results are promising but highlight limitations, such as high hysteresis error and TPU's thermal influence.

Future developments aim to improve linearity, reduce thermal effects, and lower hysteresis, as well as test the device on a larger sample. Additionally, sensor evaluations will be extended to shoulder movements in the frontal and scapular planes to provide a more comprehensive analysis in real-world conditions.

This sensor represents an affordable, portable, and non-invasive solution with the potential to revolutionize rotator cuff rehabilitation and other medical and sports applications, providing objective data to optimize therapy pathways and improve functional outcomes.



9:40am - 10:00am

An AI-integrated method for robot-assisted shoulder rehabilitation

A. Puglisi1, E. Schena2, A. Carnevale3, A. Scandurra1, G. Roccaforte1, M. V. Maiorana1, U. G. Longo3, G. Pioggia1

1Institute for Biomedical Research and Innovation (IRIB), National Research Council of Italy (CNR), Messina, Italy; 2Department of Engineering, Laboratory of Measurement and Biomedical Instrumentation, Università Campus Bio-Medico di Roma, Rome, Italy; 3Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy

Purpose
A recent pilot study (Raso et al., J Exp Orthop, 2024) highlighted the potential of the NAO humanoid robot in guiding shoulder rehabilitation exercises, demonstrating promising results in improving exercise consistency and patient engagement. However, that work relied heavily on an external optical motion‐capture laboratory to track shoulder kinematics, creating barriers to widespread clinical and home‐based adoption. Building on those findings, this new study explores an AI‐driven approach to embed motion‐tracking capability directly within the NAO robot using its integrated camera. Our overarching goal is to remove the reliance on specialized motion‐capture labs and thereby extend the possible reach of robotic rehabilitation—whether in smaller outpatient clinics or in patients’ homes.

Methods
We modified our existing NAO‐based rehabilitation protocol, previously validated with an external motion‐capture system, to incorporate a custom AI module for onboard motion capture. In this updated configuration, NAO uses its camera feed to perform live pose estimation, focusing on key landmarks around the shoulder complex. A convolutional neural network (CNN) processes these video frames in real time, estimating the user’s joint angles and range of motion (ROM) without external sensors. The robot then provides adaptive, step‐by‐step verbal and gestural cues to guide flexion–extension, external rotation, and internal rotation exercises at varying speeds. Preliminary data collection involved a small sample of healthy individuals replicating the earlier lab‐based exercises. By comparing NAO’s internal AI‐generated kinematics to “gold standard” motion‐capture metrics, we qualitatively assessed accuracy, ease of use, and potential for clinical integration.
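As an illustrative sketch under stated assumptions (the abstract does not detail the pose model or angle convention), a shoulder elevation angle can be estimated from 2D landmarks returned by an off-the-shelf pose estimator, taking the angle between the trunk (shoulder-to-hip) and upper-arm (shoulder-to-elbow) vectors. The keypoint coordinates below are hypothetical.

import numpy as np

def shoulder_angle_deg(hip_xy, shoulder_xy, elbow_xy):
    """Angle (degrees) between the trunk (shoulder->hip) and upper-arm
    (shoulder->elbow) vectors in image coordinates; 0 deg = arm at the side."""
    trunk = np.asarray(hip_xy, float) - np.asarray(shoulder_xy, float)
    arm = np.asarray(elbow_xy, float) - np.asarray(shoulder_xy, float)
    cos_a = np.dot(trunk, arm) / (np.linalg.norm(trunk) * np.linalg.norm(arm))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Hypothetical keypoints in pixels: the arm raised ~90 deg from the trunk.
print(shoulder_angle_deg(hip_xy=(100, 300), shoulder_xy=(100, 150), elbow_xy=(250, 150)))  # 90.0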

Results
Preliminary analyses suggest that NAO’s onboard AI system can capture and estimate shoulder ROM with moderate fidelity relative to the optical laboratory standard—particularly within mid‐range movement arcs crucial for early to mid‐stage rehabilitation. While greater deviations arose at the extremes of the ROM, the absolute mean errors observed in pilot sessions remain within clinically acceptable thresholds for guiding exercise performance. Participants reported a high level of confidence in NAO’s real‐time feedback and found the single‐unit robotic system easier to set up than traditional motion‐capture equipment. These early results mirror the positive engagement documented in our earlier study, indicating that the robot‐patient interaction is maintained without needing an external camera system.

Conclusions
By integrating an AI‐powered motion‐tracking module directly into NAO, we build on the success of our prior study and advance toward a more accessible form of robot‐assisted therapy for shoulder rehabilitation. Although further refinements and larger clinical trials are needed—especially to verify accuracy in individuals with shoulder pathologies—our findings underscore the potential of a “one‐system” solution to deliver robust rehabilitation guidance in non‐specialized settings. If ongoing development continues to validate these early results, clinicians could deploy NAO in small outpatient clinics or patients’ homes, reducing the need for fully equipped motion‐analysis laboratories. This shift toward accessible AI‐driven robotics could significantly expand the scope of shoulder rehabilitation, improving patient adherence and long‐term outcomes while decreasing the burden on clinical facilities.



 