Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
32c. Methods of Artificial Intelligence 3
Time:
Friday, 20/Sept/2024:
12:00pm - 1:30pm

Session Chair: André Stollenwerk
Session Chair: Thomas Seel
Location: V 47.03

Session Topics:
Methods of Artificial Intelligence

Presentations
12:00pm - 12:12pm
ID: 384
Conference Paper
Topics: Methods of Artificial Intelligence

Robustness of a DenseNet-121 for the Classification of ARDS in Chest X-Rays

Simon Fonck1,2, Sebastian Fritsch2,3,4, Alina Nguyen1, Stefan Kowalewski1, André Stollenwerk1,2

1Embedded Software (Informatik 11), RWTH Aachen University, Germany; 2Center for Advanced Simulation and Analytics (CASA), Forschungszentrum Jülich, Germany; 3Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany; 4Jülich Supercomputing Centre, Forschungszentrum Jülich, Germany

Research in the field of artificial intelligence (AI) in medicine is increasingly relying on algorithms based on deep learning (DL), especially for radiology. Despite producing promising results, DL models have a major drawback: their reliance on large training datasets. Especially in medicine, large, annotated datasets are hard to obtain, leading to low robustness and a performance loss when exposed to unseen, new data. To address this problem, our research evaluates how well data augmentation is able to expand the used dataset and thus improve a DL model. We employ 17 different augmentation methods to test the robustness of a DenseNet-121 trained to classify Acute Respiratory Distress Syndrome (ARDS) in chest X-rays. Our experiments show that while the model has low robustness for augmented test data when trained on unaugmented data, the general performance for ARDS classification can be improved by augmenting the training data. Overall, this demonstrates that data augmentation is beneficial in training AI models for ARDS classification in order to create more robust and generalizable models.

Fonck-Robustness of a DenseNet-121 for the Classification-384_a.pdf
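
As a rough illustration of the robustness test described in the abstract above, the following sketch (Python/PyTorch) compares a classifier's accuracy on clean versus augmented copies of the same test batch. The model head, the data shapes, and the three augmentations shown here are illustrative placeholders, not the paper's actual 17 methods or training setup.

```python
# Minimal sketch, assuming a DenseNet-121 fine-tuned for binary ARDS
# classification and a preprocessed test batch of chest X-rays of shape
# [N, 3, 224, 224] with labels of shape [N]. All names are hypothetical.
import torch
from torchvision import models, transforms

# A few example augmentations standing in for a larger pool of methods.
augmentations = {
    "rotation": transforms.RandomRotation(degrees=10),
    "blur": transforms.GaussianBlur(kernel_size=5),
    "contrast": transforms.ColorJitter(contrast=0.3),
}

model = models.densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, 2)  # ARDS vs. no ARDS
model.eval()

@torch.no_grad()
def robustness_scores(model, xray_batch, labels):
    """Accuracy on the clean batch vs. augmented copies of the same batch."""
    scores = {}
    clean_pred = model(xray_batch).argmax(dim=1)
    scores["clean"] = (clean_pred == labels).float().mean().item()
    for name, aug in augmentations.items():
        aug_pred = model(aug(xray_batch)).argmax(dim=1)
        scores[name] = (aug_pred == labels).float().mean().item()
    return scores
```

A large gap between the clean score and the augmented scores would indicate the kind of low robustness the abstract reports for models trained on unaugmented data.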


12:12pm - 12:24pm
ID: 323
Conference Paper
Topics: Methods of Artificial Intelligence

A New Receiver Design for Biomedical Magnetic Induction Tomography with a Deep Learning Method

Anna Hofmann, Tatiana Schledewitz, Dirk Rueter, Andreas Sauer

Hochschule Ruhr-West, Germany

Medical imaging is an essential aspect of modern medicine, providing a non-invasive technique for the diagnosis, monitoring and treatment of a wide range of conditions. However, there is a constant need for advancements to reduce radiation exposure, enhance patient comfort, and increase accessibility. As a cheap, contactless and non-hazardous imaging method, Magnetic Induction Tomography (MIT) could offer a new alternative to established imaging methods. First introduced in the early 1990s, MIT is still at the basic research stage, and its technical requirements are constantly evolving. The focus of this study is a recently developed planar setting that offers several advantages, particularly in the image reconstruction of voluminous bodies in a biomedical setting. Due to the novelty of this structure, it is necessary to fundamentally re-examine both the forward and inverse problem of MIT. Here, a deep neural network is used to compare an established receiver setting with a new receiver design to improve reconstruction quality. Additionally, a testing method for new receiver setups is introduced.

Hofmann-A New Receiver Design for Biomedical Magnetic Induction Tomography with a Deep-323_a.pdf
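
For readers unfamiliar with learned MIT reconstruction, the sketch below shows one minimal way a deep network can map a vector of receiver measurements to a conductivity image, so that two receiver designs can be compared under identical training conditions. The measurement count, grid size, and architecture are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch, assuming MIT receiver signals are flattened into a vector
# of length N_MEAS and the target is a coarse conductivity map of
# GRID x GRID pixels. All constants and names are hypothetical.
import torch
import torch.nn as nn

N_MEAS = 64   # hypothetical number of receiver measurements per frame
GRID = 32     # hypothetical reconstruction grid

class MITReconstructor(nn.Module):
    """Maps one frame of receiver measurements to a conductivity image."""
    def __init__(self, n_meas=N_MEAS, grid=GRID):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(n_meas, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, grid * grid),
        )

    def forward(self, x):
        return self.net(x).view(-1, 1, self.grid, self.grid)

# Training the same architecture once per receiver design and comparing the
# reconstruction error (e.g. MSE against simulated ground truth) gives a
# like-for-like measure of which receiver setup is easier to invert.
model_established = MITReconstructor()
model_new_design = MITReconstructor()
```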


12:24pm - 12:36pm
ID: 243
Conference Paper
Topics: Imaging Technologies and Analysis

Comparison of YOLO and transformer based tumor detection in cystoscopy

Thomas Eixelberger1, Philipp Maisch2, Sebastian Belle3, Maximilian Kriegmair4, Christian Bolenz2, Thomas Wittenberg1

1Friedrich-Alexander-Universität Erlangen-Nürnberg & Fraunhofer IIS, Erlangen, Germany; 2Department of Urology, University Hospital Ulm, Ulm, Germany; 3Universitäts-Medizin Mannheim, Mannheim, Germany; 4Urology Clinics München-Planegg, Germany

Background: Bladder cancer (BCa) is the second most common type of cancer in the genitourinary system and causes approximately 165,000 deaths each year. The diagnosis of BCa is primarily done through cystoscopy, which involves visually examining the bladder using an endoscope. Currently, white light cystoscopy is considered the most reliable method for diagnosis. However, it can be challenging to detect and diagnose flat, small, or poorly textured lesions. The study explores the performance of deep learning systems (YOLOv7-tiny, RT-DETR18), originally designed for detecting adenomas in colonoscopy images, when retrained and tested with cystoscopy images. The deep neural networks used in the study were pre-trained on 35,699 colonoscopy images (some from Mannheim), and both architectures achieved an F1 score of 0.91 on publicly available colonoscopy datasets. Results: When the adenoma-detection networks were tested with cystoscopy images from two sources (Ulm and Erlangen), F1 scores ranging from 0.58 to 0.81 were achieved. Subsequently, the networks were retrained using 12,066 cystoscopy images from Mannheim, resulting in improved F1 scores ranging from 0.77 to 0.85. Conclusion: It could be shown that transformer-based networks perform slightly better than YOLOv7-tiny networks, but both network types are feasible for lesion detection in the human bladder. Retraining the networks with additional cystoscopy data improved the performance of urinary lesion detection. This suggests that the domain shift can be bridged by including appropriate additional data.

Eixelberger-Comparison of YOLO and transformer based tumor detection-243_a.pdf
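
The F1 scores quoted above are detection F1 values; the sketch below shows a common way such a score is computed from predicted and ground-truth lesion boxes using IoU-based matching. The greedy matching rule and the 0.5 IoU threshold are assumptions for illustration, not necessarily the evaluation protocol used in the paper.

```python
# Minimal sketch, assuming `preds` and `gts` are per-image lists of boxes
# in [x1, y1, x2, y2] format. Names and thresholds are hypothetical.
def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def detection_f1(preds, gts, iou_thresh=0.5):
    """F1 over all images with greedy one-to-one matching of boxes."""
    tp = fp = fn = 0
    for p_boxes, g_boxes in zip(preds, gts):
        unmatched = list(g_boxes)
        for p in p_boxes:
            best = max(unmatched, key=lambda g: iou(p, g), default=None)
            if best is not None and iou(p, best) >= iou_thresh:
                tp += 1
                unmatched.remove(best)   # each ground-truth lesion counted once
            else:
                fp += 1                  # prediction with no matching lesion
        fn += len(unmatched)             # lesions missed by the detector
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    return 2 * precision * recall / (precision + recall + 1e-9)
```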

