
Deep learning and big data

Titel: Deep Learning and Big Data Mining in Vestibular Multi-modal Imaging and Spatio-temporal Sensor Data

Team members: Dr. rer. nat. Seyed-Ahmad Ahmadi; Gerome Vivar (doctoral candidate)

 

PI: Dr. rer. nat. Seyed-Ahmad Ahmadi

Email: aahmadi@med.lmu.de

PI personal website: http://campar.in.tum.de/Main/AhmadAhmadi

Open-source code and material: https://github.com/pydsgz/About

 

Research topics

  • Multimodal medical data science and machine/deep learning
  • Unsupervised and semi-supervised representation learning
  • Medical spatio-temporal signal modeling and analysis
  • Medical image segmentation and analysis

 

Abstract:

The nature of vestibular stimuli is multimodal. This shapes both the anatomical and the functional organization of the central vestibular system, in which multiple sensory inputs converge at all levels to produce a single exocentric percept, making the vestibular sense very distinct from other sensory modalities. Methodological approaches to the vestibular system therefore ideally need to take this multidimensional, multimodal nature into account.

This project deals with the translation of state-of-the-art data science and machine/deep learning methods into medical research and clinical routine. Advanced machine and deep learning methods for pattern recognition can cope with the diverse patient features contained in vestibular multi-modal imaging and sensor data. The overall approach is holistic and tries to incorporate as complete a picture of the patient examination as possible.

Where necessary, we develop modality-specific tools based on deep learning to make the assessment of patient features more robust and accurate than with conventional tools, e.g. DeepVOG for video-oculography. In domains where no established tools exist yet, we investigate deep learning techniques that automatically identify a small set of key characteristics that best describe patient behavior, e.g. gait characterization from low-cost video sensor data. In MR imaging, we develop the tools needed to quantify the structural and functional properties of the vestibular system, e.g. novel templates, atlases, and segmentation tools.

An overarching theme of this working group is the fusion of vestibular patient examination data, including patient history, video-oculography, posturography, gait examination and even MR imaging, into one holistic, multimodal patient representation. Using this representation, together with a large body of patient examination data from the DSGZ inpatient clinic, we work towards Clinical Decision Support Systems (CDSS), e.g. in the form of computer-aided diagnosis (CADx), as sketched below.
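To make the fusion idea concrete, the following minimal PyTorch sketch encodes each examination modality separately and concatenates the embeddings into one joint patient representation that feeds a diagnosis classifier. The module names, feature dimensions and the simple concatenation-based late fusion are illustrative assumptions only; they do not reproduce the group's actual models or data.

# Minimal sketch (PyTorch) of late fusion of multimodal patient features into a
# joint representation for computer-aided diagnosis (CADx). All names, feature
# dimensions and the concatenation-based fusion are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Maps one examination modality (e.g. posturography features) to an embedding."""
    def __init__(self, in_dim: int, emb_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, emb_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class MultimodalCADx(nn.Module):
    """Fuses per-modality embeddings and predicts a diagnosis class."""
    def __init__(self, modality_dims: dict, n_classes: int, emb_dim: int = 32):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {name: ModalityEncoder(dim, emb_dim) for name, dim in modality_dims.items()}
        )
        self.classifier = nn.Linear(emb_dim * len(modality_dims), n_classes)

    def forward(self, batch: dict) -> torch.Tensor:
        # Encode each modality separately, then concatenate (late fusion).
        embeddings = [self.encoders[name](batch[name]) for name in self.encoders]
        return self.classifier(torch.cat(embeddings, dim=-1))

# Example with random stand-in data; per-modality feature sizes are hypothetical.
modality_dims = {"history": 20, "vog": 16, "posturography": 12, "gait": 24}
model = MultimodalCADx(modality_dims, n_classes=5)
batch = {name: torch.randn(8, dim) for name, dim in modality_dims.items()}
logits = model(batch)  # shape: (8, 5)
print(logits.shape)

In practice, missing modalities and heterogeneous acquisition protocols make the fusion step considerably harder than this plain concatenation suggests, which is one motivation for studying dedicated representation-learning techniques.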


 