Institut für Radiologie und Kinderradiologie
Charité – Universitätsmedizin Berlin
Campus Mitte
Luisenstr. 13, Floor 3, Room 03004
10117 Berlin
Telephone: +49 30 450 527 365
Email: nader.aldoj@charite.de
Deep learning of MRI for detection and grading of prostate cancer
PIs: Dewey, Schäffter, Kutyniok
Application Area: Cancer
Modality: MR
Background: Prostate cancer is the most common cancer among men worldwide. It can be diagnosed and graded in several different ways, e.g. by Gleason scoring of biopsy tissue or by MRI/ultrasound imaging, each with its own advantages and disadvantages. For instance, assessing severity and clinical significance via biopsy and Gleason score requires an invasive, painful procedure with a considerable risk of complications; moreover, the pathological assessment is highly subjective and prone to human error and misdiagnosis.
Aim: The central aim of this project is to provide a better way of assessing prostate cancer: to avoid the complications of biopsy and the subjectivity of manual assessment by replacing them with an automatic, non-invasive method for localizing and grading lesions in MRI images.
Methods: In recent years, deep learning has become a prominent and promising approach to the recognition and classification of natural objects. Convolutional neural networks, for instance, can build up hierarchies of increasingly complex features of the objects under consideration and fit a classification model that is sometimes able to detect patterns a human observer might miss.
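As a purely illustrative sketch of this hierarchical feature build-up, the following minimal PyTorch network stacks convolutional layers whose outputs become progressively more abstract before a final classification layer. All architectural choices here (channel counts, kernel sizes, two output classes, three stacked MRI channels) are assumptions for illustration, not the project's actual design.

# Minimal sketch of a small CNN classifier for 2D MRI lesion patches.
# Channel counts, kernel sizes and the two-class output are illustrative
# assumptions only.
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    def __init__(self, in_channels=3, num_classes=2):
        # in_channels=3 could correspond to stacked MRI sequences
        # (an assumption made for this sketch).
        super().__init__()
        self.features = nn.Sequential(
            # Early layers learn low-level features (edges, textures) ...
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            # ... deeper layers combine them into increasingly abstract,
            # lesion-level features (the hierarchical build-up described above).
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)        # (N, 64, 1, 1)
        x = torch.flatten(x, 1)     # (N, 64)
        return self.classifier(x)   # class logits, e.g. cancer vs. non-cancer

# Example: a batch of 4 patches of size 64x64 with 3 MRI channels.
model = LesionCNN()
logits = model(torch.randn(4, 3, 64, 64))   # shape (4, 2)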
The idea is to apply such a network to prostate MRI images, distinguishing cancerous from non-cancerous tissue as well as clinically significant from non-significant lesions. The network fits a hierarchical model that automatically extracts high-level features of the lesions and assigns a class according to the annotation maps provided beforehand for the training step. Once the model is well fitted, it can predict lesion localization and grading with an accuracy that depends on many factors, e.g. the number of training images, the heterogeneity of the lesions, and the quality of the annotation maps. Such a model, once achieved, saves time and effort and avoids the subjectivity of human examiners.
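The training step described above could look roughly like the sketch below: patches labelled from the expert annotation maps are fed to a network, which is fitted by minimizing a classification loss and then applied to unseen data. The random stand-in data, the tiny stand-in network and all hyperparameters are illustrative assumptions, not the project's actual pipeline.

# Minimal training/inference sketch, assuming lesion patches have already
# been extracted from the MRI volumes and labelled from the annotation maps
# (0 = non-significant, 1 = clinically significant). Data, network and
# hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Random tensors standing in for annotated training patches and labels.
patches = torch.randn(100, 3, 64, 64)      # 100 patches, 3 MRI channels
labels = torch.randint(0, 2, (100,))       # labels derived from annotation maps
loader = DataLoader(TensorDataset(patches, labels), batch_size=8, shuffle=True)

# Small stand-in CNN (a real model would be deeper, as sketched above).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):                    # epoch count is arbitrary here
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)      # fit predictions to the annotations
        loss.backward()
        optimizer.step()

# After training, the model assigns class probabilities to unseen patches;
# applying it across a whole volume would give a coarse localization map
# (one of several possible localization strategies).
with torch.no_grad():
    probs = torch.softmax(model(torch.randn(1, 3, 64, 64)), dim=1)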