2018 ASM / R-0041
Wiley-Blackwell Best Exhibit Award, Radiology
A six-year Australian and New Zealand experience of artificial intelligence techniques applied to MRI liver data – tricks and traps
Congress: 2018 ASM
Poster No.: R-0041
Type: Educational Exhibit
Keywords: Computer Applications-Detection, diagnosis, Neural networks, Computer applications, Image verification
Authors: M. Blake¹, T. Cooper², S. Khorshid¹, T. St Pierre¹; ¹WA/AU, ²NZ
DOI: 10.1594/ranzcr2018/R-0041

Background

The following terms are either used throughout this exhibit or are essential background for understanding the methodologies relating to image processing and computer-based artificial intelligence:

Machine Learning is the use of algorithms to allow computers to learn models from data without being explicitly programmed.

Support Vector Machines – use supervised learning methods (see below) for classification. An SVM is presented with examples of images that have been classified into one of two categories, and builds a model that predicts which category new images fall into. In part one of the Procedure Details section, we trained an SVM to categorise T1 VIBE images of the liver into high- or low-grade fibrosis based on training data from biopsies.
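The idea can be sketched with a linear SVM on synthetic feature vectors. Everything below (feature values, labels, cluster positions) is invented for illustration and is not our T1 VIBE data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 40 "low-grade" and 40 "high-grade" synthetic feature vectors (5 numbers each)
low = rng.normal(loc=0.0, scale=1.0, size=(40, 5))
high = rng.normal(loc=2.0, scale=1.0, size=(40, 5))
X = np.vstack([low, high])
y = np.array([0] * 40 + [1] * 40)  # 0 = low-grade, 1 = high-grade

# Train the SVM on the labelled examples
clf = SVC(kernel="linear").fit(X, y)

# Predict the category of a new, unseen feature vector
new_case = np.full((1, 5), 2.0)
print(clf.predict(new_case))  # -> [1], the high-grade category
```

With real images, each feature vector would be derived from the image itself (e.g. intensity or texture statistics) and the labels from biopsy results.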

Clustering – uses unsupervised learning methods (see below) to assign a set of observations to subsets (called clusters) so that observations within the same cluster are similar according to some pre-designated criterion or criteria, while observations drawn from different clusters are dissimilar.  We tried this approach to find features associated with high and low-grade fibrosis in MR liver images.
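As a minimal illustration, k-means clustering can partition unlabelled points into two groups by similarity alone. The two synthetic point clouds below stand in for image-derived features; no labels are supplied to the algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two well-separated synthetic "feature" clouds, with no labels attached
X = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
               rng.normal(3.0, 0.5, size=(50, 2))])

# k-means assigns each observation to one of 2 clusters by similarity
km = KMeans(n_clusters=2, n_init=10, random_state=1).fit(X)
print(km.labels_[:3], km.labels_[-3:])  # the two clouds receive different labels
```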

Artificial Neural Networks – are software structures that mimic aspects of biological neural networks. Layers of nodes are connected by “axons” with weightings. Data are input to the “top” layer and output is retrieved from the “bottom” layer. By training the network on multiple examples of inputs with known outputs, the weightings of the “axons” are adjusted so that the trained network can receive a new input dataset and predict its output from the learned weightings. (Further described in parts two and three of the Procedure Details section.)
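A toy forward pass shows how data flow through the weighted connections. The weights below are fixed by hand purely for illustration; training would adjust them:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 0.3])    # input ("top") layer: 3 nodes
W1 = np.array([[0.2, -0.5, 0.1],  # weighted "axons": input -> hidden (2 nodes)
               [0.7, 0.3, -0.4]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[0.6, -0.9]])      # weighted "axons": hidden -> output (1 node)
b2 = np.array([0.05])

hidden = sigmoid(W1 @ x + b1)     # hidden-layer activations
output = sigmoid(W2 @ hidden + b2)  # prediction from the "bottom" layer
print(output)
```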

Deep Learning – uses artificial neural networks with extra layers of nodes. Extra computing power is required for deep learning. We built a computer specifically for the task (described in part three of the Procedure Details section).  

Transfer Learning – is a method of training an artificial neural network for a particular task using training data from another task. For example, an artificial neural network could be trained to recognise different makes and models of cars from the thousands of examples on the internet. The trained network can then be further trained to recognise different makes and models of motorcycle with a much smaller number of samples but with a high level of success. This approach can be useful when you do not have a large dataset for training.
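The idea can be sketched loosely with scikit-learn's warm_start option, which reuses the weights learned on a large "source" task as the starting point for a small "target" task. All data below are synthetic, and real transfer learning typically reuses a large pretrained image network rather than a tiny classifier like this:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Large labelled "source" task: 400 examples
Xs = rng.normal(size=(400, 10))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(16,), warm_start=True,
                    max_iter=500, random_state=2)
net.fit(Xs, ys)  # "pre-training" on the plentiful source data

# Small related "target" task: only 20 examples
Xt = np.vstack([rng.normal(-1.0, 0.5, size=(10, 10)),
                rng.normal(1.0, 0.5, size=(10, 10))])
yt = np.array([0] * 10 + [1] * 10)
net.fit(Xt, yt)  # warm_start=True: fine-tune from the pre-trained weights
```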

Supervised learning [1] takes an input variable and an output variable and uses an algorithm to learn the mapping function from the input to the output. The aim is to approximate the mapping function so that when new input data are presented, the model can predict the output variable for those data. Learning stops when the algorithm achieves an acceptable level of performance. All the data are labelled, and the algorithm learns to predict the output from the input data.
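In miniature, supervised learning looks like this: labelled (input, output) pairs are used to approximate the mapping function, here the invented rule y = 2x + 1:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.arange(10).reshape(-1, 1)  # input variable
y = 2 * X.ravel() + 1             # labelled output variable (y = 2x + 1)

model = LinearRegression().fit(X, y)  # learn the mapping function

# Predict the output for new, unseen input data
print(model.predict([[25]]))  # -> [51.]
```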

Unsupervised learning [1] is used for input data where there is no corresponding output variable. The goal is to model the underlying structure or distribution of the data in order to learn more about it. The algorithm is left to its own devices to discover and present the interesting structure in the data. Unsupervised problems can be grouped into clustering problems, where you want to discover the inherent groupings in the data, and association problems, where you want to discover rules that describe large portions of the data. All data are unlabelled, and the algorithms learn the inherent structure from the input data.
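A simple sketch of structure discovery: principal component analysis models the dominant direction of variation in unlabelled data. The points below are synthetic and deliberately lie close to one hidden direction:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
t = rng.normal(size=200)
# Unlabelled 2-D points that vary mostly along one hidden direction
X = np.column_stack([t, 0.5 * t + 0.05 * rng.normal(size=200)])

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # the first component dominates
```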

Semi-supervised learning refers to situations where some data are labelled but most are unlabelled; a mixture of supervised and unsupervised techniques can then be used.
