Imaging Data, Tools Developed for Heart Disease Research
By MedImaging International staff writers
Posted on 25 Nov 2014
A virtual machine has been developed to improve reproducibility and data availability for assessing and enhancing analyses of magnetic resonance imaging (MRI) myocardial perfusion data.
The project was described on November 11, 2014, in the open-access, open-data journal GigaScience. The study’s researchers, from the Universidad Politécnica de Madrid (Spain) and the US National Institutes of Health (Bethesda, MD, USA), provide an excellent example of open data sharing: an abundance of patient imaging data released to help build exactly these tools. To enable reproducible comparisons between new tools, the researchers and the journal are publishing the data packaged together with the tools, scripts, and software required to conduct the research. The package is available for download from GigaScience’s GigaDB database as a “virtual hard disk” that allows researchers to run the experiments directly themselves and to add their own annotations to the dataset.
The most common cause of heart attacks is coronary heart disease, and diagnosis is vital to beginning treatment that can prevent such events. One useful tool in the fight against this leading killer is MRI, which allows direct examination of blood flow to the myocardium of the heart. For this perfusion analysis technique to be fully effective, however, the patient’s breathing motion must be compensated for, which is done using complex image processing methods. There is therefore a need to improve these tools and algorithms, and the key to doing so is the availability of large, publicly available MRI datasets on which new methods can be tested, optimized, and developed.
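To illustrate the kind of motion compensation such perfusion pipelines perform, the sketch below registers each frame of a perfusion series to a reference frame using the open-source SimpleITK library. This is a minimal, generic illustration rather than the authors’ published software (which ships inside the GigaDB virtual machine); the file names, the choice of a mid-series reference frame, and the optimizer settings are all assumptions made for demonstration.

```python
import SimpleITK as sitk

# Load a hypothetical perfusion series ("perfusion_series.nii" is an
# assumed file name); frames are assumed stacked along the third axis.
series = sitk.ReadImage("perfusion_series.nii", sitk.sitkFloat32)
n_frames = series.GetSize()[2]

# Use a mid-series frame as the registration reference.
fixed = series[:, :, n_frames // 2]

registration = sitk.ImageRegistrationMethod()
registration.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
registration.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
registration.SetOptimizerScalesFromPhysicalShift()
registration.SetInterpolator(sitk.sitkLinear)

aligned = []
for i in range(n_frames):
    moving = series[:, :, i]
    # A rigid 2D transform (rotation + translation) is a simple first
    # approximation for in-plane breathing motion.
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler2DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    registration.SetInitialTransform(initial, inPlace=False)
    transform = registration.Execute(fixed, moving)
    aligned.append(sitk.Resample(moving, fixed, transform,
                                 sitk.sitkLinear, 0.0))

# Re-stack the motion-compensated frames into a single volume.
sitk.WriteImage(sitk.JoinSeries(aligned), "perfusion_aligned.nii")
```

A rigid transform is only the simplest reasonable model for in-plane breathing motion; published perfusion methods, including those benchmarked against this dataset, typically go further with non-rigid registration to handle elastic deformation of the heart.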
As one potential user of these resources, Prof. Alistair Young, technical director of the Auckland Magnetic Resonance Research Group (New Zealand), commented, “Very large amounts of medical imaging data are now becoming available through registries and large population studies. Well-validated, automated methods are required to derive maximum benefit from such resources. The paper by Wollny and Kellman exemplifies how data and algorithm sharing can advance the field by providing a platform by which existing methods can be tested and new methods validated against existing benchmarks. Such benchmarking datasets are essential to advance the field through objective metrics and standards.”
Having everything compiled in a virtual machine also made things easier during the scientific peer-review and publication process, because the settings, packages, and file locations were already set up in a working configuration. One of the people performing this testing process, Dr. Robert Davidson, a data scientist at GigaScience, stated, “Actually testing the code during review is sadly almost a novel concept and one that needs to roll out as a standard. But even more: if it’s easy for the reviewers, it’s easy for the community to use too.”
As well as being important for improving diagnosis of the number one cause of death worldwide, the work addresses the continuing upsurge in retractions of published scientific articles, which makes direct means of improving article reproducibility essential, both for confidence in current findings, on which future studies are built, and to prevent the public from losing trust in the research community they fund. According to the investigators, publishing a virtual machine as an interactive, executable publication provides the scientific community with an example and test case demonstrating a potential new type of scholarly output.
Related Links:
Universidad Politécnica de Madrid
US National Institutes of Health