Radiology is integral to cancer care. Compared with molecular assays, imaging has distinct advantages: as a noninvasive tool, it can assess the entirety of a tumor unbiased by sampling error, and it is routinely acquired at multiple time points in oncological practice. Imaging data can also be digitally post-processed for quantitative assessment. The ever-increasing application of artificial intelligence (AI) to clinical imaging is challenging radiology to become a discipline with competence in data science, which plays an important role in modern oncology. Beyond streamlining certain clinical tasks, the power of AI lies in its ability to reveal previously undetected or even imperceptible radiographic patterns that may be difficult for the human visual system to ascertain. Here, we provide a narrative review of emerging AI applications relevant to the oncological imaging spectrum and elaborate on emerging paradigms and opportunities. We envision that these technical advances will change radiology in the coming years, leading to the optimization of imaging acquisition and the discovery of clinically relevant biomarkers for cancer diagnosis, staging, and treatment monitoring. Together, they pave the road for future clinical translation in precision oncology.
Fast-emerging imaging technology and analytic tools allow radiology to play an increasingly important role in cancer screening, diagnosis, staging, response assessment, and prognosis. In contrast to invasive histopathological and molecular approaches, which can be biased by intratumor heterogeneity, imaging offers a unique path to a holistic and dynamic view of disease at the whole-organ or whole-patient level, moving the assessment of cancer patients toward personalized oncology.
For a decade, we have witnessed the proliferation of radiomics in oncological imaging. Radiomics is “the conversion of images to higher dimensional data and subsequent mining of these data for improved decision support” [
]. Like the digital revolution that reshaped the rest of our lives, radiomics has the potential to transform radiology. Although quantitative image analysis existed before radiomics, it was carried out sporadically across different clinical applications, usually with a small number of manually processed imaging features. By contrast, radiomics remolds image analysis by introducing a more robust and universal framework that systematically extracts hundreds or thousands of features describing tumor shape, intensity, and texture for prediction.
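As a toy illustration of what "features describing tumor shape, intensity, and texture" means in practice, the sketch below computes a handful of first-order radiomic features from a synthetic lesion. The phantom and feature names are illustrative assumptions; real pipelines typically use dedicated packages such as PyRadiomics, which extract hundreds of standardized features.

```python
import numpy as np

def first_order_features(image, mask):
    """Toy first-order radiomic features from voxels inside a tumor mask."""
    vals = image[mask > 0].astype(float)
    hist, _ = np.histogram(vals, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "volume_voxels": int(mask.sum()),                 # simple shape feature
        "mean": float(vals.mean()),                       # intensity
        "std": float(vals.std()),
        "skewness": float(((vals - vals.mean()) ** 3).mean()
                          / (vals.std() ** 3 + 1e-9)),
        "entropy": float(-(p * np.log2(p)).sum()),        # histogram "texture"
    }

# Synthetic example: a bright spherical "lesion" in a noisy background
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (32, 32, 32))
zz, yy, xx = np.mgrid[:32, :32, :32]
mask = ((zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2) <= 8 ** 2
img[mask] += 5.0

feats = first_order_features(img, mask)
print(feats["volume_voxels"], round(feats["mean"], 2))
```

In a radiomics workflow, a vector of such features per lesion becomes the input to a downstream prediction model.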
More recently, artificial intelligence (AI), especially deep learning, has offered an unprecedented way to interrogate informative imaging patterns beyond radiomics. Distinct from radiomics, which relies heavily on empirical knowledge, deep learning provides an end-to-end approach that automatically learns by correlating raw data with ground truth, eventually acquiring sufficiently precise capability to help solve clinical challenges at scale.
In this review, we focus on emerging applications of AI to empower oncologic imaging, ranging from image acquisition to cancer screening, treatment planning, and response monitoring (Fig. 1). An appendix of terminology and concepts is included for readers unfamiliar with AI. We also provide an outlook on the challenges, new paradigms, and future directions of AI application in this discipline.
2. Imaging acquisition optimization
Patients with cancer undergo frequent imaging, subjecting them to cumulative contrast doses, radiation, and potentially lengthy exams (if undergoing MRI). Deep neural networks can efficiently map images from one high-dimensional data space to another. This enables various novel applications in the areas of CT dose reduction, faster MRI acquisition, and reduced contrast/radiotracer dosing, all benefits that have the potential to accrue to oncology patients.
2.1 CT dose reduction
The US’s annual per capita radiation dose has doubled over the past 15 years, primarily due to increased CT imaging [
]. Since image quality and signal-to-noise ratio fall as radiation dose is reduced, the dose per examination cannot be lowered arbitrarily. Although several sophisticated techniques, such as iterative reconstruction, have been developed to preserve image quality while reducing radiation exposure, radiation dose remains a concern in young (e.g., pediatric) patients and/or patients undergoing multiple serial examinations (e.g., cancer survivors on surveillance).
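The dose–noise trade-off follows from photon counting statistics: detected photons are approximately Poisson distributed, so relative noise grows as the inverse square root of dose. A minimal simulation (the photon count is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(42)
FULL_DOSE_PHOTONS = 1000.0  # mean detected photons per pixel at full dose (arbitrary)

def relative_noise(dose_fraction, n=200_000):
    """Relative noise (std/mean) of Poisson photon counts at a fraction of full dose."""
    counts = rng.poisson(FULL_DOSE_PHOTONS * dose_fraction, size=n)
    return counts.std() / counts.mean()

full = relative_noise(1.0)
half = relative_noise(0.5)
print(round(half / full, 2))  # ~1.41: halving dose raises noise by about sqrt(2)
```

This is the noise penalty that iterative and deep learning reconstruction methods attempt to claw back at reduced dose.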
Deep neural networks can map data from one high-dimensional data space to another, e.g., transfer a CT image from a low-dose/high-noise space to a high-dose/low-noise representation. Hence, novel techniques based on deep learning reconstruction (DLR) are currently being developed that have the potential to significantly reduce radiation dose. Two DLR solutions received FDA clearance and became clinically available in 2019 [
]. DLR-based reconstruction methods have resulted in lower radiation doses and/or improved image quality while offering reasonably short reconstruction times. Recently, a pilot DLR study reported volumetric tomographic imaging generated from ultrasparse data sampling (i.e., a single projection) and a patient-specific prior [
], which can further reduce the radiation dose if validated.
However, as with any new technique, more research is needed to prove its clinical usefulness, safety, reproducibility, and reliability. Larger and more diverse training and validation datasets are needed for DLR to improve and to validate its generalizability.
2.2 Optimization of MRI acquisition
Magnetic resonance imaging (MRI) has a crucial role in oncologic imaging. It is a problem-solving tool for lesion characterization, enables local assessment and tumor staging, and, with the advent of whole-body MRI, has the potential to become a staging, therapy response assessment, and surveillance tool [
]. One of the most challenging issues for patients is the long scan time, which can introduce motion artifacts and increases cost and discomfort. Recent developments in AI may help to address these issues. For example, deep learning-based techniques have been developed to accelerate MRI scan times by means of under-sampling; these can be classified into several groups of acceleration techniques: image-based reconstruction, k-space-based reconstruction, adversarial networks, and super-resolution.
K-space-based reconstruction techniques such as the robust artificial-neural-network for k-space interpolation (RAKI) are applied directly to k-space data rather than image data [
]. They have been shown to outperform traditional parallel imaging techniques at acceleration factors of 2–4. Methods that use adversarial networks are being developed to compensate for loss functions in convolutional neural networks (CNNs), such as pixel-wise loss, that make images look over-smoothed. However, adversarial networks are notoriously difficult to train and prone to hallucinating realistic-looking imaging features; hence, they need to be carefully evaluated for every clinical indication. Super-resolution is a deep learning technique that predicts high-resolution images from low-resolution ones; the idea is to accelerate acquisition by obtaining low-resolution images and generating high-resolution images with DL algorithms. A particularly successful image-based reconstruction method is the variational network (VN), which has demonstrated successful image reconstruction at an acceleration factor of four [
]. Though promising, these techniques remain a subject of active research.
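To make the under-sampling idea concrete, the sketch below simulates skipping phase-encode lines in k-space and performing a naive zero-filled reconstruction; the resulting aliasing error is what DL reconstruction methods are trained to remove. The phantom and sampling pattern are illustrative assumptions, not any specific vendor's scheme.

```python
import numpy as np

def zero_filled_recon(image, keep_every=2):
    """Naive reconstruction from under-sampled k-space (no learned prior).

    Skipping phase-encode lines shortens acquisition by roughly `keep_every`x
    but introduces aliasing artifacts that a DL model would learn to remove.
    """
    k = np.fft.fftshift(np.fft.fft2(image))
    masked = np.zeros_like(k)
    masked[::keep_every, :] = k[::keep_every, :]  # keep every n-th k-space line
    return np.abs(np.fft.ifft2(np.fft.ifftshift(masked)))

# Synthetic "anatomy": a square phantom on a 64x64 grid
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0

recon = zero_filled_recon(img, keep_every=4)
err = np.abs(recon - img).mean()
print(round(err, 3))  # nonzero: aliasing from 4x under-sampling
```

Image-based, k-space-based, and adversarial methods differ mainly in where (image domain vs. k-space) and how (which loss) they learn to undo this corruption.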
Operationally, challenges exist in dealing with a heterogeneous fleet of scanners that may be at different stages of their life cycles. Image quality may vary depending on brand, capabilities, and whether the scanner is 1.5 T or 3 T. Arshad et al. demonstrated that these problems can be addressed with transfer learning techniques, showing improved generalizability for images acquired on scanners with different magnetic field strengths, MR images of different anatomies, and MR images under-sampled by different acceleration factors [
]. This can be particularly valuable in oncologic patients, where comparison with prior imaging and consistent protocols are important for restaging and post-treatment scans. When deploying a growing number of clinical AI algorithms, practices need to ensure that the input data (images) meet the minimum technical specifications set by the AI manufacturer; only then is there a reasonable chance that the AI will perform in real-world practice as it did in the standalone performance testing conducted before FDA clearance. This can be challenging in practices that manage multiple generations of scanners and diverse imaging protocols, and it requires attention to achieve reliable results; the issue is not specific to MR but extends to other modalities such as CT and PET/CT.
2.3 Reduction of contrast
Patients with cancer undergoing nephrotoxic chemotherapies often present to their imaging appointments with impaired renal function. Although there is no reliable predictor for the development of contrast-induced acute kidney injury, contrast volume has been included in some risk stratification systems [
2.4 Cancer screening
Imaging plays a key role in the early detection of colon, breast, and lung cancer. Early detection of cancer in asymptomatic individuals is associated with a reduction in mortality. Historically, cancer screening strategies have been controversial due to overall costs and the potential for overtreatment [
]. Additionally, health disparities related to under-screened populations contribute to differences in patient outcomes. AI has the potential to address these issues by decreasing costs, improving access to screening, and improving the timeliness of results.
Stand-alone deep learning algorithms have been shown to be effective in screening for breast cancer, with performance noninferior to the average of 101 radiologists [
]. Although many studies have addressed cancer detection in mammography, there is limited evidence for AI in real-world screening settings. In Europe, double reading of mammograms is the standard of care; a population-based screening study showed promise in cancer detection using commercially available AI systems compared with double-reading consensus [
Precise AI screening algorithms have the potential to optimize screening strategies at the individual patient level. Yala et al. developed a reinforcement learning algorithm to predict follow-up imaging recommendations from an individualized patient risk assessment; the model was more efficient than annual screening, achieving earlier detection per screening cost [
]. By improving the precision of screening strategies, AI has the potential to decrease overall healthcare costs.
Lung cancer screening is a two-step process comprising nodule detection and malignancy risk assessment. Numerous AI algorithms created for nodule detection have been shown to be slightly inferior or equivalent to radiologists, at the cost of increased false-positive rates [
]. However, limitations of the study included the use of overlapping datasets to train and test the model. In addition, the model was not trained on the development of cancer but was instead compared with Lung-RADS assessment. Trojanovski et al. used a two-stage framework to detect and assess the malignancy risk of pulmonary nodules, with performance comparable to a six-radiologist panel [
], which is particularly important given the updated lung cancer screening recommendations released by the US Preventive Services Task Force (USPSTF) in 2021, which expanded eligibility by lowering the screening age from 55 to 50 years and the smoking history threshold from 30 to 20 pack-years. In another study of the NLST and the Pan-Canadian Early Detection of Lung Cancer study, Stephen Lam’s team developed a deep learning model that can accurately predict the risk of suspicious lung nodules progressing to lung cancer within a 3-year period based on the radiologist’s CT report and clinical information [
2.5 Staging and treatment planning
Diagnostic imaging has traditionally played a central role in cancer staging by defining the extent of the primary tumor and identifying local and distant metastases to determine the best treatment plan. Anatomic imaging can assist surgeons in surgical planning and allow radiation oncologists to define radiation fields. Some of the associated image processing tasks are time consuming, tedious, and prone to error. AI has the potential to help with these tasks and even to support referring providers’ use of imaging in pretreatment planning. In addition, some imaging assessments remain subjective, with variability in reads related to cancer detection; AI can help to address this variability and elevate the level of care in under-resourced areas where subspecialized radiologists may not be available.
In renal tumors, the 3D post-processing software used to accurately evaluate anatomy and lesion volumes can be time consuming. CNNs can streamline this process and improve the accuracy of determining the volumes of kidneys and renal tumors, important parameters that surgeons use to evaluate patients for nephron-sparing interventions [
Detection of brain metastases for radiation planning can be a tedious task, particularly now that stereotactic radiosurgery of individual metastases has become the preferred approach over whole-brain radiation [
]. Furthermore, patients are more frequently undergoing multiple courses of stereotactic radiosurgery, which adds to the complexity of reading follow-up studies, with the need to differentiate between new and treated lesions [
]. Although not yet adopted into clinical radiology workflows, CNNs have the potential to facilitate the detection and tracking of brain metastases. In addition, the CNN architecture can facilitate auto-segmentation for radiation oncologists. As in many other body regions, various deep learning algorithms have already achieved human-level performance in segmenting organs or cancerous lesions, which may ultimately aid focal therapies. This can dramatically reduce the time a radiation oncologist spends manually contouring a patient study, increase contour consistency, and improve accuracy [
In the past decade, MRI has gained a crucial role in the diagnosis and management of men with suspected prostate cancer owing to improved cancer detection. However, the interpretation of MRI remains highly dependent on the expertise of the radiologist; as a result, there is significant variation in the accuracy of cancer detection across institutions. The use of AI in prostate MRI may help to improve the accuracy of cancer detection, especially in low-volume/community practice settings, and reduce inter-observer variability. So far, AI has shown potential in particular for challenging transition zone lesions [
Beyond the traditional tasks of radiologists, radiomic features coupled with AI can extract and analyze quantitative data to provide tumor characterizations that guide patient treatment, including predicting histologic and molecular subtypes. For example, ML radiomics models have shown promise in differentiating small cell lung cancer from other lung lesions on CT for pulmonary nodules at least 1 cm in size [
]. Recently, Ma et al. used ML to differentiate between breast cancer molecular subtypes based on mammography and ultrasound. Both clinical data and imaging signs based on the BI-RADS lexicon served as inputs for the ML models [
]. Although genomic biomarkers are commonly used in oncology to determine the best treatment pathways, they have limitations: they require invasive biopsy, and some biopsies may not be technically feasible. Moreover, assembling large datasets with tissue diagnosis as the gold standard can be a challenge when developing biomarkers linked directly to known molecular biomarkers. Transfer learning methods (TLM) can be used to overcome limited datasets. Liu et al. used deep CNNs and transfer learning to predict Ki-67 status, a biomarker used to determine the use of neoadjuvant chemotherapy in patients with breast cancer, from multiparametric MRI [
]. Additionally, core biopsy specimens may not be representative of intratumoral heterogeneity. Thus, integrating imaging and molecular biomarkers can potentially offer comprehensive and deep profiling of individual cancer patients [
]. In the future, these features may aid complex clinical decision making as imaging biomarkers. A prognostic biomarker in oncology is one that helps to identify the likelihood of disease recurrence or progression. In breast cancer, multiparametric breast MRI paired with ML has been used to generate a predictive model for prognostic factors [
]. In head and neck cancers, predicting disease progression is helpful for directing intensified therapeutic strategies to high-risk individuals. Patients who are at low risk of progression based on a radiomics model may benefit from de-intensified therapies, thereby reducing morbidity such as radiation-induced injury to surrounding tissues [
Biomarkers have also been identified to predict response to specific therapies. In lung cancer, ML can predict epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma viral oncogene homologue (KRAS) mutations in non-small cell lung cancer. Cancers with EGFR mutations show higher sensitivity to gefitinib and erlotinib, whereas those with KRAS mutations are prone to drug resistance [
]. A recent radiomic study investigated the predictive value of a lesion-wise PET radiomics model in lymphoma patients given ibrutinib; the model outperformed conventional PET metrics (e.g., SUVmax, MTV, TLG) for response prediction [
]. Predictive biomarkers in head and neck cancers remain limited. ML-derived biomarkers developed from MRI features have shown good performance in predicting disease progression in nasopharyngeal carcinoma and HPV-associated squamous cell carcinoma [
Immunotherapy is a breakthrough in cancer treatment; however, only a subset of patients receives clinical benefit. Immunotherapy is expensive and carries a potential for toxicity, so stratifying patients prior to therapy may be beneficial. Expression of programmed death-ligand 1 (PD-L1) is an important predictive biomarker for determining which patients should receive immunotherapy, but it has not been sufficiently sensitive or specific [
], and it requires invasive biopsies. AI may aid the development of imaging biomarkers that are more specific and less invasive. Radiomics based on extraction of CT features has been shown to predict response to immunotherapy in solid tumors including non-small cell lung cancer and melanoma [
]. The development of dual-energy CT (DECT), which has been shown to outperform single-energy CT (SECT) in visualizing biological processes, expands the potential for extracting radiomic features in oncologic imaging. ML using DECT-specific parameters has been able to predict response in patients with metastatic melanoma prior to the initiation of immunotherapy [
More broadly, Wu et al. defined radiological phenotypes based on tumor morphology and spatial heterogeneity across imaging modalities, such as CT and MRI, and cancer types, including lung, breast, and brain malignancies [
]. This work provides proof of principle for a pan-cancer radiomics classification scheme, defining imaging biomarkers that span tumor types and imaging modalities to determine prognosis and predict response to immunotherapies [
2.6 Treatment response assessment
Tumor response assessment based on standardized measurements using criteria such as the Response Evaluation Criteria in Solid Tumors (RECIST) and the Response Assessment in Neuro-Oncology (RANO) criteria has become a defined endpoint for many clinical trials. These criteria are also often the basis of clinical decisions. Tumor measurements can be laborious and prone to intra- and inter-reader variability [
]. In addition, linear measurements of tumor size do not always capture the true extent of disease, or changes in size, given the varied morphologies of tumors. Volumetric measurement would be ideal, but manual segmentation is time consuming and prone to error. AI can facilitate the assessment of volumetric changes in tumors.
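The volumetric measurement itself is simple arithmetic once a segmentation mask exists; the part AI addresses is producing the mask. A minimal sketch, assuming a binary mask and known voxel spacing (the synthetic spherical lesion is illustrative):

```python
import numpy as np

def tumor_volume_ml(mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Tumor volume in millilitres from a binary segmentation mask."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Synthetic mask: a sphere of radius 15 mm on a 1 mm isotropic grid
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) <= 15 ** 2

vol = tumor_volume_ml(mask)
print(round(vol, 1))  # close to (4/3)*pi*1.5^3 ≈ 14.1 mL
```

The same mask can be reused across time points, so volumetric change tracks response without the reader variability of repeated linear measurements.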
In pleural tumors, assessing change in tumor size can be challenging. Deep learning CNNs have shown promise in segmenting and calculating tumor volume in malignant pleural mesothelioma, with the ability to accurately segment tumor volumes as low as ∼100 cm³ [
]. Similarly, in the brain, assessment of tumor size can be challenging using the bi-dimensional measurements of RANO. Segmentation of gliomas is complex, with multiple components including enhancing tumor, necrotic core, and surrounding edema. Much work has been done on 3D segmentation of gliomas for automated disease burden quantification [
]. The Brain Tumor Segmentation Challenge (BraTS), which pooled training datasets from 19 institutions, has enabled the development of numerous DL algorithms. While many segmentation algorithms have been developed, their integration into clinical workflows remains a challenge. More recently, Lotan et al. developed a DL-based clinical workflow for segmentation of gliomas preoperatively and postoperatively [
]. However, there has not been widespread clinical adoption of these AI algorithms.
While volumetric assessment of tumors has been the basis for assessing tumor response, it has been recognized that functional and molecular imaging features may be more precise markers of tumor response [
Integrating tumor and nodal imaging characteristics at baseline and mid-treatment computed tomography scans to predict distant metastasis in oropharyngeal cancer treated with concurrent chemoradiotherapy. Int. J. Radiat. Oncol. Biol. Phys. 2019; 104: 942-952
]. Habitat imaging has also been proposed, explicitly segmenting whole tumors into intrinsic subregions with similar radiographic patterns; this approach has been shown to be an independent prognostic risk factor, beyond conventional risk predictors, for breast cancer patients after neoadjuvant chemotherapy [
] (Fig. 2). Habitat imaging analysis has also revealed novel spatiotemporal response patterns induced by radiotherapy in the primary tumor and nodal regions on baseline and mid-treatment PET/CT scans in head and neck cancers [
]. Further, this quantitative image analysis can help refine the delivery of radiation therapy to different parts of the tumor. Habitat imaging, which combines multiparametric MRI or PET/CT to establish quantitative imaging signatures of tumors, can help characterize the spatial distribution of tumor subregions. This could allow targeting of aggressive disease with a radiation boost to improve local control and reduce mortality.
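Habitat analysis typically clusters voxel-level multiparametric features into subregions. The sketch below uses plain k-means on two synthetic feature channels; the feature values, cluster count, and deterministic initialization are illustrative assumptions, not a published pipeline.

```python
import numpy as np

def kmeans_habitats(features, k=2, iters=20):
    """Partition voxel-level feature vectors into k 'habitats' with plain k-means."""
    # Simple deterministic init: spread starting centers across the data order
    idx = np.linspace(0, len(features) - 1, k).astype(int)
    centers = features[idx].astype(float)
    for _ in range(iters):
        # Assign each voxel to its nearest center, then update the centers
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Two synthetic voxel populations with (hypothetical) paired imaging features,
# e.g. a hypermetabolic "core" versus a low-uptake "rim"
rng = np.random.default_rng(1)
core = rng.normal([1.0, 8.0], 0.3, (200, 2))
rim = rng.normal([2.5, 2.0], 0.3, (200, 2))
feats = np.vstack([core, rim])

labels = kmeans_habitats(feats, k=2)
print(len(set(labels.tolist())))  # two habitats recovered
```

Mapping the resulting labels back to voxel coordinates yields the spatial subregion map that habitat studies then relate to outcome or use for radiation boost planning.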
2.7 Current limitations and future directions
Despite a growing body of evidence supporting the future utility of AI in oncologic imaging and a growing number of oncology-related FDA-approved AI algorithms (Fig. 3), clinical applications and adoption have been limited [
]. Access to large and diverse data remains a critical bottleneck in the creation of robust and clinically useful models, and privacy concerns and regulatory barriers are a major hindrance to assembling such datasets. However, this obstacle may be overcome with federated learning, in which local resources train parts of a centralized model without the training data ever leaving the local hospital infrastructure; this approach was used in a multinational collaboration to develop a CT deep learning model for COVID-19 diagnosis [
]. Nevertheless, many challenges remain in the curation of these multi-institutional datasets, such as standardized inclusion of patient demographics, cancer type, staging, and molecular features, as well as standardized imaging acquisition.
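The core federated averaging idea can be sketched in a few lines: each site shares only its locally trained model weights, and a coordinator aggregates them weighted by local sample counts, so imaging data never leaves the hospital. The toy weight vectors and site sizes below are illustrative assumptions.

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """FedAvg-style aggregation: weight each site's model by its sample count."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Three hospitals train local copies of a tiny, hypothetical linear model;
# only these weight vectors are communicated, never the underlying images.
site_a = np.array([0.9, 2.1])
site_b = np.array([1.1, 1.9])
site_c = np.array([1.0, 2.0])

global_w = federated_average([site_a, site_b, site_c], site_sizes=[100, 300, 600])
print(global_w)  # pulled toward the largest site's weights
```

In practice this aggregation step alternates with further rounds of local training, and the communicated updates may additionally be encrypted or noised for privacy.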
Aside from the need for rigorous clinical validation linked to patient outcomes, integration with current health IT systems and radiology workflows will also be a challenge. Radiology organizations such as the American College of Radiology (ACR) and the Radiological Society of North America (RSNA) have organized demonstrations to engage relevant stakeholders and develop standards to facilitate this [
]. With ever-increasing imaging volumes and workforce shortages, AI that adds time to a radiologist’s day will not be adopted. A healthcare system’s or institution’s ability to integrate new AI software into its IT systems may also be constrained by the time and resources available, as well as by the general interoperability challenges of production systems in medical informatics.
Interpretability of AI is another emerging area of focus, aimed at improving transparency and moving away from “black box” models [
]. AI algorithms will become more acceptable and widely adopted if clinical rationale can support their predictions. Interpretability is also important for predicting when AI systems might fail. Regulatory bodies have recognized the importance of accountability, as recently debated in the context of the EU General Data Protection Regulation [
Finally, the business case for AI will be important to drive adoption. Initial use cases in clinical practice have largely centered on increased efficiency and potential clinical benefit without well-defined measurable outcomes. For example, DL algorithms that reduce MRI acquisition time translate into direct cost savings in scanner and medical staff time while also improving the overall patient experience. More recently, direct reimbursement for AI in the United States has been granted to a limited number of AI applications that have either demonstrated significant clinical improvement with the potential to decrease overall healthcare costs or democratized access to care by improving access to a particular service [
]. As we move progressively toward value-based payment models, a similar pathway will likely follow. Oncologic imaging AI algorithms that decrease overall costs and increase access to screening tools such as mammography or low-dose chest CT may be of value, as will imaging biomarkers that reduce the overall cost of care by optimizing treatment algorithms and improving patient outcomes.
AI is augmenting the role of radiologists and radiology in modern oncology as data science becomes increasingly incorporated into clinical imaging, further enhancing imaging-derived insights. The power of AI lies in its ability to reveal previously unknown or even counterintuitive information patterns that might be overlooked or hard for radiologists to perceive. Beyond imaging, ML is broadly applied to multimodal medical data, including medical text reports, biospecimen-based assays, and monitoring signals. Integrated analysis across these platforms, offering a comprehensive and dynamic view of heterogeneous cancer, will further improve AI model performance. However, existing ML studies have mainly focused on tackling specific, well-defined clinical applications with structured input data and simplified clinical endpoints, known as narrow AI. Given the complex nature of oncology, generalizable AI algorithms built on multimodal real-world data will become the next-generation approach to making real clinical impact. It is worth noting, however, that the deployment of these technological advances is complex and may take decades. More importantly, radiologists can and will play a leadership role by directing ongoing AI research efforts toward the most pressing clinical challenges rather than toward comparing AI systems with human experts. Such algorithms will be welcomed and adopted to benefit cancer patient management.
CRediT authorship contribution statement
Melissa Chen: Conceptualization, Writing – original draft. Admir Terzic: Writing – original draft. Anton Becker: Writing – review and editing. Jason Johnson: Writing – review and editing. Carol Wu: Writing – review and editing. Christoph Wald: Writing – review and editing. Jia Wu: Conceptualization, Writing – original draft, Writing – review and editing.