
Artificial intelligence in oncologic imaging

Open Access | Published: September 29, 2022 | DOI: https://doi.org/10.1016/j.ejro.2022.100441

      Abstract

      Radiology is integral to cancer care. Compared with molecular assays, imaging has distinct advantages: as a noninvasive tool, it can assess the entirety of a tumor, unbiased by sampling error, and is routinely acquired at multiple time points in oncological practice. Imaging data can also be digitally post-processed for quantitative assessment. The ever-increasing application of artificial intelligence (AI) to clinical imaging is challenging radiology to become a discipline with competence in data science, which plays an important role in modern oncology. Beyond streamlining certain clinical tasks, the power of AI lies in its ability to reveal previously undetected or even imperceptible radiographic patterns that may be difficult to ascertain by the human sensory system. Here, we provide a narrative review of emerging AI applications across the oncological imaging spectrum and elaborate on emerging paradigms and opportunities. We envision that these technical advances will change radiology in the coming years, leading to the optimization of imaging acquisition and the discovery of clinically relevant biomarkers for cancer diagnosis, staging, and treatment monitoring. Together, they pave the road for future clinical translation in precision oncology.

      1. Introduction

      Fast-emerging imaging technologies and analytic tools allow radiology to play an increasingly important role in cancer screening, diagnosis, staging, response assessment, and prognosis. In contrast to invasive histopathological and molecular approaches, which can be biased by intratumor heterogeneity, imaging offers a unique path to a holistic and dynamic view of disease at the whole-organ or whole-patient level, moving the assessment of cancer patients toward personalized oncology.
      Over the past decade, we have witnessed the proliferation of radiomics in oncological imaging. Radiomics is “the conversion of images to higher dimensional data and subsequent mining of these data for improved decision support” [
      • Gillies R.J.
      • Kinahan P.E.
      • Hricak H.
      Radiomics: images are more than pictures, they are data.
      ]. The image features can be extracted from a specific area of interest, which can be the entire tumor or sub-volumes within the tumor (habitats) [
      • Gillies R.J.
      • Kinahan P.E.
      • Hricak H.
      Radiomics: images are more than pictures, they are data.
      ]. Like the digital revolution that has reshaped so much of our lives, radiomics has the potential to transform radiology. Although quantitative image analysis existed before radiomics, it was applied sporadically across different clinical applications, usually with a small number of manually processed imaging features. By contrast, radiomics reshapes image analysis by introducing a more robust and universal framework that systematically extracts hundreds or thousands of features describing tumor shape, intensity, and texture for prediction.
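      To make the idea of feature extraction concrete, the minimal sketch below computes a handful of first-order intensity and shape descriptors from a synthetic volume and tumor mask using NumPy; the array values and the toy mask are illustrative assumptions, and dedicated packages such as PyRadiomics implement hundreds of standardized features along these lines.
```python
import numpy as np

# Toy CT-like volume and a binary tumor mask -- purely illustrative values.
rng = np.random.default_rng(0)
image = rng.normal(40.0, 15.0, size=(64, 64, 64))   # synthetic intensities
mask = np.zeros(image.shape, dtype=bool)
mask[20:40, 22:44, 25:45] = True                    # crude "tumor" region

voxels = image[mask]                                # intensities inside the ROI
voxel_volume_mm3 = 1.0                              # assumed isotropic 1 mm spacing

counts, _ = np.histogram(voxels, bins=32)
p = counts / counts.sum() + 1e-12                   # avoid log(0)

features = {
    # first-order (intensity) descriptors
    "mean": float(voxels.mean()),
    "std": float(voxels.std()),
    "skewness": float(((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3),
    "entropy": float(-(p * np.log2(p)).sum()),
    # simple shape descriptor
    "volume_mm3": float(mask.sum() * voxel_volume_mm3),
}
print(features)
```
      In a full radiomics pipeline, hundreds of such shape, intensity, and texture features would be extracted per lesion and passed to a statistical or machine learning model for prediction.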
      More recently, artificial intelligence (AI), especially deep learning, has offered an unprecedented way to interrogate informative imaging patterns beyond radiomics. Distinct from radiomics, which relies heavily on empirical knowledge, deep learning provides an end-to-end approach that learns automatically by correlating raw data with ground truth, eventually acquiring sufficiently precise capabilities to help solve clinical challenges at scale.
      In this review, we focus on emerging applications of AI to empower oncologic imaging, ranging from imaging acquisition to cancer screening to treatment planning to response monitoring (Fig. 1). An appendix of terminology and concepts has been included for readers who are unfamiliar with AI. We also provide an outlook on the challenges, new paradigms, and future directions of AI application to this discipline.
      Fig. 1. Emerging AI applications in oncologic imaging are seen in four broad categories: acquisition optimization, cancer screening, tumor response assessment, and treatment planning.

      2. Imaging acquisition optimization

      Patients with cancer undergo frequent imaging, subjecting them to cumulative contrast doses, radiation, and potentially lengthy exams (if undergoing MRI). Deep neural networks can efficiently map images from one high-dimensional data space to another. This enables a range of novel applications in CT dose reduction, faster MRI acquisition, and reduced contrast or radiotracer dosing, all benefits that can accrue to oncology patients.

      2.1 CT dose reduction

      The annual per capita radiation dose in the US has doubled over the past 15 years, primarily due to increased CT imaging [
      • Nagayama Y.
      • et al.
      Deep learning–based reconstruction for lower-dose pediatric CT: technical principles, image characteristics, and clinical implementations.
      ]. Since image quality and signal-to-noise ratio degrade as radiation dose is lowered, the dose per examination cannot be reduced arbitrarily. Although several sophisticated techniques, such as iterative reconstruction, have been developed with the aim of preserving image quality while reducing radiation exposure, radiation dose remains a concern in young patients (e.g., pediatric patients) and in patients undergoing multiple serial examinations (e.g., cancer survivors on surveillance).
      Deep neural networks can map data from one high-dimensional data space to another, e.g., transferring a CT image from a low-dose/high-noise space to a high-dose/low-noise representation. Hence, novel techniques based on deep learning reconstruction (DLR) are currently being developed, which have the potential to significantly reduce radiation dose. Two DLR solutions received FDA clearance and became clinically available in 2019 [
      • Singh R.
      • et al.
      Artificial intelligence in image reconstruction: the change is here.
      ]. DLR-based reconstruction methods have resulted in lower radiation doses and/or improved image quality, all while offering reasonably short reconstruction times. Recently, a pilot DLR study reported volumetric tomographic imaging generated from ultrasparse data sampling (i.e., a single projection view) and a patient-specific prior [
      • Shen L.
      • Zhao W.
      • Xing L.
      Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning.
      ], which can further reduce the radiation dose if validated.
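      As a rough illustration of the low-dose-to-routine-dose mapping described above (and not a representation of any vendor's DLR product), the sketch below trains a shallow residual CNN on synthetic paired noisy/clean patches; real DLR systems are trained on large sets of paired or simulated projection and image data.
```python
import torch
import torch.nn as nn

# Shallow residual CNN that maps a low-dose (noisy) CT patch to a routine-dose estimate.
class DenoiseCNN(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)   # predict and subtract the noise (residual learning)

# Synthetic paired data: "routine-dose" patches plus Gaussian noise as the "low-dose" input.
clean = torch.rand(256, 1, 64, 64)
noisy = clean + 0.1 * torch.randn_like(clean)

model = DenoiseCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):                       # tiny training loop for illustration
    idx = torch.randint(0, clean.shape[0], (16,))
    pred = model(noisy[idx])
    loss = loss_fn(pred, clean[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", float(loss))
```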
      However, as with any new technique, more research is needed to establish clinical usefulness, safety, reproducibility, and reliability. Larger and more diverse training and validation datasets are needed to improve and validate the generalizability of DLR.

      2.2 Optimization of MRI acquisition

      Magnetic resonance imaging (MRI) has a crucial role in oncologic imaging. It is a problem-solving tool for lesion characterization, enables local assessment and tumor staging, and, with the advent of whole-body MRI, has the potential to become a staging, therapy response assessment, and surveillance tool [
      • Summers P.
      • et al.
      Whole-body magnetic resonance imaging: technique, guidelines and key applications.
      ]. One of the most challenging issues is long scan time, which can introduce motion artifacts and increases cost and patient discomfort. Recent developments in AI may help to address these issues. For example, deep learning-based techniques have been developed to accelerate MRI scan times by means of under-sampling; these can be grouped into several classes of acceleration techniques: image-based reconstruction, k-space-based reconstruction, adversarial networks, and super-resolution.
      K-space-based reconstruction techniques, such as robust artificial-neural-networks for k-space interpolation (RAKI), are applied directly to k-space data rather than image data [
      • Akçakaya M.
      • et al.
      Scan‐specific robust artificial‐neural‐networks for k‐space interpolation (RAKI) reconstruction: database‐free deep learning for fast imaging.
      ]. They consistently outperform traditional parallel imaging techniques in terms of reconstruction speed by a factor of 2–4. Methods that use adversarial networks are being developed to address shortcomings of common loss functions in convolutional neural networks (CNNs), such as pixel-wise losses that make images look over-smoothed. However, adversarial networks are notoriously difficult to train and prone to hallucinating realistic-looking imaging features; hence, they need to be carefully evaluated for every clinical indication. Super-resolution is a deep learning technique that predicts high-resolution images from low-resolution images; the idea is to accelerate acquisition by obtaining low-resolution images and generating high-resolution images with DL algorithms. A particularly successful image-based reconstruction method is the variational network (VN), which has shown successful image reconstruction at an acceleration factor of four [
      • Johnson P.M.
      • Recht M.P.
      • Knoll F.
      Improving the speed of MRI with artificial intelligence.
      ]. Even though promising, these techniques are still a subject of active research.
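      The sketch below illustrates the under-sampling problem these methods address: it retrospectively discards phase-encode lines from a synthetic k-space and performs a naive zero-filled reconstruction, the aliased starting point that image-based CNNs, k-space interpolation methods such as RAKI, and variational networks learn to improve. The data and sampling mask are illustrative assumptions.
```python
import numpy as np

# Fully sampled "image" stands in for a reference MR slice -- illustrative only.
rng = np.random.default_rng(1)
image = rng.random((128, 128))

# Simulate acquisition: go to k-space, then retrospectively under-sample phase-encode lines.
kspace = np.fft.fftshift(np.fft.fft2(image))
accel = 4                                      # acceleration factor R = 4
mask = np.zeros_like(kspace, dtype=bool)
mask[::accel, :] = True                        # keep every 4th line
mask[60:68, :] = True                          # fully sample the k-space center

undersampled = np.where(mask, kspace, 0)

# Naive zero-filled reconstruction: this aliased image is what learned methods
# de-alias using priors trained on large datasets (or scan-specific calibration data).
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

print("sampled fraction:", mask.mean())
print("zero-filled RMSE:", np.sqrt(((zero_filled - image) ** 2).mean()))
```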
      Operationally, challenges exist in dealing with a heterogeneous fleet of scanners that may be at different stages of their life cycles. Image quality may vary depending on brand, capabilities, and whether the scanner is 1.5 T or 3 T. Arshad et al. demonstrated that these problems can be addressed with transfer learning techniques, which showed improved generalizability for images acquired on scanners with different magnetic field strengths, MR images of different anatomies, and MR images under-sampled by different acceleration factors [
      • Johnson P.M.
      • Recht M.P.
      • Knoll F.
      Improving the speed of MRI with artificial intelligence.
      ,
      • Arshad M.
      • et al.
      Transfer learning in deep neural network based under-sampled MR image reconstruction.
      ]. This can be particularly valuable in oncologic patients, for whom comparison to prior imaging and consistent protocols are important in restaging and post-treatment scans. When deploying an increasing number of clinical AI algorithms, practices need to ensure that the input data (images) meet the minimum technical specifications set by the AI manufacturer. Only then is there a reasonable chance that the AI will perform in real-world practice as it did in standalone performance testing prior to FDA clearance. Meeting these specifications can be challenging for practices that manage multiple generations of scanners and diverse imaging protocols, requires ongoing attention to achieve reliable results, and is not specific to MR but extends to other modalities such as CT and PET/CT.
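      A minimal sketch of the transfer-learning idea is shown below, assuming a small CNN pretrained on data from one scanner is fine-tuned on a handful of examples from another scanner by freezing its early layers; the model, checkpoint name, and data are hypothetical stand-ins.
```python
import torch
import torch.nn as nn

# A small CNN standing in for a reconstruction/denoising model pretrained on scanner A.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
# model.load_state_dict(torch.load("pretrained_scanner_A.pt"))  # hypothetical checkpoint

# Transfer learning: freeze the early feature-extracting layers and fine-tune
# only the last convolution on a small dataset from scanner B.
for param in list(model.parameters())[:-2]:
    param.requires_grad = False

opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)
loss_fn = nn.MSELoss()

inputs_b = torch.rand(32, 1, 64, 64)           # few scanner-B inputs (e.g., 1.5 T)
targets_b = torch.rand(32, 1, 64, 64)          # corresponding reference images

for epoch in range(50):
    pred = model(inputs_b)
    loss = loss_fn(pred, targets_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
```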

      2.3 Reduction of contrast

      Patients with cancer undergoing nephrotoxic chemotherapies often present to their imaging appointment with impaired renal function. Although there is no reliable predictor for development of contrast-induced acute kidney injury, volume of contrast has been included in some risk stratification systems [
      • Faucon A.-L.
      • Bobrie G.
      • Clément O.
      Nephrotoxicity of iodinated contrast media: from pathophysiology to prevention strategies.
      ]. In addition, repeated gadolinium administration has been linked to retention in the brain parenchyma [
      • Dillman J.R.
      • Davenport M.S.
      Gadolinium retention — 5 years later….
      ]. Several techniques using deep learning are being developed to reduce the dose/volume of contrast media, which may enable lower costs and lower adverse event rates.
      Haubold et al., for example, proposed a deep learning algorithm based on a conditional generative adversarial network that reduced CT contrast media dose by up to 50% while preserving image quality and diagnostic accuracy [
      • Haubold J.
      • et al.
      Contrast agent dose reduction in computed tomography with deep learning using a conditional generative adversarial network.
      ]. Gong et al. reported up to 90% contrast media reduction with preserved image quality using deep learning models [
      • Gong E.
      • et al.
      Deep learning enables reduced gadolinium dose for contrast‐enhanced brain MRI.
      ]. In a blinded, multicenter study, this group also demonstrated that deep learning can restore clinically acceptable quality in noisy PET images acquired with four times fewer counts [
      • Chaudhari A.S.
      • et al.
      Low-count whole-body PET with deep learning in a multicenter and externally validated study.
      ].
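      The sketch below outlines, under simplifying assumptions, how a conditional adversarial setup of the kind cited above can be trained on paired low-contrast-dose and full-dose images: a generator synthesizes the full-dose image, a discriminator judges (input, output) pairs, and an L1 term keeps the output close to the reference. The tensors here are random stand-ins for real paired scans.
```python
import torch
import torch.nn as nn

# Minimal paired image-to-image setup: the generator maps a low-contrast-dose image
# to a synthetic full-dose image; the discriminator judges (input, output) pairs.
G = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
D = nn.Sequential(                               # patch-style discriminator on image pairs
    nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

low_dose = torch.rand(16, 1, 64, 64)             # synthetic stand-ins for paired scans
full_dose = torch.rand(16, 1, 64, 64)

for step in range(100):
    fake = G(low_dose)

    # Discriminator: real pairs -> 1, generated pairs -> 0.
    d_real = D(torch.cat([low_dose, full_dose], dim=1))
    d_fake = D(torch.cat([low_dose, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator while staying close to the full-dose target.
    d_fake = D(torch.cat([low_dose, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, full_dose)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```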

      3. Screening

      Imaging plays a key role in the early detection of colon, breast, and lung cancer. Early detection of cancer in asymptomatic individuals is associated with reduced mortality. Historically, cancer screening strategies have been controversial due to overall costs and the potential for overtreatment [
      • Passiglia F.
      • et al.
      Benefits and harms of lung cancer screening by chest computed tomography: a systematic review and meta-analysis.
      ,
      • Welch H.G.
      • et al.
      Breast-cancer tumor size, overdiagnosis, and mammography screening effectiveness.
      ]. Additionally, health disparities related to under-screened populations contribute to differences in patient outcomes. AI has the potential to address these issues in the future by decreasing costs, improving access to screening, and improving timeliness of results.
      Stand-alone deep learning algorithms have been shown to be effective in breast cancer screening, with performance noninferior to the average of 101 radiologists [
      • Rodriguez-Ruiz A.
      • et al.
      Stand-alone artificial intelligence for breast cancer detection in mammography: comparison with 101 radiologists.
      ] and even to outperform a small panel of radiologists [
      • McKinney S.M.
      • et al.
      International evaluation of an AI system for breast cancer screening.
      ]. Although many studies have examined cancer detection in mammography, there is limited evidence for AI in real-world screening settings. In Europe, double reading of mammograms is the standard of care. A population-based screening study showed promise for cancer detection using commercially available AI systems compared with double-reading consensus [
      • Larsen M.
      • et al.
      Artificial intelligence evaluation of 122 969 mammography examinations from a population-based screening program.
      ]. AI has also been used to filter out normal digital breast tomosynthesis (DBT) studies to reduce screening workload while improving diagnostic accuracy in a simulated workflow [
      • Shoshan Y.
      • et al.
      Artificial intelligence for reducing workload in breast cancer screening with digital breast tomosynthesis.
      ]. This could help reduce the substantial workload and preserve or enhance access to mammography at a time of workforce challenges and recently documented burnout among breast imagers in the US [
      • Parikh J.R.
      • Sun J.
      • Mainiero M.B.
      Prevalence of burnout in breast imaging radiologists.
      ].
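      The simulated triage rule below illustrates the workload-reduction idea in a simplified form: exams with an AI suspicion score below a threshold are filtered from the worklist, and the trade-off between workload reduction and retained sensitivity is measured. Prevalence, score distributions, and the threshold are illustrative assumptions, not values from the cited studies.
```python
import numpy as np

# Simulated screening population: ~0.6% cancer prevalence and an AI suspicion score
# that is, on average, higher for cancers -- all numbers are illustrative assumptions.
rng = np.random.default_rng(42)
n = 100_000
cancer = rng.random(n) < 0.006
score = np.where(cancer, rng.beta(5, 2, n), rng.beta(2, 8, n))

# Triage rule: exams scoring below the threshold are auto-filtered as "normal";
# everything else goes to the radiologist worklist.
threshold = np.quantile(score, 0.40)             # remove the 40% lowest-scoring exams
filtered = score < threshold

workload_reduction = filtered.mean()
missed_cancers = (cancer & filtered).sum()
sensitivity_kept = 1 - missed_cancers / cancer.sum()

print(f"workload reduction: {workload_reduction:.0%}")
print(f"cancers in filtered-out exams: {missed_cancers}")
print(f"retained sensitivity: {sensitivity_kept:.3f}")
```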
      Precise screening AI algorithms have the potential to optimize screening strategies at the individual patient level. Yala et al. developed a reinforcement learning algorithm to recommend follow-up imaging intervals from an individualized patient risk assessment. The model was more efficient than annual screening, achieving earlier detection per unit of screening cost [
      • Yala A.
      • et al.
      Optimizing risk-based breast cancer screening policies with reinforcement learning.
      ]. By improving the precision of screening strategies, AI has the potential to decrease overall healthcare costs.
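      The toy simulation below is not the reinforcement learning method of Yala et al.; it is a simple policy-evaluation sketch, under assumed risk and onset distributions, of the trade-off such an agent optimizes: screens used per patient versus delay in detection when follow-up intervals are adapted to individual risk.
```python
import numpy as np

# Toy comparison of screening policies on a simulated cohort: an individualized risk
# score determines the recommended follow-up interval. All parameters are assumptions
# chosen for illustration, not values from the cited study.
rng = np.random.default_rng(7)
n = 50_000
risk = rng.beta(1.5, 30, n)                          # per-patient annual cancer risk
onset = rng.exponential(1 / np.maximum(risk, 1e-4))  # years until cancer develops

def simulate(policy, horizon=6.0):
    """Return (screens per patient, mean detection delay in years) for a policy
    mapping risk -> screening interval in years."""
    intervals = policy(risk)
    screens = np.ceil(horizon / intervals)
    delay = intervals - (onset % intervals)          # detected at first screen after onset
    return screens.mean(), delay[onset < horizon].mean()

annual = lambda r: np.full_like(r, 1.0)                        # screen everyone yearly
risk_based = lambda r: np.where(r > 0.08, 0.5,                 # high risk: 6 months
                        np.where(r > 0.03, 1.0, 2.0))          # low risk: 2 years

for name, pol in [("annual", annual), ("risk-adapted", risk_based)]:
    cost, delay = simulate(pol)
    print(f"{name:13s} screens/patient: {cost:.2f}  mean detection delay: {delay:.2f} y")
```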
      Lung cancer screening is a two-step process comprising nodule detection and malignancy risk assessment. Numerous AI algorithms developed for nodule detection have been shown to be slightly inferior or equivalent to radiologists, at the cost of increased false-positive rates [
      • Schreuder A.
      • et al.
      Artificial intelligence for detection and characterization of pulmonary nodules in lung cancer CT screening: ready for practice?.
      ]. Google AI researchers developed an end-to-end DL algorithm that detected nodules and predicted malignancy in 8000 cases, outperforming non-thoracic radiologists [
      • Ardila D.
      • et al.
      End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography.
      ]. However, limitations of the study included the use of overlapping data sets to train and test the model. In addition, the model was not trained on subsequent cancer development but was instead compared against Lung-RADS assessment. Trajanovski et al. used a two-stage framework to detect pulmonary nodules and assess their malignancy risk, with performance comparable to a six-radiologist panel [
      • Trajanovski S.
      • et al.
      Towards radiologist-level cancer risk assessment in CT lung screening using deep learning.
      ]. A recent study by Ziegelmayer et al. suggested that the addition of AI support can potentially improve the cost-effectiveness of lung cancer screening [
      • Ziegelmayer S.
      • et al.
      Cost-effectiveness of artificial intelligence support in computed tomography-based lung cancer screening.
      ], which is particularly important in the setting of the updated lung cancer screening recommendations released by the US Preventive Services Task Force (USPSTF) in 2021, which expanded eligibility by lowering the screening age from 55 to 50 years and the smoking history threshold from 30 to 20 pack-years. In another study using the National Lung Screening Trial (NLST) and the Pan-Canadian Early Detection of Lung Cancer study, Stephen Lam’s team developed a deep learning model that can accurately predict the risk of a suspicious lung nodule progressing to lung cancer within a 3-year period based on the radiologist’s CT report and clinical information [
      • Huang P.
      • et al.
      Prediction of lung cancer risk at follow-up screening with low-dose CT: a training and validation study of a deep learning method.
      ].
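      A skeleton of such a two-stage pipeline is sketched below with untrained stand-in networks: a detector proposes nodule candidates from a CT volume, and a classifier assigns each candidate a malignancy probability. The architectures and patch sizes are illustrative assumptions rather than any published system.
```python
import torch
import torch.nn as nn

# Stage 1 proposes nodule candidates; stage 2 assigns each a malignancy probability.
# Both networks are untrained stand-ins; a real system would be trained on annotated LDCT data.
class NoduleDetector(nn.Module):
    """Stage 1: map a CT volume to candidate nodule centers with detection scores."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                      nn.Conv3d(8, 1, 1))

    def forward(self, volume):
        heatmap = torch.sigmoid(self.backbone(volume))[0, 0]
        scores, flat_idx = heatmap.flatten().topk(5)              # keep top 5 candidates
        _, d1, d2 = heatmap.shape
        z, y, x = flat_idx // (d1 * d2), (flat_idx // d2) % d1, flat_idx % d2
        return torch.stack([z, y, x], dim=1), scores

class MalignancyClassifier(nn.Module):
    """Stage 2: classify a 16x16x16 patch around each candidate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16 ** 3, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, patch):
        return torch.sigmoid(self.net(patch))

volume = torch.rand(1, 1, 64, 64, 64)                             # synthetic LDCT volume
detector, classifier = NoduleDetector(), MalignancyClassifier()

centers, det_scores = detector(volume)
for (z, y, x), det in zip(centers.tolist(), det_scores.tolist()):
    z0, y0, x0 = (min(max(0, c - 8), 48) for c in (z, y, x))      # clamp patch inside volume
    patch = volume[0, 0, z0:z0 + 16, y0:y0 + 16, x0:x0 + 16]
    malignancy = classifier(patch.reshape(1, -1)).item()
    print(f"candidate ({z},{y},{x}): detection={det:.2f}, malignancy risk={malignancy:.2f}")
```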

      4. Treatment planning

      Diagnostic imaging has traditionally played a central role in cancer staging by defining the extent of the primary tumor and identifying local and distant metastases to determine the best treatment plan. Anatomic imaging can assist surgeons in surgical planning and allow radiation oncologists to define radiation fields. Some of the associated image processing tasks can be time consuming, tedious, and prone to error. AI has the potential to help with some of these tasks and even support referring providers’ use of imaging in pretreatment planning. In addition, some imaging assessments remain subjective, with reader variability in cancer detection; AI can help to address this variability and elevate the level of care in under-resourced areas where subspecialized radiologists may not be available.
      In renal tumors, the 3D post-processing used to accurately evaluate anatomy and lesion volumes can be time consuming. CNNs can streamline this process and improve the accuracy of kidney and renal tumor volume measurements, important parameters that surgeons use to evaluate patients for nephron-sparing interventions [
      • Houshyar R.
      • et al.
      Outcomes of artificial intelligence volumetric assessment of kidneys and renal tumors for preoperative assessment of nephron-sparing interventions.
      ].
      Detection of brain metastases for radiation planning can be a tedious task, particularly now that stereotactic radiosurgery of individual metastases has become the preferred approach over whole brain radiation [
      • Niranjan A.
      • et al.
      Guidelines for multiple brain metastases radiosurgery.
      ]. Furthermore, patients are more frequently undergoing multiple courses of stereotactic radiosurgery, which adds to the complexity of reading follow-up studies and the need to differentiate new lesions from treated lesions [
      • Lee W.J.
      • et al.
      Clinical outcomes of patients with multiple courses of radiosurgery for brain metastases from non-small cell lung cancer.
      ]. Similar to the algorithms used to detect pulmonary nodules, CNNs have been used to detect brain metastases [
      • Zhou Z.
      • et al.
      Computer-aided detection of brain metastases in T1-weighted MRI for stereotactic radiosurgery using deep learning single-shot detectors.
      ,
      • Zhou Z.
      • et al.
      MetNet: computer-aided segmentation of brain metastases in post-contrast T1-weighted magnetic resonance imaging.
      ]. Although not yet adopted into clinical radiology workflows, CNNs have the potential to facilitate the detection and tracking of brain metastases in the future. In addition, CNN architectures can facilitate auto-segmentation for radiation oncologists. As in many other body regions, various deep learning algorithms have already achieved human-level performance in segmenting organs or cancerous lesions, which may ultimately aid focal therapies. This can dramatically reduce the time a radiation oncologist spends manually contouring a patient study, increase contour consistency, and improve accuracy [
      • Asbach J.C.
      • et al.
      Deep learning tools for the cancer clinic: an open-source framework with head and neck contour validation.
      ].
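      When auto-contours are validated against expert contours, overlap metrics such as the Dice similarity coefficient are commonly reported; the short sketch below computes Dice between a synthetic "manual" mask and a slightly shifted "AI" mask.
```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Synthetic example: a "manual" contour and a slightly shifted "AI" contour.
manual = np.zeros((128, 128, 64), dtype=bool)
manual[40:80, 40:80, 20:40] = True
auto = np.roll(manual, shift=3, axis=0)          # mimic a small systematic offset

print(f"Dice(manual, AI) = {dice(manual, auto):.3f}")
```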
      In the past decade, MRI has gained a crucial role in the diagnosis and management of men with suspected prostate cancer owing to improved cancer detection. However, the interpretation of MRI remains highly dependent on the expertise of the radiologist, and as a result, there is significant variation in the accuracy of cancer detection across institutions. The use of artificial intelligence in prostate MRI may help to improve the accuracy of cancer detection, especially in low-volume or community practice settings, and reduce inter-observer variability. So far, AI has shown potential in particular for challenging transition zone lesions [
      • Mehralivand S.
      • et al.
      Multicenter multireader evaluation of an artificial intelligence-based attention mapping system for the detection of prostate cancer with multiparametric MRI.
      ]. Moreover, imaging-derived biomarkers are increasingly being recognized as complementary markers to histopathology for risk stratification, for example, extraprostatic extension on MRI [
      • Wibmer A.G.
      • et al.
      Local extent of prostate cancer at MRI versus prostatectomy histopathology: associations with long-term oncologic outcomes.
      ] or PSMA expression on PET [
      • Papp L.
      • et al.
      Supervised machine learning enables non-invasive lesion characterization in primary prostate cancer with [68Ga] Ga-PSMA-11 PET/MRI.
      ].
      Beyond the traditional tasks of radiologists, radiomic features coupled with AI can extract and analyze quantitative data to characterize tumors and guide patient treatment, including predicting histologic and molecular subtypes. For example, an ML radiomics model has shown promise in differentiating small cell lung cancer from other lung lesions on CT for pulmonary nodules at least 1 cm in size [
      • Shah R.P.
      • et al.
      Machine learning radiomics model for early identification of small-cell lung cancer on computed tomography scans.
      ]. Recently, Ma et al. used ML to differentiate between breast cancer molecular subtypes based on mammography and ultrasound. Both clinical data and imaging signs based on the BI-RADS lexicon served as inputs for the ML models [
      • Ma M.
      • et al.
      Predicting the molecular subtype of breast cancer and identifying interpretable imaging features using machine learning algorithms.
      ]. In another breast cancer study, perfusion MRI radiomics were used to infer tumor-infiltrating lymphocytes [
      • Wu J.
      • et al.
      Magnetic resonance imaging and molecular features associated with tumor-infiltrating lymphocytes in breast cancer.
      ]. Though genomic biomarkers are commonly used in oncology to determine the best treatment pathways, they are limited by the need for invasive biopsy, and some biopsies may not be technically feasible. Moreover, assembling large data sets with tissue diagnosis as the gold standard can be a challenge when developing imaging biomarkers linked directly to known molecular biomarkers. Transfer learning methods can be used to overcome limited data sets. Liu et al. used deep CNNs and transfer learning to predict Ki-67 status, a biomarker used to guide neoadjuvant chemotherapy in patients with breast cancer, from multiparametric MRI [
      • Liu W.
      • et al.
      Preoperative prediction of Ki-67 status in breast cancer with multiparametric MRI using transfer learning.
      ]. Additionally, core biopsy specimens may not be representative of intratumoral heterogeneity. Thus, integrating imaging and molecular biomarkers can potentially offer comprehensive and deep profiling of individual cancer patients [
      • Wu J.
      • Mayer A.T.
      • Li R.
      Integrated imaging and molecular analysis to decipher tumor microenvironment in the era of immunotherapy.
      ].
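      Many of the studies above follow a common tabular pattern: radiomic and clinical features per lesion, a binary molecular label, and a cross-validated classifier. The sketch below reproduces that pattern on synthetic data with scikit-learn; the features, labels, and model choice are illustrative assumptions.
```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a radiomics table: one row per lesion, columns mixing
# imaging-derived features (shape, intensity, texture, BI-RADS-style descriptors)
# with clinical variables; the label is a molecular subtype or marker status.
rng = np.random.default_rng(0)
n_lesions, n_features = 300, 40
X = rng.normal(size=(n_lesions, n_features))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n_lesions) > 0).astype(int)

model = make_pipeline(StandardScaler(), GradientBoostingClassifier(random_state=0))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```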
      In addition to predicting tumor molecular features, AI tools have shown promise in detecting imaging features that may correspond to prognosis and predict response to various treatments [
      • Bera K.
      • et al.
      Predicting cancer outcomes with radiomics and artificial intelligence in radiology.
      ]. In the future, these features may serve as imaging biomarkers that aid complex clinical decision making. A prognostic biomarker in oncology is one that helps identify the likelihood of disease recurrence or progression. In breast cancer, multiparametric breast MRI paired with ML has been used to generate predictive models for prognostic factors [
      • Lee J.Y.
      • et al.
      Radiomic machine learning for predicting prognostic biomarkers and molecular subtypes of breast cancer using tumor heterogeneity and angiogenesis properties on MRI.
      ]. Studies have shown that MRI features have additive prognostic value in predicting clinical outcomes in brain tumors such as gliomas [
      • Bae S.
      • et al.
      Radiomic MRI phenotyping of glioblastoma: improving survival prediction.
      ]. ML models integrating MRI and histopathologic images can predict overall survival with higher accuracy than models using MR or histopathologic images alone [
      • Rathore S.
      • et al.
      Combining MRI and histologic imaging features for predicting overall survival in patients with glioma.
      ]. In head and neck cancers, prediction of disease progression can help direct intensified therapeutic strategies to high-risk individuals. Conversely, patients at low risk of progression based on a radiomics model may benefit from de-intensified therapy, thereby reducing morbidity such as radiation-induced injury to surrounding tissues [
      • Vallieres M.
      • et al.
      Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer.
      ,
      • Wu J.
      • et al.
      Tumor subregion evolution-based imaging features to assess early response and predict prognosis in oropharyngeal cancer.
      ].
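      Prognostic imaging biomarkers of this kind are often evaluated with survival models; the sketch below, assuming the lifelines package is installed and using a simulated cohort, fits a Cox proportional hazards model relating two imaging-derived features to time to recurrence and reports their hazard ratios.
```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic cohort: two imaging-derived features plus follow-up time and event status.
# Values are simulated; in practice the features would come from a radiomics pipeline.
rng = np.random.default_rng(3)
n = 200
heterogeneity = rng.normal(size=n)                   # e.g., a texture/habitat feature
volume = rng.normal(size=n)                          # e.g., baseline tumor volume
risk = np.exp(0.8 * heterogeneity + 0.3 * volume)
time_to_event = rng.exponential(24 / risk)           # months
censored_at = rng.uniform(6, 36, n)

df = pd.DataFrame({
    "heterogeneity": heterogeneity,
    "volume": volume,
    "time": np.minimum(time_to_event, censored_at),
    "event": (time_to_event <= censored_at).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])       # hazard ratios for each feature
```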
      Biomarkers have also been identified to predict response to specific therapies. In non-small cell lung cancer, ML can predict epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma viral oncogene homologue (KRAS) mutations. Cancers with EGFR mutations show higher sensitivity to gefitinib and erlotinib, whereas those with KRAS mutations are prone to drug resistance [
      • Le N.Q.K.
      • et al.
      Machine learning-based radiomics signatures for EGFR and KRAS mutations prediction in non-small-cell lung cancer.
      ]. Also, ML algorithms assessing brain metastases can predict local failure of stereotactic radiation [
      • Karami E.
      • et al.
      Quantitative MRI biomarkers of stereotactic radiotherapy outcome in brain metastasis.
      ]. Imaging biomarkers have the potential to predict response to neoadjuvant chemotherapy in breast cancer patients [
      • Wu J.
      • et al.
      Intratumoral spatial heterogeneity at perfusion MR imaging predicts recurrence-free survival in locally advanced breast cancer treated with neoadjuvant chemotherapy.
      ]. A recent radiomic study investigated the predictive value of a lesion-wise PET radiomics model in lymphoma patients treated with ibrutinib; the model outperformed conventional PET metrics (e.g., SUVmax, MTV, TLG) for response prediction [
      • Jimenez J.E.
      • et al.
      Lesion-based radiomics signature in pretherapy 18F-FDG PET predicts treatment response to ibrutinib in lymphoma.
      ]. Predictive biomarkers in head and neck cancers remain limited. ML-derived biomarkers developed using MRI features have shown good performance in predicting disease progression in nasopharyngeal carcinoma and HPV-associated squamous cell carcinoma [
      • Wu J.
      • et al.
      Tumor subregion evolution-based imaging features to assess early response and predict prognosis in oropharyngeal cancer.
      ].
      Immunotherapy is a breakthrough in cancer treatment; however, only a subset of patients derives clinical benefit. Immunotherapy is expensive and carries a potential for toxicity, so stratifying patients prior to therapy may be beneficial. Expression of programmed death-ligand 1 (PD-L1) is an important predictive biomarker for determining which patients should receive immunotherapy; however, it has not been sufficient or specific enough [
      • Makuku R.
      • et al.
      Current and future perspectives of PD-1/PDL-1 blockade in cancer immunotherapy.
      ], and it requires invasive biopsies. AI may be able to aid in the development of imaging biomarkers that are more specific and less invasive. Radiomics based on extracted CT features has been shown to predict response to immunotherapy in solid tumors, including non-small cell lung cancer and melanoma [
      • Sun R.
      • et al.
      A radiomics approach to assess tumour-infiltrating CD8 cells and response to anti-PD-1 or anti-PD-L1 immunotherapy: an imaging biomarker, retrospective multicohort study.
      ]. The development of dual-energy CT (DECT), which has been shown to outperform single-energy CT (SECT) in visualizing biological processes, expands the potential for extracting radiomic features in oncologic imaging. ML using DECT-specific parameters has been able to predict response in patients with metastatic melanoma prior to the initiation of immunotherapy [
      • Brendlin A.S.
      • et al.
      A machine learning model trained on dual-energy CT radiomics significantly improves immunotherapy response prediction for patients with stage IV melanoma.
      ].
      More broadly, Wu et al. defined radiological phenotypes based on tumor morphology and spatial heterogeneity, across different imaging modalities such as CT and MRI and cancer types including lung, breast and brain malignancies [
      • Wu J.
      • et al.
      Radiological tumour classification across imaging modality and histology.
      ]. This work provides proof of principle for a pan-cancer radiomics classification scheme, defining imaging biomarkers that span tumor types and imaging modalities to determine prognosis and predict response to immunotherapies [
      • Wu J.
      • et al.
      Radiological tumour classification across imaging modality and histology.
      ].

      5. Tumor response

      Tumor response assessment based on standardized measurements using criteria such as the Response Evaluation Criteria in Solid Tumors (RECIST) and Response Assessment in Neuro-Oncology (RANO) has become a defined endpoint for many clinical trials. These criteria are also often the basis of clinical decisions. However, tumor measurements can be laborious and prone to intra- and inter-reader variability [
      • Provenzale J.M.
      • Ison C.
      • Delong D.
      Bidimensional measurements in brain tumors: assessment of interobserver variability.
      ,
      • Provenzale J.M.
      • Mancini M.C.
      Assessment of intra-observer variability in measurement of high-grade brain tumors.
      ]. In addition, linear measurements of tumor size do not always capture the true extent of disease or changes in tumor burden, given the varied morphologies of tumors. Volumetric tumor measurements would be ideal, but manual segmentation can be time consuming and prone to error. AI can facilitate the assessment of volumetric changes in tumors.
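      The contrast between linear and volumetric assessment can be illustrated directly from segmentation masks; the sketch below approximates a longest axial diameter (a crude RECIST-style measurement, ignoring target-lesion selection rules) and compares its percentage change with the volumetric change for two synthetic masks.
```python
import numpy as np

def longest_axial_diameter_mm(mask: np.ndarray, spacing_mm=(1.0, 1.0)) -> float:
    """Approximate the longest in-plane diameter of a lesion mask (z, y, x),
    taking the maximum over axial slices -- a crude stand-in for a RECIST measurement."""
    best = 0.0
    for axial in mask:                                   # iterate over z-slices
        ys, xs = np.nonzero(axial)
        if ys.size == 0:
            continue
        pts = np.column_stack([ys * spacing_mm[0], xs * spacing_mm[1]])
        d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).max()
        best = max(best, float(d))
    return best

# Two synthetic lesion masks at baseline and follow-up (voxels assumed 1x1x1 mm).
baseline = np.zeros((40, 60, 60), dtype=bool); baseline[10:30, 20:40, 20:44] = True
followup = np.zeros((40, 60, 60), dtype=bool); followup[12:28, 22:38, 22:40] = True

d0, d1 = longest_axial_diameter_mm(baseline), longest_axial_diameter_mm(followup)
v0, v1 = baseline.sum(), followup.sum()                  # volumes in mm^3

print(f"diameter change: {100 * (d1 - d0) / d0:+.1f}%   (RECIST-style, 1-D)")
print(f"volume change:   {100 * (v1 - v0) / v0:+.1f}%   (volumetric, from segmentation)")
```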
      In pleural tumors, assessing change in tumor size can be particularly challenging. Deep learning CNNs have shown promise in segmenting and calculating tumor volume in malignant pleural mesothelioma, with the ability to accurately segment tumor volumes as low as ∼100 cm3 [
      • Kidd A.C.
      • et al.
      Fully automated volumetric measurement of malignant pleural mesothelioma by deep learning AI: Validation and comparison with modified RECIST response criteria.
      ]. Similarly, in the brain, assessing tumor size with the bi-dimensional measurements used in RANO can be challenging. Segmentation of gliomas is complex, with multiple components including the enhancing tumor, necrotic core, and surrounding edema. Much work has been done on 3D segmentation of gliomas for automated quantification of disease burden [
      • Lotan E.
      • et al.
      State of the art: machine learning applications in glioma imaging.
      ]. The Brain Tumor Segmentation (BraTS) challenge, which has pooled training data sets from 19 institutions, has enabled the development of numerous DL algorithms. While many segmentation algorithms have been developed, their integration into the clinical workflow remains a challenge. More recently, Lotan et al. developed a DL-based clinical workflow for preoperative and postoperative glioma segmentation [
      • Lotan E.
      • et al.
      Development and practical implementation of a deep learning–based pipeline for automated pre-and postoperative glioma segmentation.
      ]. However, there has not been widespread clinical adoption of these AI algorithms.
      While volumetric assessment of tumors has been the basis for assessing tumor response, it has been recognized that functional and molecular imaging features may be more precise markers of tumor response [
      • Nishino M.
      ]. For example, changes in CT radiomics features in tumor and lymph nodes along with clinical variables have been associated with early radiation response in head and neck cancers [
      • Wu J.
      • et al.
      Integrating tumor and nodal imaging characteristics at baseline and mid-treatment computed tomography scans to predict distant metastasis in oropharyngeal cancer treated with concurrent chemoradiotherapy.
      ] and lung cancers [
      • Zhang N.
      • et al.
      Early response evaluation using primary tumor and nodal imaging features to predict progression-free survival of locally advanced non-small cell lung cancer.
      ].
      Inspired by multiregional gene sequencing studies, habitat imaging [
      • Wu J.
      • Mayer A.T.
      • Li R.
      Integrated imaging and molecular analysis to decipher tumor microenvironment in the era of immunotherapy.
      ,
      • Gillies R.J.
      • Kinahan P.E.
      • Hricak H.
      Radiomics: images are more than pictures, they are data.
      ] has also been proposed, in which whole tumors are explicitly segmented into intrinsic subregions with similar radiographic patterns; this approach has been shown to be an independent prognostic factor beyond conventional risk predictors for breast cancer patients after neoadjuvant chemotherapy [
      • Wu J.
      • et al.
      Intratumoral spatial heterogeneity at perfusion MR imaging predicts recurrence-free survival in locally advanced breast cancer treated with neoadjuvant chemotherapy.
      ] (Fig. 2). Habitat imaging analysis has also revealed novel spatiotemporal response patterns induced by radiotherapy in the primary tumor and nodal regions on baseline and mid-treatment PET/CT scans in head and neck cancers [
      • Wu J.
      • et al.
      Tumor subregion evolution-based imaging features to assess early response and predict prognosis in oropharyngeal cancer.
      ]. Further, this quantitative image analysis can help refine the delivery of radiation therapy to different parts of the tumor. Habitat imaging, which combines multiparametric MRI or PET/CT to establish quantitative imaging signatures of tumors, can help characterize the spatial distribution of tumor subregions. This could allow aggressive disease to be targeted with a radiation boost to improve local control and survival.
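      One simple way to derive such habitats, sketched below under illustrative assumptions, is to cluster tumor voxels by their multiparametric intensities (for example with k-means) and treat the resulting subregion labels and volumes as candidate features; published habitat-imaging pipelines differ in the features, clustering method, and validation used.
```python
import numpy as np
from sklearn.cluster import KMeans

# Habitat-imaging sketch: cluster voxels inside a tumor mask into subregions ("habitats")
# using multiparametric intensities. The two synthetic "sequences" stand in for, e.g.,
# DCE-MRI perfusion parameters or PET/CT channels.
rng = np.random.default_rng(5)
shape = (48, 48, 24)
seq1 = rng.normal(size=shape)                      # e.g., early contrast enhancement
seq2 = rng.normal(size=shape)                      # e.g., washout or SUV
mask = np.zeros(shape, dtype=bool)
mask[12:36, 12:36, 6:18] = True                    # synthetic tumor region

features = np.column_stack([seq1[mask], seq2[mask]])   # one row per tumor voxel
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

habitats = np.zeros(shape, dtype=np.int8)
habitats[mask] = kmeans.labels_ + 1                # habitat labels 1..3 inside the tumor

sizes = [int((habitats == k).sum()) for k in (1, 2, 3)]
print("voxels per habitat:", sizes)                # relative habitat volumes can serve as
                                                   # candidate prognostic features
```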
      Fig. 2. AI has the potential to address unmet gaps in personalized cancer therapy. ML radiomics can extract and analyze quantitative data to provide tumor characterizations that help guide therapy. Habitat imaging is a proposed ML analysis that explicitly segments whole tumors into intrinsic subregions of similar radiographic patterns to help refine delivery of radiation therapy to different parts of the tumor.

      6. Current limitations and future directions

      Despite a growing body of evidence supporting the future utility of AI in oncologic imaging and a growing number of oncology-related FDA-approved AI algorithms (Fig. 3), clinical application and adoption have been limited [
      • Allen B.
      • et al.
      ACR data science institute artificial intelligence survey.
      ,

Dreyer, K. ACR Data Science Institute AI Central. 2022 [cited 24 September 2022]. Available from: https://aicentral.acrdsi.org/.

      ]. Access to large and diverse amounts of data remains a critical bottleneck in the creation of robust and clinically useful models. Privacy concerns and regulatory barriers are a major hindrance to the creation of such large datasets. However, this obstacle may be overcome with federated learning, in which local resources train parts of a centralized model without the training data ever leaving the local hospital infrastructure; this approach was used in a multinational collaboration to develop a CT deep learning model for COVID-19 diagnosis [
      • Bai X.
      • et al.
      Advancing COVID-19 diagnosis with privacy-preserving collaboration in artificial intelligence.
      ]. Nevertheless, many challenges remain in the curation of these multi-institutional datasets, such as the standardized inclusion of patient demographics, cancer type, staging, and molecular features, as well as standardized imaging acquisitions.
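      A minimal federated-averaging sketch is shown below: each site updates a copy of the global model on its own data, and only the model weights are aggregated centrally, so images never leave the local infrastructure. The model, data, and number of rounds are toy assumptions.
```python
import copy
import torch
import torch.nn as nn

# Federated averaging: sites train locally; only weights -- never images -- are shared.
def local_update(global_model, x, y, epochs=1):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        loss = loss_fn(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict()

def federated_average(state_dicts):
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

global_model = nn.Linear(16, 1)                    # stand-in for an imaging model
sites = [(torch.randn(64, 16), torch.randint(0, 2, (64, 1)).float()) for _ in range(3)]

for communication_round in range(5):
    local_states = [local_update(global_model, x, y) for x, y in sites]
    global_model.load_state_dict(federated_average(local_states))

print("finished", communication_round + 1, "federated rounds")
```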
      Fig. 3. FDA-approved AI software related to oncologic imaging, from the ACR Data Science Institute database.
      Aside from the need for rigorous clinical validation linked to patient outcomes, integration with current health IT systems and the radiology workflow will also be a challenge. Radiology organizations such as the American College of Radiology (ACR) and the Radiological Society of North America (RSNA) have organized demonstrations to engage with relevant stakeholders and develop standards to facilitate this [
      • Wiggins W.F.
      • et al.
      Imaging AI in practice: a demonstration of future workflow using integration standards.
      ]. With ever-increasing imaging volumes and workforce shortages, AI that adds time to a radiologist’s day will not be adopted. A healthcare system’s or institution’s ability to integrate new AI software into its IT systems may also be constrained by the time and resources available, as well as by the general challenges of interoperability among production systems in medical informatics.
      Interpretability of AI is another emerging area of focus, aiming to improve transparency and move away from “black box” models [
      • Linardatos P.
      • Papastefanopoulos V.
      • Kotsiantis S.
      Explainable AI: a review of machine learning interpretability methods.
      ]. AI algorithms will become more acceptable, and more readily adopted, if a clinical rationale can support their predictions. Interpretability is also important for anticipating when AI systems might fail. Regulatory bodies have recognized the importance of accountability, which was recently debated in the context of the EU General Data Protection Regulation [
      • Doshi-Velez F.
      • et al.
      Accountability of AI under the law: the role of explanation.
      ]. In fact, the EU Commission proposed a legal framework for AI, the AI Act, to “promote the uptake of AI and develop an ecosystem of trust” [
      • Vokinger K.N.
      • Gasser U.
      Regulating AI in medicine in the United States and Europe.
      ].
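      As one basic example of interpretability tooling, the sketch below computes a gradient-based saliency map for a stand-in classifier, highlighting which input pixels most influenced the prediction; more elaborate explanation methods exist, and this is an illustration rather than a recommendation of any specific approach.
```python
import torch
import torch.nn as nn

# Simple gradient-based saliency: the magnitude of the gradient of the predicted score
# with respect to each input pixel indicates which image regions most influenced the
# prediction -- one basic way to sanity-check whether a model attends to the lesion.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
model.eval()

image = torch.rand(1, 1, 128, 128, requires_grad=True)   # synthetic input image
score = model(image).squeeze()
score.backward()

saliency = image.grad.abs()[0, 0]                          # per-pixel importance map
top = torch.topk(saliency.flatten(), 5).indices
print("most influential pixels (flat indices):", top.tolist())
```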
      ]. Finally, the business case for AI will be important to drive adoption. The initial use cases in clinical practice have largely centered on increased efficiency and potential clinical benefit, without well-defined measurable outcomes. For example, DL algorithms that reduce MRI acquisition time translate to direct cost savings in scanner and medical staff time, while also improving the overall patient experience. More recently, direct reimbursement for AI in the United States has been granted to a limited number of AI applications that have either demonstrated significant clinical improvement with the potential to decrease overall healthcare costs or democratized care by improving access to a particular service [
      • Chen M.M.
      • Golding L.P.
      • Nicola G.N.
      Who will pay for AI.
      ]. As we move progressively towards value-based payment models, a similar pathway will likely follow. Oncologic imaging AI algorithms that decrease overall costs and increase access to screening tools such as mammography or low-dose chest CT may be of value. Imaging biomarkers that reduce the overall cost of care by optimizing treatment algorithms and improving patient outcomes will also be valued.

      7. Conclusion

      AI is augmenting the role of radiologists and radiology in modern oncology as data science becomes increasingly incorporated into clinical imaging, further enhancing imaging-derived insights. The power of AI lies in its ability to reveal previously unknown or even counterintuitive information patterns that might have been overlooked, missed, or hard to perceive by radiologists. Besides imaging, ML is broadly applied to multimodal medical data, including medical text reports, biospecimen-based assays, and monitoring signals. Integrated analysis across these platforms, offering a comprehensive and dynamic view of heterogeneous cancer, will further improve AI model performance. However, existing ML studies have mainly focused on tackling specific, well-defined clinical applications with structured input data and simplified clinical endpoints, known as narrow AI. Given the complex nature of oncology, generalizable AI algorithms built on multimodal real-world data will become the next-generation approach to making a real clinical impact. It is worth mentioning, however, that the deployment of these technological advancements is complex and may take decades. More importantly, radiologists can and will play a leadership role by directing ongoing AI research efforts to address the most pressing clinical challenges rather than merely comparing AI systems against human experts. Such algorithms will be welcomed and adopted to benefit cancer patient management.

      CRediT authorship contribution statement

      Melissa Chen: Conceptualization, Writing – original draft, Admir Terzic: Writing – original draft, Anton Becker: Writing – review and editing, Jason Johnson: Writing – review and editing, Carol Wu: Writing – review and editing, Christoph Wald: Writing – review and editing, Jia Wu: Conceptualization, Writing – original draft, Writing – review and editing.

      Appendix A. Supplementary material

      References

        • Gillies R.J.
        • Kinahan P.E.
        • Hricak H.
        Radiomics: images are more than pictures, they are data.
        Radiology. 2016; 278: 563-577
        • Nagayama Y.
        • et al.
        Deep learning–based reconstruction for lower-dose pediatric CT: technical principles, image characteristics, and clinical implementations.
        RadioGraphics. 2021; 41: 1936-1953
        • Singh R.
        • et al.
        Artificial intelligence in image reconstruction: the change is here.
        Phys. Med. 2020; 79: 113-125
        • Shen L.
        • Zhao W.
        • Xing L.
        Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning.
        Nat. Biomed. Eng. 2019; 3: 880-888
        • Summers P.
        • et al.
        Whole-body magnetic resonance imaging: technique, guidelines and key applications.
        Ecancermedicalscience. 2021; 15
        • Akçakaya M.
        • et al.
        Scan‐specific robust artificial‐neural‐networks for k‐space interpolation (RAKI) reconstruction: database‐free deep learning for fast imaging.
        Magn. Reson. Med. 2019; 81: 439-453
        • Johnson P.M.
        • Recht M.P.
        • Knoll F.
        Improving the speed of MRI with artificial intelligence.
        Seminars in Musculoskeletal Radiology. Thieme Medical Publishers, 2020
        • Arshad M.
        • et al.
        Transfer learning in deep neural network based under-sampled MR image reconstruction.
        Magn. Reson. Imaging. 2021; 76: 96-107
        • Faucon A.-L.
        • Bobrie G.
        • Clément O.
        Nephrotoxicity of iodinated contrast media: from pathophysiology to prevention strategies.
        Eur. J. Radiol. 2019; 116: 231-241
        • Dillman J.R.
        • Davenport M.S.
        Gadolinium retention — 5 years later….
        Pediatr. Radiol. 2020; 50: 166-167
        • Haubold J.
        • et al.
        Contrast agent dose reduction in computed tomography with deep learning using a conditional generative adversarial network.
        Eur. Radiol. 2021; 31: 6087-6095
        • Gong E.
        • et al.
        Deep learning enables reduced gadolinium dose for contrast‐enhanced brain MRI.
        J. Magn. Reson. Imaging. 2018; 48: 330-340
        • Chaudhari A.S.
        • et al.
        Low-count whole-body PET with deep learning in a multicenter and externally validated study.
        NPJ Digit. Med. 2021; 4: 1-11
        • Passiglia F.
        • et al.
        Benefits and harms of lung cancer screening by chest computed tomography: a systematic review and meta-analysis.
        J. Clin. Oncol. 2021; 39: 2574-2585
        • Welch H.G.
        • et al.
        Breast-cancer tumor size, overdiagnosis, and mammography screening effectiveness.
        N. Engl. J. Med. 2016; 375: 1438-1447
        • Rodriguez-Ruiz A.
        • et al.
        Stand-alone artificial intelligence for breast cancer detection in mammography: comparison with 101 radiologists.
        JNCI: J. Natl. Cancer Inst. 2019; 111: 916-922
        • McKinney S.M.
        • et al.
        International evaluation of an AI system for breast cancer screening.
        Nature. 2020; 577: 89-94
        • Larsen M.
        • et al.
        Artificial intelligence evaluation of 122 969 mammography examinations from a population-based screening program.
        Radiology. 2022; 212381
        • Shoshan Y.
        • et al.
        Artificial intelligence for reducing workload in breast cancer screening with digital breast tomosynthesis.
        Radiology. 2022; 211105
        • Parikh J.R.
        • Sun J.
        • Mainiero M.B.
        Prevalence of burnout in breast imaging radiologists.
        J. Breast Imaging. 2020; 2: 112-118
        • Yala A.
        • et al.
        Optimizing risk-based breast cancer screening policies with reinforcement learning.
        Nat. Med. 2022: 1-8
        • Schreuder A.
        • et al.
        Artificial intelligence for detection and characterization of pulmonary nodules in lung cancer CT screening: ready for practice?.
        Transl. Lung Cancer Res. 2021; 10: 2378
        • Ardila D.
        • et al.
        End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography.
        Nat. Med. 2019; 25: 954-961
        • Trajanovski S.
        • et al.
        Towards radiologist-level cancer risk assessment in CT lung screening using deep learning.
        Comput. Med. Imaging Graph. 2021; 90: 101883
        • Ziegelmayer S.
        • et al.
        Cost-effectiveness of artificial intelligence support in computed tomography-based lung cancer screening.
        Cancers. 2022; 14: 1729
        • Huang P.
        • et al.
        Prediction of lung cancer risk at follow-up screening with low-dose CT: a training and validation study of a deep learning method.
        Lancet Digit. Health. 2019; 1: e353-e362
        • Houshyar R.
        • et al.
        Outcomes of artificial intelligence volumetric assessment of kidneys and renal tumors for preoperative assessment of nephron-sparing interventions.
        J. Endourol. 2021; 35: 1411-1418
        • Niranjan A.
        • et al.
        Guidelines for multiple brain metastases radiosurgery.
        Prog. Neurol. Surg. 2019; 34: 100-109
        • Lee W.J.
        • et al.
        Clinical outcomes of patients with multiple courses of radiosurgery for brain metastases from non-small cell lung cancer.
        Sci. Rep. 2022; 12: 10712
        • Zhou Z.
        • et al.
        Computer-aided detection of brain metastases in T1-weighted MRI for stereotactic radiosurgery using deep learning single-shot detectors.
        Radiology. 2020; 295: 407-415
        • Zhou Z.
        • et al.
        MetNet: computer-aided segmentation of brain metastases in post-contrast T1-weighted magnetic resonance imaging.
        Radiother. Oncol. 2020; 153: 189-196
        • Asbach J.C.
        • et al.
        Deep learning tools for the cancer clinic: an open-source framework with head and neck contour validation.
        Radiat. Oncol. 2022; 17: 1-13
        • Mehralivand S.
        • et al.
        Multicenter multireader evaluation of an artificial intelligence-based attention mapping system for the detection of prostate cancer with multiparametric MRI.
        Am. J. Roentgenol. 2020; 215: 903-912
        • Wibmer A.G.
        • et al.
        Local extent of prostate cancer at MRI versus prostatectomy histopathology: associations with long-term oncologic outcomes.
        Radiology. 2022; 302: 595-602
        • Papp L.
        • et al.
        Supervised machine learning enables non-invasive lesion characterization in primary prostate cancer with [68Ga] Ga-PSMA-11 PET/MRI.
        Eur. J. Nucl. Med. Mol. Imaging. 2021; 48: 1795-1805
        • Shah R.P.
        • et al.
        Machine learning radiomics model for early identification of small-cell lung cancer on computed tomography scans.
        JCO Clin. Cancer Inform. 2021; 5: 746-757
        • Ma M.
        • et al.
        Predicting the molecular subtype of breast cancer and identifying interpretable imaging features using machine learning algorithms.
        Eur. Radiol. 2022; 32: 1652-1662
        • Wu J.
        • et al.
        Magnetic resonance imaging and molecular features associated with tumor-infiltrating lymphocytes in breast cancer.
        Breast Cancer Res. 2018; 20: 1-15
        • Liu W.
        • et al.
        Preoperative prediction of Ki-67 status in breast cancer with multiparametric MRI using transfer learning.
        Acad. Radiol. 2021; 28: e44-e53
        • Wu J.
        • Mayer A.T.
        • Li R.
        Integrated imaging and molecular analysis to decipher tumor microenvironment in the era of immunotherapy.
        Seminars in Cancer Biology. Elsevier, 2020
        • Bera K.
        • et al.
        Predicting cancer outcomes with radiomics and artificial intelligence in radiology.
        Nat. Rev. Clin. Oncol. 2021: 1-15
        • Lee J.Y.
        • et al.
        Radiomic machine learning for predicting prognostic biomarkers and molecular subtypes of breast cancer using tumor heterogeneity and angiogenesis properties on MRI.
        Eur. Radiol. 2022; 32: 650-660
        • Bae S.
        • et al.
        Radiomic MRI phenotyping of glioblastoma: improving survival prediction.
        Radiology. 2018; 289: 797-806
        • Rathore S.
        • et al.
        Combining MRI and histologic imaging features for predicting overall survival in patients with glioma.
        Radiol.: Imaging Cancer. 2021; 3: e200108
        • Vallieres M.
        • et al.
        Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer.
        Sci. Rep. 2017; 7: 1-14
        • Wu J.
        • et al.
        Tumor subregion evolution-based imaging features to assess early response and predict prognosis in oropharyngeal cancer.
        J. Nucl. Med. 2020; 61: 327-336
        • Le N.Q.K.
        • et al.
        Machine learning-based radiomics signatures for EGFR and KRAS mutations prediction in non-small-cell lung cancer.
        Int. J. Mol. Sci. 2021; 22: 9254
        • Karami E.
        • et al.
        Quantitative MRI biomarkers of stereotactic radiotherapy outcome in brain metastasis.
        Sci. Rep. 2019; 9: 1-11
        • Wu J.
        • et al.
        Intratumoral spatial heterogeneity at perfusion MR imaging predicts recurrence-free survival in locally advanced breast cancer treated with neoadjuvant chemotherapy.
        Radiology. 2018; 288: 26-35
        • Jimenez J.E.
        • et al.
        Lesion-based radiomics signature in pretherapy 18F-FDG PET predicts treatment response to ibrutinib in lymphoma.
        Clin. Nucl. Med. 2022; 47: 209-218
        • Makuku R.
        • et al.
        Current and future perspectives of PD-1/PDL-1 blockade in cancer immunotherapy.
        J. Immunol. Res. 2021; 2021: 6661406
        • Sun R.
        • et al.
        A radiomics approach to assess tumour-infiltrating CD8 cells and response to anti-PD-1 or anti-PD-L1 immunotherapy: an imaging biomarker, retrospective multicohort study.
        Lancet Oncol. 2018; 19: 1180-1191
        • Brendlin A.S.
        • et al.
        A machine learning model trained on dual-energy CT radiomics significantly improves immunotherapy response prediction for patients with stage IV melanoma.
        J. Immunother. Cancer. 2021; 9: 11
        • Wu J.
        • et al.
        Radiological tumour classification across imaging modality and histology.
        Nat. Mach. Intell. 2021; 3: 787-798
        • Provenzale J.M.
        • Ison C.
        • Delong D.
        Bidimensional measurements in brain tumors: assessment of interobserver variability.
        AJR Am. J. Roentgenol. 2009; 193: W515-W522
        • Provenzale J.M.
        • Mancini M.C.
        Assessment of intra-observer variability in measurement of high-grade brain tumors.
        J. Neurooncol. 2012; 108: 477-483
        • Kidd A.C.
        • et al.
        Fully automated volumetric measurement of malignant pleural mesothelioma by deep learning AI: Validation and comparison with modified RECIST response criteria.
        Thorax. 2022;
        • Lotan E.
        • et al.
        State of the art: machine learning applications in glioma imaging.
        Am. J. Roentgenol. 2019; 212: 26-37
        • Lotan E.
        • et al.
        Development and practical implementation of a deep learning–based pipeline for automated pre-and postoperative glioma segmentation.
        Am. J. Neuroradiol. 2022; 43: 24-32
        • Nishino M.
        Tumor response assessment for precision cancer therapy: response evaluation criteria in solid tumors and beyond.
        Am. Soc. Clin. Oncol. Educ. Book. 2018; 38: 1019-1029
        • Wu J.
        • et al.
        Integrating tumor and nodal imaging characteristics at baseline and mid-treatment computed tomography scans to predict distant metastasis in oropharyngeal cancer treated with concurrent chemoradiotherapy.
        Int. J. Radiat. Oncol. Biol. Phys. 2019; 104: 942-952
        • Zhang N.
        • et al.
        Early response evaluation using primary tumor and nodal imaging features to predict progression-free survival of locally advanced non-small cell lung cancer.
        Theranostics. 2020; 10: 11707
        • Allen B.
        • et al.
        ACR data science institute artificial intelligence survey.
        J. Am. Coll. Radiol. 2021; 18: 1153-1159
        Dreyer, K. ACR Data Science Institute AI Central. 2022 [cited 24 September 2022]. Available from: https://aicentral.acrdsi.org/.

        • Bai X.
        • et al.
        Advancing COVID-19 diagnosis with privacy-preserving collaboration in artificial intelligence.
        Nat. Mach. Intell. 2021; 3: 1081-1089
        • Wiggins W.F.
        • et al.
        Imaging AI in practice: a demonstration of future workflow using integration standards.
        Radiol.: Artif. Intell. 2021; 3: e210152
        • Linardatos P.
        • Papastefanopoulos V.
        • Kotsiantis S.
        Explainable AI: a review of machine learning interpretability methods.
        Entropy. 2020; 23: 18
        • Doshi-Velez F.
        • et al.
        Accountability of AI under the law: the role of explanation.
        arXiv preprint arXiv:1711.01134, 2017
        • Vokinger K.N.
        • Gasser U.
        Regulating AI in medicine in the United States and Europe.
        Nat. Mach. Intell. 2021; 3: 738-739
        • Chen M.M.
        • Golding L.P.
        • Nicola G.N.
        Who will pay for AI.
        Radiol.: Artif. Intell. 2021; 3: e210030