Journal articles on the topic 'AI Observer'


Consult the top 50 journal articles for your research on the topic 'AI Observer.'


1

Straeubig, Michael. "Games, AI and Systems." Eludamos: Journal for Computer Game Culture 10, no. 1 (April 21, 2020): 141–60. http://dx.doi.org/10.7557/23.6176.

Full text
Abstract:
In recent years, we have observed impressive advancements at the intersection of games and artificial intelligence. Often these developments are described in terms of technological progress, while public discourses on their cultural, social and political impact are largely decoupled. I present an alternative rhetoric by speculating about the emergence of AI within social systems. In a radical departure from the dominant discourse, I describe seven roles - Mechanic, Alter/Ego, Observer, Protector, Player, Creator and God - that an AI may assume in the environment of videogames. I reflect on the ramifications of these roles for the idea of an artificial general intelligence (AGI), mainly hoping to irritate the prevailing discussion.
APA, Harvard, Vancouver, ISO, and other styles
2

Salinel, Brandon, Matthew Grudza, Sarah Zeien, Matthew Murphy, Jake Adkins, Corey Jensen, Curt Bay, et al. "Comparison of segmentation methods to improve throughput in annotating AI-observer for detecting colorectal cancer." Journal of Clinical Oncology 40, no. 4_suppl (February 1, 2022): 142. http://dx.doi.org/10.1200/jco.2022.40.4_suppl.142.

Abstract:
142 Background: Colorectal cancer (CRC) is the second leading cause of cancer-related deaths, and its outcome can be improved with better detection of incidental early CRC on routine CT of the abdomen and pelvis (CTAP). An AI second observer (AI) has shown potential for this task, as described in our companion abstract. The bottleneck in training AI is the time required for radiologists to segment the CRC. We compared two techniques for accelerating the segmentation process: 1) sparse annotation (annotating some of the CT slices containing CRC instead of every slice); 2) allowing AI to perform initial segmentation followed by human adjustment. Methods: A 2D U-Net convolutional neural network (CNN) containing 31 million trainable parameters was trained with 58 CRC CT images from Banner MD Anderson (AZ) and MD Anderson Cancer Center (TX) (51 used for training and 7 for validation) and 59 normal CT scans from Banner MD Anderson Cancer Center. Twenty of the 25 CRC cases from public domain data (The Cancer Genome Atlas, TCGA) were used to evaluate the performance of the models. The CRC was segmented using ITK-SNAP open-source software (v. 3.8). For the first objective, 3 separate models were trained (fully annotated CRC, every other slice, and every third slice). The AI annotation on the TCGA dataset was analyzed by the percentage of correct detections of CRC, the number of false positives, and the Dice similarity coefficient (DSC). If parts of the CRC were flagged by AI, the detection was considered correct. A detection was considered a false positive if the marked lesion did not overlap with CRC; contiguous false positives across different slices of the CT image were considered a single false positive. DSC measures the quality of the segmentation by measuring the overlap between the ground truth and the AI-detected lesion. For the second objective, the time required to adjust the AI-produced annotation was compared to the time required for annotating the entire CRC without AI assistance.
The AI-models were trained using ensemble learning (see our companion abstract for details of the techniques). Results: Our results showed that skipping slices of tumor in training did not alter the accuracy, false positives, or DSC classification of the model. When adjusting the AI-observer segmentation, there was a trend toward decreasing the time required to adjust the annotation compared to full manual segmentation, but the difference was not statistically significant (Table; p=0.121). Conclusions: Our results show that both skipping slices of tumor as well as starting with AI-produced annotation can potentially decrease the effort required to produce high-quality ground truth without compromising the performance of AI. These techniques can help improve the throughput to obtain a large volume of cases to train AI for detecting CRC.[Table: see text]
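Several of the abstracts above score segmentations with the Dice similarity coefficient (DSC). As a rough illustration of the metric only (not the authors' code; the toy mask shapes below are invented), a minimal NumPy sketch:

```python
import numpy as np

def dice_coefficient(ground_truth: np.ndarray, prediction: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    gt = ground_truth.astype(bool)
    pred = prediction.astype(bool)
    denom = gt.sum() + pred.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(gt, pred).sum() / denom

# Toy 2D "slices": 6 voxels labelled in each mask, 4 of which overlap
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 0:3] = True          # ground-truth lesion
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True        # shifted AI prediction
score = dice_coefficient(gt, pred)  # 2*4 / (6+6) = 0.666...
```

A score of 1.0 means exact overlap; 0.0 means the AI-detected lesion misses the ground truth entirely.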
3

Smith, Andrew Dennis, Brian C. Allen, Asser Abou Elkassem, Rafah Mresh, Seth T. Lirette, Yujan Shrestha, J. David Giese, et al. "Multi-institutional comparative effectiveness of advanced cancer longitudinal imaging response evaluation methods: Current practice versus artificial intelligence-assisted." Journal of Clinical Oncology 38, no. 15_suppl (May 20, 2020): 2010. http://dx.doi.org/10.1200/jco.2020.38.15_suppl.2010.

Abstract:
2010 Background: Current-practice methods to evaluate advanced cancer longitudinal tumor response include manual measurements on digital medical images and dictation of text-based reports that are prone to errors, inefficient, and associated with low inter-observer agreement. The purpose of this study is to compare the effectiveness of advanced cancer longitudinal imaging response evaluation using current practice versus artificial intelligence (AI)-assisted methods. Methods: For this multi-institutional longitudinal retrospective study, body CT images from 120 consecutive patients with multiple serial imaging exams and advanced cancer treated with systemic therapy were independently evaluated by 24 radiologists using current-practice versus AI-assisted methods. For the current practice method, radiologists dictated text-based reports and separately categorized response (CR, PR, SD, and PD). For the AI-assisted method, custom software included AI algorithms for tumor measurement, target and non-target location labelling, and tumor localization at follow up. The AI-assisted software automatically categorized tumor response per RECIST 1.1 calculations and displayed longitudinal data in the form of a graph, table, and key images. All studies were read independently in triplicate for assessment of inter-observer agreement. Comparative effectiveness metrics included: major errors, time of image interpretation, and inter-observer agreement for final response category. Results: Major errors were found in 27.5% (99/360) for current-practice versus 0.3% (1/360) for AI-assisted methods (p < 0.001), corresponding to a 99% reduction in major errors. Average time of interpretation by radiologists was 18.7 min for current-practice versus 9.8 min for AI-assisted method (p < 0.001), with the AI-assisted method being nearly twice as fast. 
Total inter-observer agreement on final response categorization for radiologists was 52% (62/120) for current-practice versus 75% (90/120) for AI-assisted method (p < 0.001), corresponding to a 45% increase in total inter-observer agreement. Conclusion: In a large multi-institutional study, AI-assisted advanced cancer longitudinal imaging response evaluation significantly reduced major errors, was nearly twice as fast, and increased inter-observer agreement relative to the current-practice method, thereby establishing a new and improved standard of care.
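The AI-assisted software described above categorizes response per RECIST 1.1. A simplified sketch of the target-lesion arithmetic behind those categories (thresholds are the standard RECIST 1.1 ones; non-target and new-lesion rules are omitted for brevity, so this is an illustration, not the study's implementation):

```python
def recist_category(baseline_sum: float, nadir_sum: float, current_sum: float) -> str:
    """Simplified RECIST 1.1 category from sums of target-lesion diameters (mm)."""
    if current_sum == 0:
        return "CR"  # complete response: all target lesions gone
    if baseline_sum > 0 and (baseline_sum - current_sum) / baseline_sum >= 0.30:
        return "PR"  # partial response: >=30% decrease from baseline
    if nadir_sum > 0 and (current_sum - nadir_sum) / nadir_sum >= 0.20 \
            and (current_sum - nadir_sum) >= 5:
        return "PD"  # progression: >=20% and >=5 mm increase from nadir
    return "SD"      # stable disease otherwise

print(recist_category(100, 60, 60))  # PR: 40% decrease from baseline
print(recist_category(100, 60, 75))  # PD: 25% (+15 mm) increase from nadir
```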
4

Gürsoy Çoruh, Ayşegül, Bülent Yenigün, Çağlar Uzun, Yusuf Kahya, Emre Utkan Büyükceran, Atilla Elhan, Kaan Orhan, and Ayten Kayı Cangır. "A comparison of the fusion model of deep learning neural networks with human observation for lung nodule detection and classification." British Journal of Radiology 94, no. 1123 (July 1, 2021): 20210222. http://dx.doi.org/10.1259/bjr.20210222.

Abstract:
Objectives: To compare the diagnostic performance of a newly developed artificial intelligence (AI) algorithm derived from the fusion of convolutional neural networks (CNNs) versus human observers in the estimation of malignancy risk in pulmonary nodules. Methods: The study population consists of 158 nodules from 158 patients. All nodules (81 benign and 77 malignant) were determined to be malignant or benign by a radiologist based on pathologic assessment and/or follow-up imaging. Two radiologists and an AI platform analyzed the nodules based on the Lung-RADS classification. The two observers also noted the size, location, and morphologic features of the nodules. An intraclass correlation coefficient was calculated for both observers and the AI; ROC curve analysis was performed to determine diagnostic performances. Results: Nodule size, presence of spiculation, and presence of fat were significantly different between the malignant and benign nodules (p < 0.001 for all three). Eighteen (11.3%) nodules were not detected, and thus not analyzed, by the AI. Observer 1, observer 2, and the AI had an AUC of 0.917 ± 0.023, 0.870 ± 0.033, and 0.790 ± 0.037, respectively, in the ROC analysis of malignancy probability. The observers were in almost perfect agreement for localization, nodule size, and Lung-RADS classification [κ (95% CI) = 0.984 (0.961–1.000), 0.978 (0.970–0.984), and 0.924 (0.878–0.970), respectively]. Conclusion: The performance of the fusion AI algorithm in estimating the risk of malignancy was slightly lower than that of the observers. Fusion AI algorithms might be applied in an assisting role, especially for inexperienced radiologists. Advances in knowledge: In this study, we proposed a fusion model using four state-of-the-art object detectors for lung nodule detection and discrimination. A fusion of deep learning neural networks might be used in a supportive role for radiologists when interpreting lung nodule discrimination.
5

Lazim, Izzuddin M., Abdul Rashid Husain, Nurul Adilla Mohd Subha, and Mohd Ariffanan Mohd Basri. "Intelligent Observer-Based Feedback Linearization for Autonomous Quadrotor Control." International Journal of Engineering & Technology 7, no. 4.35 (November 30, 2018): 904. http://dx.doi.org/10.14419/ijet.v7i4.35.26280.

Abstract:
The presence of disturbances can cause instability in quadrotor flight and can be dangerous, especially when operating near obstacles or other aerial vehicles. In this paper, a hybrid controller called state feedback with intelligent disturbance observer-based control (SF-iDOBC) is developed for trajectory tracking of a quadrotor in the presence of time-varying disturbances, e.g. wind. This is achieved by integrating an artificial intelligence (AI) technique with disturbance observer-based feedback linearization to achieve better disturbance rejection. Here, the observer estimates the disturbances acting on the quadrotor, while the AI technique, a radial basis function neural network (RBFNN), compensates for the disturbance estimation error. To improve the error compensation of the RBFNN, the k-means clustering method is used to find the optimal centers of the Gaussian activation functions. In addition, the weights of the RBFNN are tuned online using an adaptation law derived via the Lyapunov method, which eliminates offline training. In the simulation experiment conducted, a total of four input nodes and five hidden neurons were used to compensate for the error. The results obtained demonstrate the effectiveness and merits of the theoretical development.
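As a rough sketch of the RBFNN machinery this abstract describes, with Gaussian activations over k-means-style centers (the center values, width, and input below are invented placeholders, and the online Lyapunov adaptation law is omitted):

```python
import numpy as np

def rbf_activations(x: np.ndarray, centers: np.ndarray, width: float) -> np.ndarray:
    """Gaussian radial basis activations: phi_i(x) = exp(-||x - c_i||^2 / (2*width^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbfnn_output(x, centers, weights, width):
    """Network output: weighted sum of hidden-layer activations."""
    return weights @ rbf_activations(x, centers, width)

rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 4))  # five hidden neurons, four input nodes (as in the abstract)
weights = np.zeros(5)              # weights start at zero; tuned online in the paper
x = rng.normal(size=4)
y = rbfnn_output(x, centers, weights, width=1.0)  # 0.0 while weights are zero
```

In the paper the centers would come from k-means clustering of the input data rather than random draws, and the weights would be updated online as the disturbance estimation error evolves.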
6

Salinel, Brandon, Matthew Grudza, Sarah Zeien, Matthew Murphy, Jake Adkins, Corey Jensen, Curt Bay, et al. "Ensemble voting decreases false positives in AI second-observer reads for detecting colorectal cancer." Journal of Clinical Oncology 40, no. 4_suppl (February 1, 2022): 141. http://dx.doi.org/10.1200/jco.2022.40.4_suppl.141.

Abstract:
141 Background: Colorectal cancer (CRC) is the second leading cause of cancer-related deaths, and survival can be improved if early, suspect imaging features on CT of the abdomen and pelvis (CTAP) can be routinely identified. At present, up to 40% of these features go undiagnosed on routine CTAP, but this can be improved with a second observer. In this study, we developed a deep ensemble learning method for detecting CRC on CTAP to determine if increasing the required agreement between ensemble models can decrease the false positives detected by an artificial intelligence (AI) second observer. Methods: A 2D U-Net convolutional neural network (CNN) containing 31 million trainable parameters was trained with 58 CRC CT images from Banner MD Anderson (AZ) and MD Anderson Cancer Center (TX) (51 used for training and 7 for validation) and 59 normal CT scans from Banner MD Anderson Cancer Center. Twenty of the 25 CRC cases from public domain data (The Cancer Genome Atlas) were used to evaluate the performance of the models. The CRC was segmented using ITK-SNAP open-source software (v. 3.8). To apply the deep ensemble approach, five CNN models were trained independently with random initialization using the same U-Net architecture and the same training data. Given a testing CT scan, each of the five trained CNN models was applied to produce a tumor segmentation for that scan. The tumor segmentation results produced by the trained CNN models were then fused using a simple majority voting rule to produce consensus tumor segmentation results. The segmentation was analyzed by the percentage of correct detections, the number of false positives per case, and the Dice similarity coefficient (DSC). If parts of the CRC were flagged by AI, the detection was considered correct. A detection was considered a false positive if the marked lesion did not overlap with any CRC; contiguous false positives across different slices of the CT image were considered a single false positive.
DSC measures the quality of the segmentation by measuring the overlap between the ground-truth and AI detected lesion. Results: Our results showed that increasing the agreement between the 5 models dramatically decreases the number of false positives per CT at the expense of slight decrease in accuracy and DSC. This is described in the table. Conclusions: Our results show that AI-based second observer can potentially detect CRC on routine CTAP. Although the initial result yields high false positives per case, ensemble voting is an effective method for decreasing the false positives with a slight decrease in accuracy. This technique can be further improved for eventual clinical application.[Table: see text]
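The majority-voting fusion described above can be illustrated in a few lines. The five toy masks below are invented, and a real pipeline would operate on 3D CT volumes rather than four voxels:

```python
import numpy as np

def majority_vote(masks, min_agreement: int = 3) -> np.ndarray:
    """Fuse binary segmentation masks from independently trained models:
    a voxel is kept as tumour only if at least `min_agreement` models flag it."""
    votes = np.sum(np.stack(masks).astype(np.uint8), axis=0)
    return votes >= min_agreement

# Five toy model outputs over 4 voxels; raising the agreement threshold
# suppresses voxels that only one or two models flagged, which is the
# false-positive-reduction mechanism the abstract reports.
masks = [np.array([1, 1, 0, 0], dtype=bool),
         np.array([1, 1, 0, 0], dtype=bool),
         np.array([1, 0, 1, 0], dtype=bool),
         np.array([1, 1, 0, 0], dtype=bool),
         np.array([0, 1, 0, 1], dtype=bool)]
fused = majority_vote(masks, min_agreement=3)  # vote counts: [4, 4, 1, 1]
```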
7

Palm, Christiane, Catherine E. Connolly, Regina Masser, Barbara Padberg Sgier, Eva Karamitopoulou, Quentin Simon, Beata Bode, and Marianne Tinguely. "Determining HER2 Status by Artificial Intelligence: An Investigation of Primary, Metastatic, and HER2 Low Breast Tumors." Diagnostics 13, no. 1 (January 3, 2023): 168. http://dx.doi.org/10.3390/diagnostics13010168.

Abstract:
The expression of human epidermal growth factor receptor 2 (HER2) protein or gene transcripts is critical for therapeutic decision making in breast cancer. We examined the performance of a digitalized and artificial intelligence (AI)-assisted workflow for HER2 status determination in accordance with the American Society of Clinical Oncology (ASCO)/College of American Pathologists (CAP) guidelines. Our preliminary cohort consisted of 495 primary breast carcinomas, and our study cohort included 67 primary breast carcinomas and 30 metastatic deposits, which were evaluated for HER2 status by immunohistochemistry (IHC) and in situ hybridization (ISH). Three practicing breast pathologists independently assessed and scored slides, building the ground truth. Following a washout period, pathologists were provided with the results of the AI digital image analysis (DIA) and asked to reassess the slides. Both rounds of assessment from the pathologists were compared to the AI results and ground truth for each slide. We observed an overall HER2 positivity rate of 15% in our study cohort. Moderate agreement (Cohen's κ 0.59) was observed between the ground truth and AI on IHC, with most discrepancies occurring between 0 and 1+ scores. Inter-observer agreement amongst pathologists was substantial (Fleiss' κ 0.77) and pathologists' agreement with AI scores was 80.6%. Substantial agreement of the AI with the ground truth (Cohen's κ 0.80) was detected on ISH-stained slides, and the accuracy of AI was similar for the primary and metastatic tumors. We demonstrated the feasibility of a combined HER2 IHC and ISH AI workflow, with a Cohen's κ of 0.94 when assessed in accordance with the ASCO/CAP recommendations.
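Agreement figures like those above are Cohen's kappa values. A minimal sketch of the statistic for two raters (the toy HER2 IHC scores below are invented, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b) -> float:
    """Cohen's kappa: chance-corrected agreement between two raters,
    (p_observed - p_expected) / (1 - p_expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # expected chance agreement from each rater's marginal label frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n**2
    return (observed - expected) / (1.0 - expected)

# Toy HER2 IHC scores (0, 1+, 2+, 3+) from an AI reader vs. ground truth
ai    = ["0", "1+", "1+", "2+", "3+", "0", "1+", "2+"]
truth = ["0", "0",  "1+", "2+", "3+", "0", "1+", "3+"]
kappa = cohens_kappa(ai, truth)  # observed 6/8, expected 16/64 -> 0.666...
```

Values around 0.6 are conventionally read as "moderate", 0.6-0.8 as "substantial", matching the abstract's wording.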
8

Ajmera, Pranav, Amit Kharat, Tanveer Gupte, Richa Pant, Viraj Kulkarni, Vinay Duddalwar, and Purnachandra Lamghare. "Observer performance evaluation of the feasibility of a deep learning model to detect cardiomegaly on chest radiographs." Acta Radiologica Open 11, no. 7 (July 2022): 205846012211073. http://dx.doi.org/10.1177/20584601221107345.

Abstract:
Background: Cardiothoracic ratio (CTR) is the ratio of the diameter of the heart to the diameter of the thorax. An abnormal CTR (>0.55) is often an indicator of an underlying pathological condition. Accurate prediction of an abnormal CTR on chest X-rays (CXRs) aids in the early diagnosis of clinical conditions. Purpose: We propose a deep learning (DL)-based model for automatic CTR calculation to assist radiologists with rapid diagnosis of cardiomegaly and thus optimise the radiology workflow. Material and Methods: The study population included 1012 posteroanterior CXRs from a single institution. The Attention U-Net DL architecture was used for the automatic calculation of CTR. An observer performance test was conducted to assess the radiologist's performance in diagnosing cardiomegaly with and without artificial intelligence assistance. Results: The U-Net model exhibited a sensitivity of 0.80 [95% CI: 0.75, 0.85], specificity >99%, precision of 0.99 [95% CI: 0.98, 1], and an F1 score of 0.88 [95% CI: 0.85, 0.91]. Furthermore, the sensitivity of the reviewing radiologist in identifying cardiomegaly increased from 40.50% to 88.4% when aided by the AI-generated CTR. Conclusion: Our segmentation-based AI model demonstrated high specificity (>99%) and sensitivity (80%) for CTR calculation. The performance of the radiologist on the observer performance test improved significantly with the provision of AI assistance. A DL-based segmentation model for rapid quantification of CTR therefore has significant potential for use in clinical workflows, reducing radiologists' burden and flagging an abnormally enlarged heart early on.
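The CTR arithmetic in this abstract is simple enough to sketch directly. The widths below are invented example measurements; in the study, the DL segmentation model's job is producing the two diameters automatically from the radiograph:

```python
def cardiothoracic_ratio(cardiac_width_mm: float, thoracic_width_mm: float) -> float:
    """CTR = widest transverse cardiac diameter / widest internal thoracic diameter."""
    return cardiac_width_mm / thoracic_width_mm

def is_cardiomegaly(ctr: float, threshold: float = 0.55) -> bool:
    """Flag an abnormal CTR using the >0.55 threshold from the abstract."""
    return ctr > threshold

ctr = cardiothoracic_ratio(160.0, 280.0)  # ~0.571 for these example widths
flagged = is_cardiomegaly(ctr)            # exceeds 0.55, so flagged
```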
9

Al-Hammadi, Noora, Palmira Caparrotti, Saju Divakar, Mohamed Riyas, Suparna Halsnad Chandramouli, Rabih Hammoud, Jillian Hayes, Maeve Mc Garry, Satheesh Prasad Paloor, and Primoz Petric. "MRI reduces variation of contouring for boost clinical target volume in breast cancer patients without surgical clips in the tumour bed." Radiology and Oncology 51, no. 2 (May 24, 2017): 160–68. http://dx.doi.org/10.1515/raon-2017-0014.

Abstract:
Background: Omitting the placement of clips inside the tumour bed during breast cancer surgery poses a challenge for delineation of the lumpectomy cavity clinical target volume (CTVLC). We aimed to quantify inter-observer variation and accuracy for CT- and MRI-based segmentation of CTVLC in patients without clips. Patients and methods: CT- and MRI-simulator images of 12 breast cancer patients, treated by breast-conserving surgery and radiotherapy, were included in this study. Five radiation oncologists recorded the cavity visualization score (CVS) and delineated CTVLC on both modalities. Expert-consensus (EC) contours were delineated by a senior radiation oncologist, respecting the opinions of all observers. Inter-observer volumetric variation and the generalized conformity index (CIgen) were calculated. Deviations from the EC contour were quantified by the accuracy index (AI) and inter-delineation distances (IDD). Results: Mean CVS was 3.88 ± 0.99 and 3.05 ± 1.07 for MRI and CT, respectively (p = 0.001). Mean volumes of CTVLC were similar: 154 ± 26 cm3 on CT and 152 ± 19 cm3 on MRI. Mean CIgen and AI were superior for MRI when compared with CT (CIgen: 0.74 ± 0.07 vs. 0.67 ± 0.12, p = 0.007; AI: 0.81 ± 0.04 vs. 0.76 ± 0.07, p = 0.004). CIgen and AI increased with increasing CVS. Mean IDD was 3 ± 1.5 mm and 3.6 ± 2.3 mm for MRI and CT, respectively (p = 0.017). Conclusions: When compared with CT, MRI improved visualization of post-lumpectomy changes, reduced inter-observer variation, and improved the accuracy of CTVLC contouring in patients without clips in the tumour bed. Further studies with bigger sample sizes are needed to confirm our findings.
10

Hameed, B. M. Zeeshan, Milap Shah, Nithesh Naik, Sufyan Ibrahim, Bhaskar Somani, Patrick Rice, Naeem Soomro, and Bhavan Prasad Rai. "Contemporary application of artificial intelligence in prostate cancer: an i-TRUE study." Therapeutic Advances in Urology 13 (January 2021): 175628722098664. http://dx.doi.org/10.1177/1756287220986640.

Abstract:
Artificial intelligence (AI) involves technology that is able to emulate tasks previously carried out by humans. The growing incidence, novel diagnostic strategies and newer available therapeutic options have had resource and economic impacts on the healthcare organizations providing prostate cancer care. AI has the potential to be an adjunct to and, in certain cases, a replacement for human input in prostate cancer care delivery. Automation can also address issues such as inter- and intra-observer variability and has the ability to deliver analysis of large volume datasets quickly and accurately. The continuous training and testing of AI algorithms will facilitate development of futuristic AI models that will have integral roles to play in diagnostics, enhanced training and surgical outcomes and developments of prostate cancer predictive tools. These AI related innovations will enable clinicians to provide individualized care. Despite its potential benefits, it is vital that governance with AI related care is maintained and responsible adoption is achieved.
11

Kim, J. W., J. Marsilla, J. Weiss, D. Tkachuk, J. K. Jacinto, J. Cho, E. Hahn, S. Bratman, B. Haibe-Kains, and A. Hope. "OC-0518 Impact of observer knowledge on AI delineation assessments: Bias in clinical acceptability testing." Radiotherapy and Oncology 161 (August 2021): S402–S403. http://dx.doi.org/10.1016/s0167-8140(21)06944-9.

12

Li, Dana, Lea Marie Pehrson, Carsten Ammitzbøl Lauridsen, Lea Tøttrup, Marco Fraccaro, Desmond Elliott, Hubert Dariusz Zając, Sune Darkner, Jonathan Frederik Carlsen, and Michael Bachmann Nielsen. "The Added Effect of Artificial Intelligence on Physicians’ Performance in Detecting Thoracic Pathologies on CT and Chest X-ray: A Systematic Review." Diagnostics 11, no. 12 (November 26, 2021): 2206. http://dx.doi.org/10.3390/diagnostics11122206.

Abstract:
Our systematic review investigated the additional effect of artificial intelligence-based devices on human observers when diagnosing and/or detecting thoracic pathologies using different diagnostic imaging modalities, such as chest X-ray and CT. Peer-reviewed, original research articles from EMBASE, PubMed, Cochrane library, SCOPUS, and Web of Science were retrieved. Included articles were published within the last 20 years and used a device based on artificial intelligence (AI) technology to detect or diagnose pulmonary findings. The AI-based device had to be used in an observer test where the performance of human observers with and without addition of the device was measured as sensitivity, specificity, accuracy, AUC, or time spent on image reading. A total of 38 studies were included for final assessment. The quality assessment tool for diagnostic accuracy studies (QUADAS-2) was used for bias assessment. The average sensitivity increased from 67.8% to 74.6%; specificity from 82.2% to 85.4%; accuracy from 75.4% to 81.7%; and Area Under the ROC Curve (AUC) from 0.75 to 0.80. Generally, a faster reading time was reported when radiologists were aided by AI-based devices. Our systematic review showed that performance generally improved for the physicians when assisted by AI-based devices compared to unaided interpretation.
13

Sorrenti, Salvatore, Vincenzo Dolcetti, Maija Radzina, Maria Irene Bellini, Fabrizio Frezza, Khushboo Munir, Giorgio Grani, et al. "Artificial Intelligence for Thyroid Nodule Characterization: Where Are We Standing?" Cancers 14, no. 14 (July 10, 2022): 3357. http://dx.doi.org/10.3390/cancers14143357.

Abstract:
Machine learning (ML) is an interdisciplinary sector within the subset of artificial intelligence (AI) that creates systems to set up logical connections using algorithms, and thus offers predictions for complex data analysis. In the present review, an up-to-date summary of the current state of the art regarding ML and AI implementation for thyroid nodule ultrasound characterization and cancer is provided, highlighting controversies over AI application as well as possible benefits of ML, for example for training purposes. There is evidence that AI increases diagnostic accuracy and significantly limits inter-observer variability by using standardized mathematical algorithms. It could also be of aid in practice settings with limited sub-specialty expertise, offering a second opinion by means of radiomics and computer-assisted diagnosis. The introduction of AI represents a revolutionary event in thyroid nodule evaluation, but key issues for further implementation include integration with radiologist expertise, impact on workflow and efficiency, and performance monitoring.
14

Sun, Di, Lubomir Hadjiiski, Ajjai Alva, Yousef Zakharia, Monika Joshi, Heang-Ping Chan, Rohan Garje, et al. "Computerized Decision Support for Bladder Cancer Treatment Response Assessment in CT Urography: Effect on Diagnostic Accuracy in Multi-Institution Multi-Specialty Study." Tomography 8, no. 2 (March 2, 2022): 644–56. http://dx.doi.org/10.3390/tomography8020054.

Abstract:
This observer study investigates the effect of a computerized artificial intelligence (AI)-based decision support system (CDSS-T) on physicians' diagnostic accuracy in assessing bladder cancer treatment response. The performance of 17 observers was evaluated when assessing bladder cancer treatment response without and with CDSS-T, using pre- and post-chemotherapy CTU scans in 123 patients with 157 pre- and post-treatment cancer pairs. The impact of cancer case difficulty, observers' clinical experience, institution affiliation, specialty, and assessment times on the observers' diagnostic performance with and without CDSS-T was analyzed. The average performance of the 17 observers improved significantly (p = 0.002) when aided by the CDSS-T. Cancer case difficulty, institution affiliation, specialty, and assessment times influenced the observers' performance without CDSS-T. The AI-based decision support system has the potential to improve diagnostic accuracy in assessing bladder cancer treatment response and to yield more consistent performance among physicians.
15

Gungor, G., G. Klausner, G. Gur, I. Serbez, B. Temur, A. Caffaro, L. Hardy, et al. "PO-1890 AI-based OAR delineation in brain T1w-MRI: Overcoming Inter- and Intra-observer variability." Radiotherapy and Oncology 170 (May 2022): S1674–S1675. http://dx.doi.org/10.1016/s0167-8140(22)03853-1.

16

Kim, Sang Hoon, and Yun Jeong Lim. "Artificial Intelligence in Capsule Endoscopy: A Practical Guide to Its Past and Future Challenges." Diagnostics 11, no. 9 (September 20, 2021): 1722. http://dx.doi.org/10.3390/diagnostics11091722.

Abstract:
Artificial intelligence (AI) has revolutionized the medical diagnostic process for various diseases. Since the manual reading of capsule endoscopy videos is a time-intensive, error-prone process, computerized algorithms have been introduced to automate it. Over the past decade, the evolution of convolutional neural networks (CNNs) has enabled AI to detect multiple lesions simultaneously with increasing accuracy and sensitivity. Difficulty in validating CNN performance and the unique characteristics of capsule endoscopy images keep computer-aided reading systems in capsule endoscopy at a preclinical level. Although AI technology can be used as an auxiliary second observer in capsule endoscopy, it is expected that in the near future it will effectively reduce the reading time and ultimately become an independent, integrated reading system.
17

Kulkarni, Anagha, Siddharth Srivastava, and Subbarao Kambhampati. "A Unified Framework for Planning in Adversarial and Cooperative Environments." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2479–87. http://dx.doi.org/10.1609/aaai.v33i01.33012479.

Abstract:
Users of AI systems may rely upon them to produce plans for achieving desired objectives. Such AI systems should be able to compute obfuscated plans whose execution in adversarial situations protects privacy, as well as legible plans which are easy for team members to understand in cooperative situations. We develop a unified framework that addresses these dual problems by computing plans with a desired level of comprehensibility from the point of view of a partially informed observer. For adversarial settings, our approach produces obfuscated plans with observations that are consistent with at least k goals from a set of decoy goals. By slightly varying our framework, we present an approach for producing legible plans in cooperative settings such that the observation sequence projected by the plan is consistent with at most j goals from a set of confounding goals. In addition, we show how the observability of the observer can be controlled to either obfuscate or convey the actions in a plan when the goal is known to the observer. We present theoretical results on the complexity analysis of our approach. We also present an empirical evaluation to show the feasibility and usefulness of our approaches using IPC domains.
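A toy sketch of the k-ambiguity idea behind the obfuscated plans above. The grid-world "consistency" relation below is a stand-in assumption, far simpler than the paper's formal observation model; it only illustrates checking an observation sequence against a set of candidate/decoy goals:

```python
def is_k_ambiguous(observations, candidate_goals, consistent_with, k):
    """An observation sequence is k-ambiguous (obfuscated) if it is
    consistent with at least k goals from the candidate/decoy set."""
    matches = [g for g in candidate_goals if consistent_with(observations, g)]
    return len(matches) >= k, matches

def toy_consistent(observations, goal):
    """Hypothetical consistency relation: every observed move keeps the
    agent no farther (in Manhattan distance) from the goal."""
    pos = (0, 0)
    for move in observations:
        new = (pos[0] + move[0], pos[1] + move[1])
        if abs(goal[0] - new[0]) + abs(goal[1] - new[1]) > \
           abs(goal[0] - pos[0]) + abs(goal[1] - pos[1]):
            return False
        pos = new
    return True

obs = [(1, 0), (1, 0)]            # two observed moves east
goals = [(3, 0), (2, 2), (-1, 0)] # one true goal plus decoys
ok, matches = is_k_ambiguous(obs, goals, toy_consistent, k=2)
```

Here the two eastward moves are consistent with both (3, 0) and (2, 2), so an observer cannot narrow the goal down below two candidates; the legible-planning variant would instead minimize the number of consistent confounding goals.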
18

Schwabe, Maria, John C. Clohisy, Cecilia Pascual-Garrido, Elizabeth Graesser, and Jeffrey J. Nepple. "SHOULD WE FEAR THE BORDERLINE DYSPLASTIC HIP?: EXTERNAL VALIDATION OF THE FEAR INDEX IN THE SETTING OF BORDERLINE ACETABULAR DYSPLASIA." Orthopaedic Journal of Sports Medicine 8, no. 4_suppl3 (April 1, 2020): 2325967120S0021. http://dx.doi.org/10.1177/2325967120s00212.

Abstract:
Introduction: The Femoro-Epiphyseal Acetabular Roof (FEAR) index has recently been proposed as a useful tool for identifying hips with instability in the setting of borderline acetabular dysplasia. Beck et al. were the first to describe this parameter, defining the FEAR index as the angle between a line connecting the most medial and lateral parts of the sourcil and a line connecting the most medial and lateral parts of the straight central third of the physeal scar. Purpose: The purposes of the current study were 1) external validation of intra-observer and inter-observer reliability and 2) to determine the correlation/association of FEAR with a clinical diagnosis of instability. Methods: The current study was a retrospective review of patients diagnosed with borderline acetabular dysplasia by a single surgeon. The study period spanned January 2008 to April 2017; patients with an LCEA of 20°–25° were identified via prospectively collected radiographic measurements in a hip preservation database. Inclusion criteria were treatment with either hip arthroscopy or PAO, an LCEA of 20°–25°, and age 14–40 years. Patients were excluded if they had a Tönnis grade ≥2, prior ipsilateral hip surgery, or residual deformities from SCFE or Perthes disease. Demographics and radiographic measurements were recorded. Two individuals read all radiographs after obtaining an intra-observer reliability of 97% and an inter-observer reliability of 88%. Sensitivity and specificity were calculated for FEAR predicting instability. A t-test was used to assess the correlation of LCEA, acetabular inclination (AI), and ACEA with FEAR. Results: A total of 186 patients were included; FEAR could not be assessed in 5% of hips because of inability to visualize the proximal femoral physeal scar. Of the remaining 176 hips, 18% had a FEAR index >5°. The FEAR-positive mean was 7.6° ± 2.8° (range = 5.1°–17.5°) and the FEAR-negative mean was –6.9° ± 6.4° (range = –29.4° to 3.7°).
The FEAR index predicted instability with sensitivity=33% (23/70), specificity=92% (98/106), PPV=74% (23/31), and NPV=67% (98/145). The mean AI was significantly higher in FEAR(+) hips (mean AI=11.2°) than in FEAR(-) hips (mean AI=9.5°; p=0.005). No difference in LCEA or ACEA was seen relative to the presence of FEAR. Discussion: In the current study, a positive FEAR index was generally indicative of the presence of clinical instability. However, the FEAR index was poorly sensitive to hip instability, with 67% of hips diagnosed with instability having a negative FEAR index.
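The screening metrics in this abstract follow directly from the quoted counts. A minimal sketch (illustrative, not from the paper) reproducing them from the 2×2 confusion matrix implied by 23/70 true positives and 98/106 true negatives:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening-test metrics, returned as fractions."""
    return {
        "sensitivity": tp / (tp + fn),  # 23/70
        "specificity": tn / (tn + fp),  # 98/106
        "ppv": tp / (tp + fp),          # 23/31
        "npv": tn / (tn + fn),          # 98/145
    }

# Counts implied by the abstract: 23 of 70 unstable hips were FEAR(+),
# 98 of 106 stable hips were FEAR(-).
m = diagnostic_metrics(tp=23, fp=8, fn=47, tn=98)
```

These reproduce the fractions quoted above.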
APA, Harvard, Vancouver, ISO, and other styles
19

Spicer, Catherine, Prashanna Khwaounjoo, and Yusuf Ozgur Cakmak. "Human and Human-Interfaced AI Interactions: Modulation of Human Male Autonomic Nervous System via Pupil Mimicry." Sensors 21, no. 4 (February 3, 2021): 1028. http://dx.doi.org/10.3390/s21041028.

Full text
Abstract:
Pupillary alterations in virtual humans induce neurophysiological responses within an observer. Technological advances have enabled rapid developments in artificial intelligence (AI), from verbal systems to visual AI interfaces with the ability to express, and respond to, the emotional states of a user. Visual AI interfaces are able to change their physical parameters, such as pupil diameter. Pupillary changes can alter heart rate; however, their effects on heart rate variability (HRV) are unknown. HRV is an autonomic, non-conscious parameter which monitors sympathetic and parasympathetic nervous system (PNS) activity. N = 34 male participants aged 19–33 years were subjected to a number of conditions, such as pupil dilation, constriction and blushing. The present research is the first to investigate the effects of virtual human interactions on human HRV. Outcomes of this study were obtained using eye tracking and HRV measurements. Pupil dilation relative to constriction presented in the female virtual partner induced a significant right pupillary diameter increase (p = 0.041) in human observers. Additionally, female virtual partner pupil constriction relative to dilation induced a significant increase in participants’ PNS HRV response (p = 0.036). These findings indicate the ability of a female virtual interaction partner to modulate parasympathetic autonomic functioning in young healthy male humans. This allows first insights into the effects of interacting with virtual AI interaction partners on human autonomic functioning, and may aid the development of future virtual humans and their implementation into relevant clinical settings.
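The abstract reports a parasympathetic (PNS) HRV response without naming the specific index; RMSSD is one commonly used PNS-linked HRV metric. An illustrative sketch (an assumption for illustration, not the study's actual pipeline) computing RMSSD from RR intervals:

```python
from math import sqrt

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms),
    a widely used index of parasympathetic (vagal) activity."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

# Toy RR series in milliseconds
print(round(rmssd([800, 810, 790, 805, 795]), 2))  # 14.36
```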
APA, Harvard, Vancouver, ISO, and other styles
20

Egevad, Lars, Daniela Swanberg, Brett Delahunt, Peter Ström, Kimmo Kartasalo, Henrik Olsson, Dan M. Berney, et al. "Identification of areas of grading difficulties in prostate cancer and comparison with artificial intelligence assisted grading." Virchows Archiv 477, no. 6 (June 15, 2020): 777–86. http://dx.doi.org/10.1007/s00428-020-02858-w.

Full text
Abstract:
The International Society of Urological Pathology (ISUP) hosts a reference image database supervised by experts with the purpose of establishing an international standard in prostate cancer grading. Here, we aimed to identify areas of grading difficulties and compare the results with those obtained from an artificial intelligence system trained in grading. In a series of 87 needle biopsies of cancers selected to include problematic cases, experts failed to reach a 2/3 consensus in 41.4% (36/87). Among consensus and non-consensus cases, the weighted kappa was 0.77 (range 0.68–0.84) and 0.50 (range 0.40–0.57), respectively. Among the non-consensus cases, four main causes of disagreement were identified: the distinction between Gleason score 3 + 3 with tangential cutting artifacts vs. Gleason score 3 + 4 with poorly formed or fused glands (13 cases), Gleason score 3 + 4 vs. 4 + 3 (7 cases), Gleason score 4 + 3 vs. 4 + 4 (8 cases) and the identification of a small component of Gleason pattern 5 (6 cases). The AI system obtained a weighted kappa value of 0.53 among the non-consensus cases, placing it as the observer with the sixth best reproducibility out of a total of 24. AI may serve as a decision support and decrease inter-observer variability by its ability to make consistent decisions. The grading of these cancer patterns that best predicts outcome and guides treatment warrants further clinical and genetic studies. Results of such investigations should be used to improve calibration of AI systems.
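Agreement here is summarized with weighted kappa. As a reference point, a minimal pure-Python sketch of Cohen's weighted kappa for two raters over ordered categories (quadratic weights shown; the paper does not state its weighting scheme):

```python
from collections import Counter

def weighted_kappa(r1, r2, categories, weight="quadratic"):
    """Cohen's weighted kappa for two raters over ordered categories."""
    n = len(r1)
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # Observed joint proportions
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n
    # Expected proportions from the marginals
    p1, p2 = Counter(r1), Counter(r2)
    exp = [[p1[categories[i]] * p2[categories[j]] / n ** 2
            for j in range(k)] for i in range(k)]
    def w(i, j):
        return ((i - j) ** 2) / (k - 1) ** 2 if weight == "quadratic" \
            else abs(i - j) / (k - 1)
    d_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w(i, j) * exp[i][j] for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp
```

Perfect agreement yields 1.0; agreement at chance level yields approximately 0.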
APA, Harvard, Vancouver, ISO, and other styles
21

De Oliveira, João Gabriel Lopes, Pedro Moreira Menezes Da Costa, and Flavio De Mello. "Knowledge Geometry in Phenomenon Perception and Artificial Intelligence." JUCS - Journal of Universal Computer Science 26, no. 5 (May 28, 2020): 604–23. http://dx.doi.org/10.3897/jucs.2020.032.

Full text
Abstract:
Artificial Intelligence (AI) pervades industry, entertainment, transportation, finance, and health. It seems to be in a kind of golden age, but today AI is based on the strength of techniques that bear little relation to the thought mechanism. Contemporary techniques of machine learning, deep learning and case-based reasoning seem to be occupied with delivering functional and optimized solutions, leaving aside the core reasons of why such solutions work. This paper, in turn, proposes a theoretical study of perception, a key issue for knowledge acquisition and intelligence construction. Its main concern is the formal representation of a perceived phenomenon by a casual observer and its relationship with machine intelligence. This work is based on a recently proposed geometric theory, and represents an approach that is able to describe the influence of scope, development paradigms, matching process and ground truth on phenomenon perception. As a result, it enumerates the perception variables and describes the implications for AI.
APA, Harvard, Vancouver, ISO, and other styles
22

Xi, Shibo, Lucas Santiago Borgna, Lirong Zheng, Yonghua Du, and Tiandou Hu. "AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm." Journal of Synchrotron Radiation 24, no. 1 (January 1, 2017): 367–73. http://dx.doi.org/10.1107/s1600577516018117.

Full text
Abstract:
In this report, AI-BL1.0, an open-source LabVIEW-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are the Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for the Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and the 1W1B beamline of the Beijing Synchrotron Radiation Facility.
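Differential Evolution, one of the two algorithms named above, can be sketched in a few lines; the toy objective below is a stand-in for a beamline flux signal (the program's actual objective function and Observer Mode logic are not described here):

```python
import math
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=150, seed=1):
    """Minimal DE/rand/1/bin minimizer over box-constrained parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy "negative flux" with a single peak at motor positions (1, -2)
flux = lambda x: -math.exp(-((x[0] - 1) ** 2 + (x[1] + 2) ** 2))
x_best, f_best = differential_evolution(flux, [(-5, 5), (-5, 5)])
```

In an on-line setting, `f` would be replaced by an actual detector readout after moving the motors, which is where an observer strategy (skipping redundant evaluations) saves acquisition time.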
APA, Harvard, Vancouver, ISO, and other styles
23

Peyrin-Biroulet, L., S. Adsul, J. Dehmeshki, and O. Kubassova. "DOP58 An artificial intelligence–driven scoring system to measure histological disease activity in Ulcerative Colitis." Journal of Crohn's and Colitis 16, Supplement_1 (January 1, 2022): i105. http://dx.doi.org/10.1093/ecco-jcc/jjab232.097.

Full text
Abstract:
Abstract Background Histological remission is increasingly regarded as an important and deep therapeutic target for ulcerative colitis (UC). Assessment and scoring of histological images is a tedious procedure that can be imprecise and prone to inter- and intra-observer variability. Therefore, a need exists for an automated method that is accurate, reproducible and reliable. This study aimed to investigate whether an artificial intelligence (AI) system developed using image processing and machine learning algorithms could measure histological disease activity based on the Nancy index. Methods A total of 200 histological images of patients with UC from a database at University Hospital, Vandoeuvre-lès-Nancy, France were used for this study. The novel AI system was used to fully characterise histological images and automatically measure the Nancy index. The in-house AI algorithm was developed using state-of-the-art image processing and machine learning algorithms based on deep learning and feature extraction. The cell regions of each image, followed by the Nancy index, were manually annotated and measured independently by 3 histopathologists. Manual and AI-automated measurements of the Nancy index score were performed and assessed using the intraclass correlation coefficient (ICC). Results The 200-image dataset was divided into 2 groups (80% was used for training and 20% for testing). ICC statistical analyses were performed to evaluate the AI tool and used as a reference to calculate accuracy (Table 1). The average ICC amongst the histopathologists was 89.33 and the average ICC between the histopathologists and the AI tool was 87.20. Despite the small image dataset, the AI tool was found to be highly correlated with the histopathologists. Conclusion The high correlation of performance of the AI method suggests promising potential for IBD clinical applications.
A standardised and validated histological AI-driven scoring system can potentially be used in daily IBD practice to eliminate the subjectivity of the pathologists and assess the disease severity for treatment decision.
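The comparison above relies on the intraclass correlation coefficient. As an illustrative reference (the study does not specify which ICC form was used), a one-way random-effects ICC(1,1) can be computed as:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1).
    ratings: one row per image, one column per rater's score."""
    n = len(ratings)     # subjects (images)
    k = len(ratings[0])  # raters per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    means = [sum(r) / k for r in ratings]
    # Between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Identical ratings across raters give 1.0; disagreement pulls the value toward (or below) 0.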
APA, Harvard, Vancouver, ISO, and other styles
24

Kim, Hyojin, Sangjoon Choi, Seokhwi Kim, Jaehong Aum, Sergio Pereira, Seonwook Park, Minuk Ma, et al. "Clinical performance of artificial intelligence-powered annotation of tumor cell PD-L1 expression for treatment of immune-checkpoint inhibitor (ICI) in advanced non-small cell lung cancer (NSCLC)." Journal of Clinical Oncology 39, no. 15_suppl (May 20, 2021): 9026. http://dx.doi.org/10.1200/jco.2021.39.15_suppl.9026.

Full text
Abstract:
9026 Background: Programmed death ligand 1 (PD-L1) expression is the standard biomarker for first-line ICI in advanced NSCLC. However, manual evaluation of the tumor proportion score (TPS) by pathologists has practical limitations, including intra-/inter-observer bias, variation in subjectivity on the area of interest, and intensive labor. We developed an artificial intelligence (AI)-powered TPS analyzer, namely Lunit SCOPE PD-L1, for objective annotation of tumor cell PD-L1 expression for prediction of ICI response in advanced NSCLC. Methods: Lunit SCOPE PD-L1 was developed using a total of 393,565 tumor cells annotated by board-certified pathologists for PD-L1 expression in 802 whole-slide images (WSI) stained by 22C3 pharmDx immunohistochemistry. After excluding the in-house control tissue regions, the WSI were divided into patches, from which a deep learning-based model detected the location and PD-L1 positivity of tumor cells. The patch-level cell predictions were aggregated for TPS estimation. Clinical performance of the model was validated in an external cohort of 430 NSCLC tumor slides from patients treated with ICI at Seoul National University Bundang Hospital and Samsung Medical Center. Independent control TPS annotation of this external validation cohort was performed by three pathologists, and their consensus TPS was calculated as the mean value. Results: The AI model (Lunit SCOPE PD-L1) predicts PD-L1-positive tumor cells with an area under the curve (AUC) of 0.889 and PD-L1-negative tumor cells with an AUC of 0.809 at cell-level analysis. At the WSI level, a significant positive correlation was observed between TPS by the AI model and control TPS by pathologists (Spearman coefficient = 0.9247, P < 0.001). The concordance rate between the AI model and control TPS by pathologists for PD-L1 expression levels ≥ 50%, 1-49%, and < 1% was 85.7%, 89.3%, and 52.4%, respectively. Median progression-free survival (mPFS) according to TPS by the AI model ≥ 1% vs. < 1% was 2.8 vs.
1.7 months (hazard ratio, HR, 0.52; 95% confidence interval, CI, 0.38-0.71; P < 0.001). In contrast, mPFS according to control TPS was 2.8 vs. 2.1 months (HR 0.70, 95% CI 0.55-0.91, P < 0.001). Forty of 84 patients (47.6%) annotated as control TPS < 1% by pathologists were considered TPS ≥ 1% by the AI model, and the mPFS of this subgroup was 2.7 months. Conclusions: PD-L1 expression by the AI model correlates with PD-L1 expression assessed by pathologists. Clinical performance of the AI model at the WSI level is comparable with assessment by pathologists. The AI model can accurately predict tumor response and progression-free survival on ICI in advanced NSCLC.
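The abstract states that patch-level cell predictions are aggregated into a slide-level TPS. Under the standard TPS definition (PD-L1-positive tumor cells over all viable tumor cells), that aggregation can be sketched as (illustrative only, not Lunit's code):

```python
def aggregate_tps(patches):
    """Slide-level TPS from patch-level tumor-cell counts.
    patches: iterable of (pdl1_positive_tumor_cells, total_tumor_cells)."""
    pos = sum(p for p, _ in patches)
    tot = sum(t for _, t in patches)
    if tot == 0:
        return None  # no tumor cells detected on the slide
    return 100.0 * pos / tot

# Three hypothetical patches: 150 positive of 800 tumor cells -> TPS 18.75
tps = aggregate_tps([(120, 400), (30, 300), (0, 100)])
```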
APA, Harvard, Vancouver, ISO, and other styles
25

Peyrin-Biroulet, L., D. Rubin, C. Weber, S. Adsul, M. Freire, L. Biedermann, V. Koelzer, et al. "P777 Deployment of an artificial intelligence tool for precision medicine in ulcerative colitis: Preliminary data from 8 globally distributed clinical sites." Journal of Crohn's and Colitis 17, Supplement_1 (January 30, 2023): i910—i911. http://dx.doi.org/10.1093/ecco-jcc/jjac190.0907.

Full text
Abstract:
Abstract Background Histological remission is an important target for Ulcerative Colitis (UC) treatment; however, scoring of histological images is time-consuming and prone to inter and intra-observer variability. Thus, a need exists for an accurate, reproducible, and reliable automated method. Previously, we demonstrated an Artificial Intelligence (AI) Tool using image processing and machine learning algorithms to measure histological disease activity using the Nancy index consistently and accurately.1 Here, we aim to enhance the capabilities of the AI Tool, by adding substantially more population-diversified training data while maintaining accuracy and robustness of results. Methods Eight global sites submitted 600 UC histological images. These were added to the 200 images previously used to train and validate the AI Tool. The 800-image dataset was divided into 2 groups: 90% used for training, 10% for testing. The novel AI algorithms were trained using state-of-the-art image processing and machine learning techniques based on deep learning and feature extraction. Cell and tissue regions of each training image were manually annotated, measured, and assigned a Nancy Index independently by 3 histopathologists, and used to further train the AI using over 43,000 characterisations. The AI Tool fully characterises histological images, identifying tissue types, cell types, cell numbers and locations, and automatically measures the Nancy Index for each image. Intra Class Correlation (ICC) and Confusion Matrix analyses were performed to evaluate the AI Tool and assess accuracy. Results The average ICC was 92.1% among the histopathologists and 91.1% between histopathologists and AI Tool, compared with 88.3% and 87.2% in the previous study.1 Confusion matrix analysis (Table 1) demonstrated the strongest correlation at the extremes of the Nancy Index, with 80% correlation between predicted and true labels for Nancy Scores of 0 or 4. 
When 2 adjacent scores were combined, correlations were stronger: 96% for a true Nancy score of 0 being predicted as 0 or 1, and 100% for a true Nancy score of 2 being predicted as 2 or 3. Conclusion By adding a larger number of images to the AI Tool training data, the robustness of the AI Tool was substantially improved while maintaining accuracy. The continued high correlation of AI Tool performance with the histopathologists reinforces the potential role for the AI Tool for IBD clinical applications. Fully characterising whole slides could standardise and validate an AI-driven scoring system for histology slides in IBD, eliminating the subjectivity of the human pathologist in assessment of disease activity. References: 1. Peyrin-Biroulet L, et al. J Crohn's and Colitis. 2022;16(Suppl 1):i105.
APA, Harvard, Vancouver, ISO, and other styles
26

Zhang, Zisang, Ye Zhu, Manwei Liu, Ziming Zhang, Yang Zhao, Xin Yang, Mingxing Xie, and Li Zhang. "Artificial Intelligence-Enhanced Echocardiography for Systolic Function Assessment." Journal of Clinical Medicine 11, no. 10 (May 20, 2022): 2893. http://dx.doi.org/10.3390/jcm11102893.

Full text
Abstract:
The accurate assessment of left ventricular systolic function is crucial in the diagnosis and treatment of cardiovascular diseases. Left ventricular ejection fraction (LVEF) and global longitudinal strain (GLS) are the most critical indexes of cardiac systolic function. Echocardiography has become the mainstay of cardiac imaging for measuring LVEF and GLS because it is non-invasive, radiation-free, and allows for bedside operation and real-time processing. However, the human assessment of cardiac function depends on the sonographer’s experience, and despite their years of training, inter-observer variability exists. In addition, GLS requires post-processing, which is time-consuming and shows variability across different devices. Researchers have turned to artificial intelligence (AI) to address these challenges. The powerful learning capabilities of AI enable feature extraction, which helps to achieve accurate identification of cardiac structures and reliable estimation of ventricular volume and myocardial motion. Hence, the automatic output of systolic function indexes can be achieved based on echocardiographic images. This review attempts to thoroughly explain the latest progress of AI in assessing left ventricular systolic function and the differential diagnosis of heart diseases by echocardiography, and discusses the challenges and promises of this new field.
APA, Harvard, Vancouver, ISO, and other styles
27

Odani, Erina, Tetsu Hayashida, Masayuki Kikuchi, Aiko Nagayama, Tomoko Seki, Maiko Takahashi, Akiko Matsumoto, et al. "Abstract P1-05-06: Establishment of the breast ultrasound support system using deep-learning system." Cancer Research 83, no. 5_Supplement (March 1, 2023): P1–05–06—P1–05–06. http://dx.doi.org/10.1158/1538-7445.sabcs22-p1-05-06.

Full text
Abstract:
Abstract Although the categorization of ultrasound using the Breast Imaging Reporting and Data System (BI-RADS) has become widespread worldwide, the problem of inter-observer variability remains. To maintain uniformity in diagnostic accuracy, we have developed a novel artificial intelligence (AI) system in which AI can distinguish whether a static image obtained using breast ultrasound represents BI-RADS3 or lower, or BI-RADS4a or higher, to determine the medical management that should be performed on a patient whose breast ultrasound shows abnormalities. To establish and validate the AI system, a training dataset consisting of 4,028 images containing 5,014 lesions and a test dataset consisting of 3,166 images containing 3,656 lesions were collected and annotated. We selected a setting that maximized the area under the curve (AUC) and minimized the difference between sensitivity and specificity by adjusting the internal parameters of the AI system, achieving an AUC, sensitivity, and specificity of 0.95, 90.0%, and 88.5%, respectively. Furthermore, based on 30 images extracted from the test data, the diagnostic accuracy of 20 clinicians and the AI system was compared, and the AI system was found to be significantly superior to the clinicians (McNemar test, p < 0.001). Then, we conducted a trial to introduce the system for use in clinical practice. Physicians reviewed the images and determined whether they were BI-RADS3 or lower, or BI-RADS4a or higher. Next, the classification was performed again for the same images, this time with the AI diagnosis available. At this time, the initial judgment was allowed to be overturned. We checked whether there was any difference in diagnostic accuracy, sensitivity, and specificity before and after reviewing the AI diagnosis.
Reviews by 24 physicians were evaluated: 4 Japanese Breast Cancer Society breast specialists, 5 non-specialist physicians with experience treating more than 40 cases of breast cancer, and 15 non-specialist physicians without experience treating more than 40 cases of breast cancer. The average accuracy increased from 69.3% before confirming the AI diagnosis to 73.1% after confirming the AI diagnosis (p=0.00548). By practice experience, accuracy increased from an average of 77.1% to 79.6% for the 9 physicians who were breast specialists or who had treated 40 or more cases of breast cancer. For the 15 physicians with fewer than 40 breast cancer cases, average accuracy increased from 64.7% to 69.2%. Furthermore, sensitivity increased significantly, from an average of 88.8% before reviewing the AI diagnosis to an average of 99.7% after (p < 0.01). Specificity increased from an average of 62.4% to 63.8% (p=0.433) after reviewing the AI diagnosis. We showed that our AI system, when applied to clinical practice and used by physicians, contributes to the improvement of diagnostic accuracy. Our results indicated that our AI diagnostic system was sufficiently accurate to be used in clinical practice. Citation Format: Erina Odani, Tetsu Hayashida, Masayuki Kikuchi, Aiko Nagayama, Tomoko Seki, Maiko Takahashi, Akiko Matsumoto, Takeshi Murata, Rurina Watanuki, Takamichi Yokoe, Ayako Nakashoji, Hinako Maeda, Tatsuya Onishi, Sota Asaga, Takashi Hojo, Hiromitsu Jinno, Keiichi Sotome, Akira Matsui, Akihiko Suto, Shigeru Imoto, Yuko Kitagawa. Establishment of the breast ultrasound support system using deep-learning system [abstract]. In: Proceedings of the 2022 San Antonio Breast Cancer Symposium; 2022 Dec 6-10; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2023;83(5 Suppl):Abstract nr P1-05-06.
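The clinician-vs-AI comparison above uses McNemar's test, which considers only the discordant pairs (cases one reader got right and the other wrong). A small sketch of the exact two-sided version (illustrative; the abstract does not say whether the exact or chi-square form was used):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test.
    b, c: counts of the two kinds of discordant pairs."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs, no evidence of a difference
    # Binomial tail probability under H0: discordances split 50/50
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```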
APA, Harvard, Vancouver, ISO, and other styles
28

Musulin, Jelena, Daniel Štifanić, Ana Zulijani, Tomislav Ćabov, Andrea Dekanić, and Zlatan Car. "An Enhanced Histopathology Analysis: An AI-Based System for Multiclass Grading of Oral Squamous Cell Carcinoma and Segmenting of Epithelial and Stromal Tissue." Cancers 13, no. 8 (April 8, 2021): 1784. http://dx.doi.org/10.3390/cancers13081784.

Full text
Abstract:
Oral squamous cell carcinoma is the most frequent histological neoplasm among head and neck cancers, and although it is localized in a region that is easy to inspect and could be detected very early, this usually does not occur. The standard procedure for the diagnosis of oral cancer is based on histopathological examination; however, the main problem in this kind of procedure is tumor heterogeneity, where a subjective component of the examination could directly impact patient-specific treatment intervention. For this reason, artificial intelligence (AI) algorithms are widely used as a computational aid in diagnosis for the classification and segmentation of tumors, in order to reduce inter- and intra-observer variability. In this research, a two-stage AI-based system for automatic multiclass grading (the first stage) and segmentation of the epithelial and stromal tissue (the second stage) from oral histopathological images is proposed in order to assist the clinician in oral squamous cell carcinoma diagnosis. The integration of Xception and SWT resulted in the highest classification values of 0.963 (σ = 0.042) AUCmacro and 0.966 (σ = 0.027) AUCmicro, while using DeepLabv3+ with Xception_65 as backbone and data preprocessing, semantic segmentation prediction resulted in 0.878 (σ = 0.027) mIOU and a 0.955 (σ = 0.014) F1 score. The obtained results reveal that the proposed AI-based system has great potential in the diagnosis of OSCC.
APA, Harvard, Vancouver, ISO, and other styles
29

Corradini, Daniele, Leonardo Brizi, Caterina Gaudiano, Lorenzo Bianchi, Emanuela Marcelli, Rita Golfieri, Riccardo Schiavina, Claudia Testa, and Daniel Remondini. "Challenges in the Use of Artificial Intelligence for Prostate Cancer Diagnosis from Multiparametric Imaging Data." Cancers 13, no. 16 (August 5, 2021): 3944. http://dx.doi.org/10.3390/cancers13163944.

Full text
Abstract:
Many efforts have been carried out toward the standardization of multiparametric Magnetic Resonance (mp-MR) image evaluation to detect Prostate Cancer (PCa), and specifically to differentiate levels of aggressiveness, a crucial aspect for clinical decision-making. The Prostate Imaging-Reporting and Data System (PI-RADS) has contributed noteworthily to this aim. Nevertheless, as pointed out by the European Association of Urology (EAU 2020), PI-RADS still has limitations, mainly due to the moderate inter-reader reproducibility of mp-MRI. In recent years, many aspects of cancer diagnosis have taken advantage of the use of Artificial Intelligence (AI), such as detection, segmentation of organs and/or lesions, and characterization. Here we focus on AI as a potentially important tool for standardization and reproducibility in the characterization of PCa by mp-MRI. AI includes methods such as Machine Learning and Deep Learning techniques that have been shown to be successful in classifying mp-MR images, with performance similar to that of radiologists. Nevertheless, they perform differently depending on the acquisition system and protocol used. Moreover, these methods need a large number of samples covering most of the variability in lesion appearance and zone to avoid overfitting. The use of publicly available datasets could improve AI performance and achieve a higher level of generalizability by exploiting large numbers of cases and a wide range of variability in the images. Here we explore the promise and the advantages, as well as the pitfalls and warnings, outlined in some recent studies that attempted to classify clinically significant PCa and indolent lesions using AI methods. Specifically, we focus on the overfitting issue due to the scarcity of data and the lack of standardization and reproducibility in every step of the mp-MR image acquisition and the classifier implementation.
In the end, we point out that a solution can be found in the use of publicly available datasets, whose usage has already been promoted by some important initiatives. Our future perspective is that AI models may become reliable tools for clinicians in PCa diagnosis, reducing inter-observer variability and evaluation time.
APA, Harvard, Vancouver, ISO, and other styles
30

Sperl, André, Raban Arved Heller, Bahram Biglari, Patrick Haubruck, Julian Seelig, Lutz Schomburg, Tobias Bock, and Arash Moghaddam. "The Role of Magnesium in the Secondary Phase After Traumatic Spinal Cord Injury. A Prospective Clinical Observer Study." Antioxidants 8, no. 11 (October 24, 2019): 509. http://dx.doi.org/10.3390/antiox8110509.

Full text
Abstract:
In the secondary injury phase after traumatic spinal cord injury (TSCI), oxidative stress and neuroinflammatory responses at the site of injury constitute crucial factors controlling damage extent and may serve as potential therapeutic targets. We determined Magnesium (Mg) serum concentration dynamics in the context of the potential for neurological remission in patients with TSCI, as Mg is suspected to limit the production of reactive oxygen species and reduce lipid peroxidation. A total of 29 patients with acute TSCI were enrolled; blood samples were drawn over 3 months at 11 time-points and Mg quantification was performed. Patients were divided into those with (G1, n = 18) or without neurological remission (G0, n = 11). Results show a slight drop in Mg levels during the first 4 h after injury; levels then remained almost unchanged in G1 but increased continuously during the first 7 days after injury in G0. At day 7, Mg concentrations in G1 and G0 were significantly different (p = 0.039, G0 > G1). Significant differences were detected between patients in G1 that presented an AIS (ASIA Impairment Scale) conversion of 1 level versus those with more than 1 level (p = 0.014, G1 AIS imp. = +1 > G1 AIS imp. > +1). Low and decreasing levels of Mg within the first 7 days are indicative of a high probability of neurological remission, whereas increasing levels are associated with poor neurological outcome.
APA, Harvard, Vancouver, ISO, and other styles
31

Markland, M. L., M. J. Goering, J. M. Mumm, C. K. Jones, A. R. Crane, and L. E. Hulbert. "The development of a noninvasive behavioral test for assessment of goat–human interactions." Translational Animal Science 3, Supplement_1 (December 1, 2019): 1812. http://dx.doi.org/10.1093/tas/txz066.

Full text
Abstract:
Abstract The objective was to develop a protocol for a noninvasive behavioral test to assess goat–human interactions. Boer goats (n = 45) were housed in groups of 3 at the K-State Sheep and Meat Goat Unit. A 3-min human approach test (HAT) was conducted after their first- and seventh-handling experience (i.e., moved by group through the chute and scale). Video footage was analyzed using specialized software (Observer 11.5 XT, Leesburg, VA, USA). The ethogram included three main categories (mutually exclusive within category): 1) spatial (close, middle, far); 2) orientation (facing vs. turned-away); and 3) structural (lie, stand, and nutritive and nonnutritive oral behaviors). The durations of these behavior outcomes were converted to percentages, and then applied to a weighted formula to create an approach index (AI). This index placed behaviors on a 0 to 100% scale (0 = lying in the back of the pen; 100 = closest to the human, performing oral behaviors). Proc Univariate (SAS v.9.3, SAS Inst. Inc., Cary, NC, USA) was used to evaluate descriptive statistics and calculate sample sizes for future projects from all behavior outcomes. In addition, each goat's AI was categorized as great approach (GA; ≥75% quartile), moderate approach (MA; 25% to 75% quartiles), or least approach (LA; ≤25%), and the 1st vs. 7th handling experience was evaluated using Chi-square (χ2) analysis. The CV% was low for the AI outcome; therefore, fewer animals are needed if the AI is used rather than the other behavior outcomes. After the first-handling experience, there were fewer GA-goats than expected from the χ2 analysis (χ2 = 17.6; P ≤ 0.01; residual = 0.26; expected = 24.5). After the seventh-handling experience, there were more MA-goats than expected (χ2 = 17.6; P ≤ 0.01; residual = 1.92; expected = 52.0). Repeated handling appeared to moderately improve goat approach to humans, which indicates that this protocol is a promising behavioral test for assessing the welfare of goats.
APA, Harvard, Vancouver, ISO, and other styles
32

Park, Jeong Hwan, Kyu Sang Lee, Euno Choi, Wonkyung Jung, Jaehong Aum, Sergio Pereira, Seonwook Park, et al. "Pathologic validation of artificial intelligence-powered prediction of combined positive score of PD-L1 immunohistochemistry in urothelial carcinoma." Journal of Clinical Oncology 39, no. 15_suppl (May 20, 2021): e16518-e16518. http://dx.doi.org/10.1200/jco.2021.39.15_suppl.e16518.

Full text
Abstract:
e16518 Background: Programmed death ligand 1 (PD-L1) expression is a reliable biomarker of immune-checkpoint inhibitors (ICI) in multiple cancer types, including urothelial carcinoma (UC). In UC, 22C3 pharmDx immunohistochemistry is evaluated using the combined positive score (CPS). A challenging issue regarding the manual scoring of CPS by a pathologist is determining the representative area to read. This requires substantial time and effort and may lead to inter-observer variation. We developed an artificial intelligence (AI)-powered CPS analyzer to assess CPS in whole-slide images (WSI) and validated its performance by comparing against a consensus of pathologists' readings. Methods: An AI-powered CPS analyzer, Lunit SCOPE PD-L1, was trained and validated based on a total of 3,326,402 tumor cells, lymphocytes, and macrophages annotated by board-certified pathologists for PD-L1 positivity in 1200 WSI stained by 22C3. After excluding the in-house control tissue regions, the WSIs were divided into patches, from which a deep learning-based model was trained to detect the location and PD-L1 positivity of tumor cells, lymphocytes, and macrophages, respectively. Finally, the patch-level cell predictions were aggregated for CPS estimation. The performance of the model was validated on an external validation UC cohort consisting of two institutions: Boramae Medical Center (BMC, n = 93) and Seoul National University Bundang Hospital (SNUBH, n = 100). Three uropathologists independently annotated the CPS of the external validation cohorts, and a consensus CPS was determined as the mean of their values. Results: The AI model predicts CPS accurately in an internal validation cohort, with area under the curve (AUC) values of 0.929, 0.855, 0.885, and 0.872 for predicting PD-L1-positive tumor cells, PD-L1-positive lymphocytes or macrophages, PD-L1-negative tumor cells, and PD-L1-negative lymphocytes or macrophages, respectively.
There was a significant positive correlation between CPS by the AI model and the consensus CPS of the 3 pathologists in the external validation cohort (Spearman coefficient = 0.914, P < 0.001). Concordance of the AI model with the pathologists' consensus in calling CPS ≥ 10 was 88.1%, similar to the pairwise concordance between the three pathologists (84.5%, 86.5%, and 90.7%). The concordance rate did not differ significantly by data source (BMC: 88.2% versus SNUBH: 88.0%, P = 1.00), but did differ significantly by type of surgery [surgical resection (cystectomy, nephrectomy, and ureterectomy): 92.3% versus transurethral resection: 81.3%, P = 0.0244]. Conclusions: Lunit SCOPE PD-L1, an AI-powered CPS analyzer, can detect PD-L1 expression in tumor cells, lymphocytes, and macrophages with high accuracy compared to uropathologists.
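The CPS arithmetic underlying both the manual and AI readings is simple to state. Below is a minimal Python sketch of the standard 22C3 CPS formula (PD-L1-positive tumor cells, lymphocytes, and macrophages over total viable tumor cells, ×100, capped at 100) plus a concordance check at the CPS ≥ 10 cutoff used in the abstract; the cell counts are hypothetical.

```python
def combined_positive_score(pos_tumor, pos_lymph, pos_macro, total_tumor):
    """Standard 22C3 CPS: PD-L1-positive tumor cells, lymphocytes and
    macrophages over total viable tumor cells, x100, capped at 100."""
    if total_tumor <= 0:
        raise ValueError("total viable tumor cells must be > 0")
    return min(100.0, 100.0 * (pos_tumor + pos_lymph + pos_macro) / total_tumor)

def concordance_at_cutoff(scores_a, scores_b, cutoff=10.0):
    """Fraction of cases on which two readers agree about CPS >= cutoff."""
    agree = sum((a >= cutoff) == (b >= cutoff) for a, b in zip(scores_a, scores_b))
    return agree / len(scores_a)

# Hypothetical cell counts for two cases, scored by "AI" vs "pathologist"
ai = [combined_positive_score(30, 10, 5, 200),   # 22.5
      combined_positive_score(5, 2, 1, 300)]     # ~2.7
pathologist = [20.0, 4.0]
print(concordance_at_cutoff(ai, pathologist))    # 1.0: both calls agree at CPS >= 10
```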
APA, Harvard, Vancouver, ISO, and other styles
33

Genç, A. C., F. Turkoglu Genc, A. B. Kara, L. Genc Kaya, Z. Ozturk, D. Karatas, Y. Gunduz, and E. Gönüllü. "AB1161 ARTIFICIAL INTELLIGENCE FOR RHEUMATOLOGY." Annals of the Rheumatic Diseases 79, Suppl 1 (June 2020): 1871–72. http://dx.doi.org/10.1136/annrheumdis-2020-eular.5318.

Full text
Abstract:
Background: Magnetic resonance imaging (MRI) of sacroiliac (SI) joints is used to detect early sacroiliitis (1). There can be inter-observer disagreement in the MRI findings of SI joints of spondyloarthropathy patients between a rheumatologist, a local radiologist, and an expert radiologist (2). Artificial intelligence and deep learning methods to detect abnormalities have become popular in radiology and other medical fields in recent years (3). A PubMed search for “artificial intelligence” and “radiology” over the last five years returned around 1500 clinical studies, yet no results were retrieved for “artificial intelligence” and “rheumatology”. Objectives: Artificial intelligence (AI) can help detect pathological areas such as sacroiliitis and also allows us to characterize them quantitatively rather than qualitatively on SI-MRI. Methods: Between 2015 and 2019, 8100 sacroiliac MRIs were taken at our center. The MRIs of 1150 patients that were reported as active or chronic sacroiliitis, or that were considered by the primary physician to favor sacroiliitis, were included in the study. 1441 coronal STIR MRI sequences of these 1150 patients were tagged as “active sacroiliitis” and used to train a model to detect and localize active sacroiliitis and report prediction performance. This model is available for various operating systems. (Image 1) Results: Precision, the percentage of the model's detections that were correct, is 87.1%. Recall, the percentage of all sacroiliitis MRIs correctly identified by the model, is 82.1%, and the mean average precision (mAP) of the model is 89%. Conclusion: There are gray areas in medicine, like sacroiliitis. Inter-observer variability can be reduced by AI and deep learning methods. The efficiency and reliability of health services can be increased in this way. References: [1] Jans L, Egund N, Eshed I, Sudoł-Szopińska I, Jurik AG.
Sacroiliitis in Axial Spondyloarthritis: Assessing Morphology and Activity. Semin Musculoskelet Radiol. 2018;22:180–188. [2] B. Arnbak, T. S. Jensen, C. Manniche, A. Zejden, N. Egund, and A. G. Jurik, “Spondyloarthritis-related and degenerative MRI changes in the axial skeleton—an inter- and intra-observer agreement study,” BMC Musculoskeletal Disorders, vol. 14, article 274, 2013. [3] Rueda, Juan C et al. “Interobserver Agreement in Magnetic Resonance of the Sacroiliac Joints in Patients with Spondyloarthritis.” International Journal of Rheumatology (2017). Image 1. Bilateral active sacroiliitis detected automatically by the AI model (right sacroiliac joint 75.6% (>50%), left sacroiliac joint 65% (>50%)). Disclosure of Interests: None declared
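The detection figures quoted above (precision 87.1%, recall 82.1%) follow directly from true/false-positive and false-negative counts. A minimal Python sketch; the counts are hypothetical, chosen only to roughly reproduce the reported percentages.

```python
def precision_recall(tp, fp, fn):
    """Detection precision and recall from true positives, false positives
    and false negatives (missed detections)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical counts chosen only to roughly reproduce the reported
# 87.1% precision / 82.1% recall
p, r = precision_recall(tp=876, fp=130, fn=191)
print(round(p, 3), round(r, 3))  # 0.871 0.821
```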
APA, Harvard, Vancouver, ISO, and other styles
34

Alarcón-Soldevilla, Fernando, Francisco José Hernández-Gómez, Juan Antonio García-Carmona, Celia Campoy Carreño, Ramon Grimalt, Sergio Vañó-Galvan, José Pardo Sánchez, et al. "Use of Artificial Intelligence as a Predictor of the Response to Treatment in Alopecia Areata." Iproceedings 6, no. 1 (December 10, 2021): e35433. http://dx.doi.org/10.2196/35433.

Full text
Abstract:
Background Artificial intelligence (AI) has emerged in dermatology, with some studies focusing on skin disorders such as skin cancer, atopic dermatitis, psoriasis, and onychomycosis. Alopecia areata (AA) is a dermatological disease with a prevalence of 0.7%-3% in the United States, characterized by oval areas of nonscarring hair loss of the scalp or body without evident clinical variables to predict its response to treatment. Nonetheless, some studies suggest a predictive value of trichoscopic features in the evaluation of treatment response. Assuming that black dots, broken hairs, exclamation marks, and tapered hairs are markers of negative predictive value for the treatment response, while yellow dots are markers of no response to treatment, as recent studies suggest, the absence of these trichoscopic features could indicate favorable disease evolution without treatment or even predict the response. Nonetheless, no studies have evaluated the role of AI in AA on the basis of trichoscopic features. Objective This study aimed to develop an AI algorithm to predict, using trichoscopic images, which patients diagnosed with AA will have a better disease evolution. Methods In total, 80 trichoscopic images were included and classified as showing or not showing features of negative prognosis. Using a data augmentation technique, they were multiplied to 179 images to train an AI algorithm, as previously carried out with dermoscopic images of skin tumors with a favorable response. Subsequently, 82 new images of AA were presented to the algorithm, which classified the patients as responders and nonresponders; this process was reviewed by an expert trichologist observer and showed greater than 90% concordance with the algorithm in identifying the structures described previously. The evolution of the cases was followed up to determine their true response to treatment and, therefore, to assess the predictive value of the algorithm.
Results In total, 32 of 40 (80%) images of patients predicted as nonresponders scarcely showed response to the treatment, while 34 of 42 (81%) images of those predicted as responders showed a favorable response to the treatment. Conclusions The development of an AI algorithm or tool could be useful to predict AA evolution and its response to treatment. However, further research is needed, including larger sample images or trained algorithms, by using images previously classified in accordance with the disease evolution and not with trichoscopic features. Conflicts of Interest None declared.
APA, Harvard, Vancouver, ISO, and other styles
35

Saboury, Babak, Lars Edenbrandt, Reza Piri, Oke Gerke, Tom Werner, Armin Arbab-Zadeh, Abass Alavi, and Poul Flemming Høilund-Carlsen. "Alavi–Carlsen Calcification Score (ACCS): A Simple Measure of Global Cardiac Atherosclerosis Burden." Diagnostics 11, no. 8 (August 5, 2021): 1421. http://dx.doi.org/10.3390/diagnostics11081421.

Full text
Abstract:
Multislice cardiac CT characterizes late stage macrocalcification in epicardial arteries as opposed to PET/CT, which mirrors early phase arterial wall changes in epicardial and transmural coronary arteries. With regard to tracer, there has been a shift from using mainly 18F-fluorodeoxyglucose (FDG), indicating inflammation, to applying predominantly 18F-sodium fluoride (NaF) due to its high affinity for arterial wall microcalcification and more consistent association with cardiovascular risk factors. To make NaF-PET/CT an indispensable adjunct to clinical assessment of cardiac atherosclerosis, the Alavi–Carlsen Calcification Score (ACCS) has been proposed. It constitutes a global assessment of cardiac atherosclerosis burden in the individual patient, supported by an artificial intelligence (AI)-based approach for fast observer-independent segmentation. Common measures for characterizing epicardial coronary atherosclerosis by NaF-PET/CT as the maximum standardized uptake value (SUV) or target-to-background ratio are more versatile, error prone, and less reproducible than the ACCS, which equals the average cardiac SUV. The AI-based approach ensures a quick and easy delineation of the entire heart in 3D to obtain the ACCS expressing ongoing global cardiac atherosclerosis, even before it gives rise to CT-detectable coronary calcification. The quantification of global cardiac atherosclerotic burden by the ACCS is suited for management triage and monitoring of disease progression with and without intervention.
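Since the abstract defines the ACCS as the average cardiac SUV over the AI-delineated heart volume, the score itself reduces to a masked mean. A minimal Python sketch with made-up SUV values; a real pipeline would operate on a 3-D NaF-PET volume and segmentation mask rather than flat lists.

```python
def accs(suv_values, heart_mask):
    """Alavi-Carlsen Calcification Score: the mean NaF SUV over all voxels
    inside the (AI-delineated) heart segmentation. Flattened lists stand in
    for a real 3-D PET volume and mask; the values here are made up."""
    inside = [v for v, m in zip(suv_values, heart_mask) if m]
    if not inside:
        raise ValueError("empty heart segmentation")
    return sum(inside) / len(inside)

print(accs([0.8, 1.2, 2.0, 0.5], [1, 1, 1, 0]))  # mean of the 3 masked voxels
```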
APA, Harvard, Vancouver, ISO, and other styles
36

De Filippis, Raffaele, Georg Wölflein, In Hwa Um, Peter D. Caie, Sarah Warren, Andrew White, Elizabeth Suen, Emily To, Ognjen Arandjelović, and David J. Harrison. "Use of High-Plex Data Reveals Novel Insights into the Tumour Microenvironment of Clear Cell Renal Cell Carcinoma." Cancers 14, no. 21 (November 1, 2022): 5387. http://dx.doi.org/10.3390/cancers14215387.

Full text
Abstract:
Although immune checkpoint inhibitors (ICIs) have significantly improved the oncological outcomes, about one-third of patients affected by clear cell renal cell carcinoma (ccRCC) still experience recurrence. Current prognostic algorithms, such as the Leibovich score (LS), rely on morphological features manually assessed by pathologists and are therefore subject to bias. Moreover, these tools do not consider the heterogeneous molecular milieu present in the Tumour Microenvironment (TME), which may have prognostic value. We systematically developed a semi-automated method to investigate 62 markers and their combinations in 150 primary ccRCCs using Multiplex Immunofluorescence (mIF), NanoString GeoMx® Digital Spatial Profiling (DSP) and Artificial Intelligence (AI)-assisted image analysis in order to find novel prognostic signatures and investigate their spatial relationship. We found that coexpression of cancer stem cell (CSC) and epithelial-to-mesenchymal transition (EMT) markers such as OCT4 and ZEB1 are indicative of poor outcome. OCT4 and the immune markers CD8, CD34, and CD163 significantly stratified patients at intermediate LS. Furthermore, augmenting the LS with OCT4 and CD34 improved patient stratification by outcome. Our results support the hypothesis that combining molecular markers has prognostic value and can be integrated with morphological features to improve risk stratification and personalised therapy. To conclude, GeoMx® DSP and AI image analysis are complementary tools providing high multiplexing capability required to investigate the TME of ccRCC, while reducing observer bias.
APA, Harvard, Vancouver, ISO, and other styles
37

Yue, Dong, Cheng Fan Rong, Cai Ning, Hu Liang, Liu Ai Lian, Wang Ru Xin, and Luo Ya Hong. "Reduction of metal artifacts from unilateral hip arthroplasty on dual-energy CT with metal artifact reduction software." Acta Radiologica 59, no. 7 (September 12, 2017): 853–60. http://dx.doi.org/10.1177/0284185117731475.

Full text
Abstract:
Background The evaluation of hip arthroplasty is a challenge in computed tomography (CT). The virtual monochromatic spectral (VMS) images with metal artifact reduction software (MARs) in spectral CT can reduce the artifacts and improve the image quality. Purpose To evaluate the effects of VMS images and MARs for metal artifact reduction in patients with unilateral hip arthroplasty. Material and Methods Thirty-five patients underwent dual-energy CT. Four sets of VMS images without MARs and four sets of VMS images with MARs were obtained. Artifact index (AI), CT number, and SD value were assessed at the periprosthetic region and the pelvic organs. The scores of two observers for different images and the inter-observer agreement were evaluated. Results The AIs in 120 and 140 keV images were significantly lower than those in 80 and 100 keV images. The AIs of the periprosthetic region in VMS images with MARs were significantly lower than those in VMS images without MARs, while the AIs of pelvic organs were not significantly different. VMS images with MARs improved the accuracy of CT numbers for the periprosthetic region. The inter-observer agreements were good for all the images. VMS images with MARs at 120 and 140 keV had higher subjective scores and could improve the image quality, leading to reliable diagnosis of prosthesis-related problems. Conclusion VMS images with MARs at 120 and 140 keV could significantly reduce the artifacts from hip arthroplasty and improve the image quality at the periprosthetic region but had no obvious advantage for pelvic organs.
APA, Harvard, Vancouver, ISO, and other styles
38

Amasya, Hakan, Derya Yildirim, Turgay Aydogan, Nazan Kemaloglu, and Kaan Orhan. "Cervical vertebral maturation assessment on lateral cephalometric radiographs using artificial intelligence: comparison of machine learning classifier models." Dentomaxillofacial Radiology 49, no. 5 (July 2020): 20190441. http://dx.doi.org/10.1259/dmfr.20190441.

Full text
Abstract:
Objectives: This study aimed to develop five different supervised machine learning (ML) classifier models using artificial intelligence (AI) techniques and to compare their performance for cervical vertebral maturation (CVM) analysis. A clinical decision support system (CDSS) was developed for more objective results. Methods: A total of 647 digital lateral cephalometric radiographs with visible C2, C3, C4 and C5 vertebrae were chosen. Newly developed software was used for manually labelling the samples, with the integrated CDSS developed by evaluation of 100 radiographs. On each radiograph, 26 points were marked, and the CDSS generated a suggestion according to the points and the CVM analysis performed by the human observer. For each sample, 54 features were saved in text format and classified using logistic regression (LR), support vector machine, random forest, artificial neural network (ANN) and decision tree (DT) models. The weighted κ coefficient was used to evaluate the concordance between classification and expert visual evaluation results. Results: Among the CVM stage classifier models, the best result was achieved using the ANN model (κ = 0.926). Among cervical vertebrae morphology classifier models, the best result was achieved using the LR model (κ = 0.968) for the presence of concavity, and the DT model (κ = 0.949) for vertebral body shapes. Conclusions: This study has proposed ML models for CVM assessment on lateral cephalometric radiographs, which can be used for the prediction of cervical vertebrae morphology. Further studies should be done, especially on forensic applications of AI models through CVM evaluations.
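The weighted κ used above to compare classifier output with expert visual evaluation can be computed from the two label sequences alone. A minimal pure-Python sketch of weighted Cohen's kappa for ordinal categories (linear or quadratic weights); the example labels are hypothetical CVM stages.

```python
def weighted_kappa(rater_a, rater_b, categories, weight="linear"):
    """Weighted Cohen's kappa for ordinal categories.
    weight="linear" uses |i-j| disagreement weights, "quadratic" uses (i-j)^2."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    # observed joint distribution of the two raters' labels
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1 / n
    # marginal distributions (chance agreement is their outer product)
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    power = 1 if weight == "linear" else 2
    w = [[abs(i - j) ** power for j in range(k)] for i in range(k)]
    num = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - num / den

# Perfect agreement between raters gives kappa = 1
print(weighted_kappa([1, 2, 3, 4], [1, 2, 3, 4], categories=[1, 2, 3, 4]))  # 1.0
```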
APA, Harvard, Vancouver, ISO, and other styles
39

Jaremko, J. L., B. Felfeliyan, A. Rakkunedeth, B. Thejeel, V. Quinn-Laurin, M. Østergaard, P. G. Conaghan, R. Lambert, J. Ronsky, and W. P. Maksymowych. "AB0594 IMPROVING OSTEOARTHRITIS CARE BY AUTOMATIC MEASUREMENT OF HIP EFFUSION USING AI." Annals of the Rheumatic Diseases 80, Suppl 1 (May 19, 2021): 1334.1–1334. http://dx.doi.org/10.1136/annrheumdis-2021-eular.2196.

Full text
Abstract:
Background: Osteoarthritis (OA) is the commonest disease affecting hip joints and has high prevalence across various age groups [1,2]. Effusion is a hallmark of OA and could represent a potential target for therapy [3–5]. Positive correlations of effusion to clinical outcomes are not well established, partly due to variability in manual assessment. Voxel-based volume quantification could reduce this variability [6]. Objectives: We examine the inter-observer agreement of manual assessment of voxel-based effusion volume from coronal STIR MRI sequences at two time points and examine the feasibility of using Artificial Intelligence (AI) for standalone volume assessment. Methods: Our algorithm is based on Mask R-CNN [7] and was trained on labeled effusion regions in MRI slices from 68 patients with hip osteoarthritis. For validation, 2 human readers measured effusion from MRI STIR sequences of 25 patients at baseline and at 8 weeks follow-up. AI was used to measure effusion volume as an independent reader. Agreement between human readers and AI was assessed using absolute difference in volume (DV), coefficients of variation (CoV) and intraclass correlation coefficients (ICC). Results: Effusion regions detected by AI closely correlated with manual segmentation (Figure 1) for all samples. Differences in volumes measured by each pair of readers are summarized in Table 1. Agreement was excellent between human readers (ICC = 0.99) and for each reader vs AI (ICC = 0.85-0.87). Figure 1. Mask overlays of regions of joint fluid detected by human readers (green, column 2) and AI (red, column 3) from 3 different patients.
Raw MRI images are shown in column 1. Table 1. Comparison of volumes measured in cubic millimeters and agreement between each pair of readers (with AI as the 3rd reader). Volumes measured by readers (mean ± standard deviation): Reader 1, 6943 ± 5845; Reader 2, 7638 ± 5619; AI, 11014 ± 4454. Agreement between reader pairs (difference in volume, mean ± standard deviation; CoV; ICC): Reader 1-2, 1127 ± 900, 0.21, 0.99 [0.98, 1.0]; Reader 1-AI, 3311 ± 1643, 0.35, 0.87 [0.7, 0.94]; Reader 2-AI, 4151 ± 4986, 0.27, 0.85 [0.66, 0.94]. Conclusion: Initial results of automatic effusion measurement using AI show high agreement with human experts. This has potential to reduce variability and save expert time in OA MRI assessment, and to lead to improved OA care. References: [1] Sharif B, Garner R, Hennessy D, Sanmartin C, Flanagan WM, Marshall DA. Productivity costs of work loss associated with osteoarthritis in Canada from 2010 to 2031. Osteoarthritis Cartilage. 2017 Feb;25(2):249–58. [2] Sharif B, Kopec J, Bansback N, Rahman MM, Flanagan WM, Wong H, et al. Projecting the direct cost burden of osteoarthritis in Canada using a microsimulation model. Osteoarthritis Cartilage. 2015 Oct;23(10):1654–63. [3] Loeuille D, Chary-Valckenaere I, Champigneulle J, Rat A-C, Toussaint F, Pinzano-Watrin A, et al. Macroscopic and microscopic features of synovial membrane inflammation in the osteoarthritic knee: correlating magnetic resonance imaging findings with disease severity. Arthritis Rheum. 2005 Nov;52(11):3492–501. [4] Fernandez-Madrid F, Karvonen RL, Teitge RA, Miller PR, An T, Negendank WG. Synovial thickening detected by MR imaging in osteoarthritis of the knee confirmed by biopsy as synovitis. Magn Reson Imaging. 1995;13(2):177–83. [5] Atukorala I, Kwoh CK, Guermazi A, Roemer FW, Boudreau RM, Hannon MJ, et al. Synovitis in knee osteoarthritis: a precursor of disease? Ann Rheum Dis. 2016 Feb;75(2):390–5. [6] Quinn-Laurin V, Thejeel B, Chauvin NA, Brandon TG, Weiss PF, Jaremko JL.
Normal hip joint fluid volumes in healthy children of different ages, based on MRI volumetric quantitative measurement. Pediatr Radiol. 2020 Oct;50(11):1587–93. [7] He K, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision; 2017. p. 2961–9. Acknowledgements: Jacob Jaremko is supported by the AHS Chair in Diagnostic Imaging at the University of Alberta. Medical Imaging Consultants (MIC) funds musculoskeletal radiology fellowships for Vanessa Quinn-Laurin at the University of Alberta, and provides Jacob Jaremko and Robert Lambert with protected academic time. Banafshe Felfeliyan is supported by an Alberta Innovates Graduate Student Scholarship for Data-Enabled Innovation. Disclosure of Interests: None declared.
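The ICC values in Table 1 are single-measure intraclass correlations between pairs of readers. A minimal pure-Python sketch of ICC(2,1) following the Shrout-Fleiss two-way random-effects ANOVA formulation (the abstract does not state which ICC variant the authors used, so this choice is an assumption), with hypothetical volume readings.

```python
def icc_2_1(data):
    """Two-way random-effects, single-measure ICC(2,1), Shrout & Fleiss
    ANOVA formulation. `data` is a list of subjects, each a list holding
    one measurement per reader."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between-subject
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between-reader
    sst = sum((x - grand) ** 2 for row in data for x in row)
    sse = sst - ssr - ssc                                # residual
    msr, msc = ssr / (n - 1), ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Identical readings from both readers -> perfect reliability
print(icc_2_1([[5, 5], [7, 7], [9, 9], [12, 12]]))  # 1.0
```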
APA, Harvard, Vancouver, ISO, and other styles
40

Syer, Tom, Pritesh Mehta, Michela Antonelli, Sue Mallett, David Atkinson, Sébastien Ourselin, and Shonit Punwani. "Artificial Intelligence Compared to Radiologists for the Initial Diagnosis of Prostate Cancer on Magnetic Resonance Imaging: A Systematic Review and Recommendations for Future Studies." Cancers 13, no. 13 (July 1, 2021): 3318. http://dx.doi.org/10.3390/cancers13133318.

Full text
Abstract:
Computer-aided diagnosis (CAD) of prostate cancer on multiparametric magnetic resonance imaging (mpMRI), using artificial intelligence (AI), may reduce missed cancers and unnecessary biopsies, increase inter-observer agreement between radiologists, and alleviate pressures caused by rising case incidence and a shortage of specialist radiologists to read prostate mpMRI. However, well-designed evaluation studies are required to prove efficacy above current clinical practice. A systematic search of the MEDLINE, EMBASE, and arXiv electronic databases was conducted for studies that compared CAD for prostate cancer detection or classification on MRI against radiologist interpretation and a histopathological reference standard, in treatment-naïve men with a clinical suspicion of prostate cancer. Twenty-seven studies were included in the final analysis. Due to substantial heterogeneities in the included studies, a narrative synthesis is presented. Several studies reported superior diagnostic accuracy for CAD over radiologist interpretation on small, internal patient datasets, though this was not observed in the few studies that performed evaluation using external patient data. Our review found insufficient evidence to suggest the clinical deployment of artificial intelligence algorithms at present. Further work is needed to develop and enforce methodological standards, promote access to large diverse datasets, and conduct prospective evaluations before clinical adoption can be considered.
APA, Harvard, Vancouver, ISO, and other styles
41

Tosun, Akif Burak, and S. Chakra Chennubhotla. "Abstract P6-04-12: Differential diagnoses of breast biopsies by spatial parametric modeling of histological structures and explainable AI." Cancer Research 83, no. 5_Supplement (March 1, 2023): P6–04–12—P6–04–12. http://dx.doi.org/10.1158/1538-7445.sabcs22-p6-04-12.

Full text
Abstract:
Abstract Background: Pathologists typically diagnose the breast tissue slides under a microscope by examining: (i) lumen and ductal morphology, (ii) nuclei size, shape, and spatial arrangement and their combinations, (iii) intraductal architecture, and (iv) textural properties. These features may be subtle and can overlap between diagnoses which contribute to inter- and intra-observer variability. We aim to mitigate this arbitrary nature of breast diagnoses with an exemplar-driven precision pathology pipeline based on spatial parametric modeling of histological structures. Methods: For our study, we consider a broad spectrum of breast biopsies including: (i) invasive breast cancer, (ii) three high-risk benign lesions: ductal carcinoma in-situ, atypical ductal hyperplasia (ADH), flat epithelial atypia (FEA), and (iii) three low-risk benign lesions: usual ductal hyperplasia, columnar cell change and Normal; where the risk is indicated by the relative chance of developing breast cancer. We build spatial parametric models for a dictionary of histological structures that pathologists frequently use (also documented in the standard reference book from WHO on the classification of tumors) in making complex diagnostic decisions. These models enable our precision pathology pipeline to simultaneously identify distinct exemplar images to account for inter-class heterogeneity, and learn the relative importance of lumen/ductal morphology (LD), intraductal structures including nuclei morphology and spatial arrangements (ID) and textural features (T) from automatically identified exemplar images in assigning diagnostic labels. In doing so, we assert that the assignment of relative importances to LD, ID, and T features is driven by similar looking ducts (‘exemplars’) which were previously encountered during pathology training or clinical practice. 
Results: We evaluated the inferential power of our exemplar-driven precision pathology pipeline on two separate breast core biopsy image datasets: (i) a dataset containing 4539 region-of-interest (ROI) images extracted from 387 whole slide images (WSIs, 40x), and (ii) a dataset containing 1237 ROI images extracted from 93 WSIs (20x). Our precision pathology pipeline shows significant improvement (~20%) in overall classification performance compared to state-of-the-art black-box deep learning methods (e.g., graph neural networks) on both datasets. In particular, while our performance in detecting invasive lesions is comparable to baseline methods, we show a significant improvement (p < 0.01) in detecting diagnostically important high-risk ADH and FEA ROIs compared to the baseline methods, where inter- and intra-observer variability is a problem. Conclusions: A key highlight of our method is its ability to provide pathologist-friendly diagnostic explanations without largely compromising classification performance. The strategy outlined in this work can be generalized to other tissue histologies from other organs as defined in the WHO Classification of Tumors books. Further, our approach can facilitate a communication platform for pathologists and computational scientists to interact and develop AI-driven algorithmic tools that can enhance patient care in a clinical setting. Our framework provides pathologist-friendly explanations, paving the way for better, transparent, and trustworthy diagnostic tools. Differential diagnoses of breast biopsies: the precision pathology pipeline optimizes the identification of a broad spectrum of breast biopsies (invasive, DCIS, benign), including difficult borderline cases (e.g., ADH, FEA). It provides pathologist-friendly explanations integrated into a clinical workflow for better, transparent, and trustworthy diagnostic aid. This approach addresses the limitations of standard black-box AI in building trust with pathologists. Citation Format: Akif Burak Tosun, S. Chakra Chennubhotla. Differential diagnoses of breast biopsies by spatial parametric modeling of histological structures and explainable AI [abstract]. In: Proceedings of the 2022 San Antonio Breast Cancer Symposium; 2022 Dec 6-10; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2023;83(5 Suppl):Abstract nr P6-04-12.
APA, Harvard, Vancouver, ISO, and other styles
42

Al Fryan, Latefa Hamad, Mahasin Ibrahim Shomo, and Malik Bader Alazzam. "Application of Deep Learning System Technology in Identification of Women’s Breast Cancer." Medicina 59, no. 3 (March 1, 2023): 487. http://dx.doi.org/10.3390/medicina59030487.

Full text
Abstract:
Background and Objectives: The classification of breast cancer is performed based on its histological subtypes using the degree of differentiation. However, there have been low levels of intra- and inter-observer agreement in the process. The use of convolutional neural networks (CNNs) in the field of radiology has shown potential in categorizing medical images, including the histological classification of malignant neoplasms. Materials and Methods: This study aimed to use CNNs to develop an automated approach to aid in the histological classification of breast cancer, with a focus on improving accuracy, reproducibility, and reducing subjectivity and bias. The study identified regions of interest (ROIs), filtered images with low representation of tumor cells, and trained the CNN to classify the images. Results: The major contribution of this research was the application of CNNs as a machine learning technique for histologically classifying breast cancer using medical images. The study resulted in the development of a low-cost, portable, and easy-to-use AI model that can be used by healthcare professionals in remote areas. Conclusions: This study aimed to use artificial neural networks to improve the accuracy and reproducibility of the process of histologically classifying breast cancer and reduce the subjectivity and bias that can be introduced by human observers. The results showed the potential for using CNNs in the development of an automated approach for the histological classification of breast cancer.
APA, Harvard, Vancouver, ISO, and other styles
43

Luo, Junren, Xiang Ji, Wei Gao, Wanpeng Zhang, and Shaofei Chen. "Goal Recognition Control under Network Interdiction Using a Privacy Information Metric." Symmetry 11, no. 8 (August 17, 2019): 1059. http://dx.doi.org/10.3390/sym11081059.

Full text
Abstract:
Goal recognition (GR) is a method of inferring the goals of other agents, which enables humans or AI agents to proactively make response plans. Goal recognition design (GRD) has been proposed to deliberately redesign the underlying environment to accelerate goal recognition. Building on the GR and GRD problems, in this paper we introduce the goal recognition control (GRC) problem under network interdiction, which focuses on controlling the goal recognition process. While the observer attempts to facilitate the explainability of the actor’s behavior and accelerate goal recognition by reducing uncertainty, the actor wants to minimize privacy information leakage by manipulating the asymmetric information and delaying the goal recognition process. The GRC under network interdiction is then formulated as a static Stackelberg game, where the observer obtains asymmetric information about the actor’s intended goal and proactively interdicts the edges of the network with a bounded resource. The privacy leakage of the actor’s actions about the real goals is quantified by a min-entropy information metric, and this privacy information metric is associated with the goal uncertainty. Next, we define the privacy-information-metric-based GRC under network interdiction (InfoGRC) and under threshold network interdiction (InfoGRCT). After dual reformulation of the InfoGRC and InfoGRCT as bi-level mixed-integer programming problems, a Benders decomposition-based approach is adopted to optimize the observer’s interdiction resource allocation and the actor’s cost-optimal path planning. Finally, experimental evaluations demonstrate the effectiveness of the InfoGRC and InfoGRCT models in the task of controlling the goal recognition process.
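The min-entropy information metric referenced above can be illustrated concretely. Following Smith's definition of min-entropy leakage (prior vulnerability max_x p(x) versus posterior vulnerability after observing the channel output), a small Python sketch with a hypothetical goal distribution and observation channel; the channel matrices here are toy examples, not the paper's network model.

```python
import math

def min_entropy_leakage(prior, channel):
    """Smith's min-entropy leakage: log2(V_posterior / V_prior), where
    V_prior = max_x p(x) is the observer's prior chance of guessing the
    goal, and V_posterior = sum_y max_x p(x) * C[x][y] is that chance
    after seeing the output of channel C (rows: goals, cols: observations)."""
    v_prior = max(prior)
    v_post = sum(max(prior[x] * channel[x][y] for x in range(len(prior)))
                 for y in range(len(channel[0])))
    return math.log2(v_post / v_prior)

# A perfectly revealing (identity) channel over 4 equiprobable goals
# leaks log2(4) = 2 bits; a constant channel would leak 0 bits.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(min_entropy_leakage([0.25] * 4, identity))  # 2.0
```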
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Si-Wook, Hee-Uk Ye, Kyung-Jae Lee, Woo-Young Jang, Jong-Ha Lee, Seok-Min Hwang, and Yu-Ran Heo. "Accuracy of New Deep Learning Model-Based Segmentation and Key-Point Multi-Detection Method for Ultrasonographic Developmental Dysplasia of the Hip (DDH) Screening." Diagnostics 11, no. 7 (June 28, 2021): 1174. http://dx.doi.org/10.3390/diagnostics11071174.

Full text
Abstract:
Hip joint ultrasonographic (US) imaging is the gold standard for developmental dysplasia of the hip (DDH) screening. However, the effectiveness of this technique is subject to interoperator and intraobserver variability. Thus, a multi-detection deep learning artificial intelligence (AI)-based computer-aided diagnosis (CAD) system was developed and evaluated. The deep learning model used a two-stage training process to segment the four key anatomical structures and extract their respective key points. In addition, the check angle of the ilium body balancing level was set to evaluate the system’s cognitive ability. Hence, only images with visible key anatomical points and a check angle within ±5° were used in the analysis. Of the original 921 images, 320 (34.7%) were deemed appropriate for screening by both the system and the human observer. Moderate agreement (80.9%) was seen in the check angles of the appropriate group (Cohen’s κ = 0.525). Similarly, there was excellent agreement in the intraclass correlation coefficient (ICC) between measurers for the alpha angle (ICC = 0.764) and good agreement for the beta angle (ICC = 0.743). The developed system performed similarly to experienced medical experts; thus, it could further aid the effectiveness and speed of DDH diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
45

Marty, M., D. Gaye, P. Perez, C. Auder, M. L. Nunes, A. Ferriere, M. Haissaguerre, and A. Tabarin. "Diagnostic accuracy of computed tomography to identify adenomas among adrenal incidentalomas in an endocrinological population." European Journal of Endocrinology 178, no. 5 (May 2018): 439–46. http://dx.doi.org/10.1530/eje-17-1056.

Full text
Abstract:
Context: The recent recommendations of the European Endocrine Society state that the performance of computed tomography (CT) to characterize ‘true’ adrenal incidentalomas (AIs) remains debatable. Objective: To determine relevant thresholds for usual CT parameters for the diagnosis of benign tumors using a robust reference standard among a large series of ‘true’ AIs recruited in an endocrinological setting. Design: Retrospective study of 253 AIs in 233 consecutive patients explored in a single university hospital: 183 adenomas, 33 pheochromocytomas, 23 adrenocortical carcinomas, 5 other malignant tumors and 9 other benign tumors. The reference standard was histopathology in 118 AIs, biological diagnosis of pheochromocytoma in 2 AIs and size stability after at least 1 year of follow-up in 133 AIs. Methods: Sensitivity, specificity and positive and negative predictive values were estimated for various thresholds of size, unenhanced attenuation (UA), and relative and absolute wash-out (RPW, APW) of contrast media. 197 scans were reviewed independently in a blinded fashion by two expert radiologists to assess inter-observer reproducibility of measurements. Results: Criteria associated with a 100% positive predictive value for the diagnosis of benign AI were: a combination of size and UA (30 mm and 20 HU, or 40 mm and 15 HU, respectively); RPW >53%; and APW >78%. Non-adenomatous AIs with rapid contrast wash-out were exclusively benign pseudocysts and pheochromocytomas, suggesting that the classical thresholds of 60% and 40% for APW and RPW, respectively, can be safely used for patients with normal metanephrine values. Inter-observer reproducibility of all parameters was excellent (intra-class correlation coefficients: 0.96–0.99). Conclusions: Our study, the largest conducted in AIs recruited in an endocrinological setting, suggests safe thresholds for quantitative CT parameters to avoid false diagnoses of benignity.
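The washout parameters above follow the conventional adrenal CT formulas, computed from unenhanced, enhanced, and delayed attenuation; the attenuation values below are hypothetical, not from the study.

```python
def absolute_washout(unenhanced, enhanced, delayed):
    """APW (%) = (enhanced - delayed) / (enhanced - unenhanced) * 100."""
    return (enhanced - delayed) / (enhanced - unenhanced) * 100

def relative_washout(enhanced, delayed):
    """RPW (%) = (enhanced - delayed) / enhanced * 100."""
    return (enhanced - delayed) / enhanced * 100

# Hypothetical attenuation values (HU) for one lesion.
u, e, d = 10, 90, 30
apw = absolute_washout(u, e, d)   # 75.0
rpw = relative_washout(e, d)      # ≈ 66.7
# Against the study's 100%-PPV thresholds (APW > 78%, RPW > 53%):
print(apw > 78, rpw > 53)  # False True
```

In this hypothetical case the lesion would meet the RPW criterion for benignity but not the stricter APW one.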
APA, Harvard, Vancouver, ISO, and other styles
46

Yang, Xu, Hongyan Xing, Xinyuan Ji, and Zenghui Wang. "Smart Localization of Thunderstorm Charge for Human 4IR Applications." Mobile Information Systems 2021 (April 16, 2021): 1–15. http://dx.doi.org/10.1155/2021/5548350.

Full text
Abstract:
The relationship between smart devices and human beings is one of the research hotspots of the Fourth Industrial Revolution (4IR). In this regard, we explored the practical relationship between the 3D electric field components measured by the smart 3D atmospheric electric field apparatus (AEFA) and thunderstorm activity from the perspective of the observer. In particular, in the application of the AEFA, a smart calibration method is proposed to solve the problem of inconvenient thunderstorm data acquisition. First, in order to obtain the thunderstorm charge position from the observation angle of the apparatus, this paper establishes a 3D electric field measurement model. According to mirror-method theory, we further obtain the charge potential distribution at the AEFA. The electric field components are then derived by using the potential distribution formula with permittivity. In addition, based on the vector relations of the model, the thunderstorm charge azimuth and elevation angles are obtained. Finally, after the establishment of a new coordinate system, the calibration of charge localization is carried out based on the observation point. Meanwhile, a preliminary solution is given to the problem that the elevation of the apparatus position affects the localization performance. Results show that the method matches radar-map and microphone-array data, demonstrating its advantages. Moreover, this method can be used not only in sound-source localization but also in an AI thunderstorm monitoring system to realize big-data networked observation.
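In simplified form, the azimuth and elevation angles the paper derives can be illustrated as the direction angles of the measured 3D field vector; the full mirror-method derivation is in the paper, and this sketch only shows the angle geometry with made-up component values.

```python
import math

def direction_angles(ex, ey, ez):
    """Azimuth (from the x-axis in the horizontal plane) and elevation of a
    3D vector, in degrees. Illustrative only: the paper obtains the charge
    direction from its mirror-method field model, not raw components."""
    azimuth = math.degrees(math.atan2(ey, ex))
    elevation = math.degrees(math.atan2(ez, math.hypot(ex, ey)))
    return azimuth, elevation

# Hypothetical field components: equal x and y, vertical part sized for 45°.
az, el = direction_angles(1.0, 1.0, math.sqrt(2.0))
print(az, el)  # both ≈ 45.0
```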
APA, Harvard, Vancouver, ISO, and other styles
47

Solitano, Virginia, Ferdinando D’Amico, Mariangela Allocca, Gionata Fiorino, Alessandra Zilli, Laura Loy, Daniela Gilardi, et al. "Rediscovering histology: what is new in endoscopy for inflammatory bowel disease?" Therapeutic Advances in Gastroenterology 14 (January 2021): 175628482110056. http://dx.doi.org/10.1177/17562848211005692.

Full text
Abstract:
The potential of endoscopic evaluation in the management of inflammatory bowel diseases (IBD) has undoubtedly grown over the last few years. When dealing with IBD patients, histological remission (HR) is now considered a desirable target along with symptomatic and endoscopic remission, due to its association with better long-term outcomes. Consequently, the ability of endoscopic techniques to reflect microscopic findings in vivo without having to collect biopsies has become of the utmost importance. In this context, a more accurate evaluation of inflammatory disease activity and the detection of dysplasia represent two mainstay targets for IBD endoscopists. New diagnostic technologies have been developed, such as dye-less chromoendoscopy, endomicroscopy, and molecular imaging, but their real incorporation in daily practice is not yet well defined. Although dye-chromoendoscopy is still recommended as the gold standard approach in dysplasia surveillance, recent research questioned the superiority of this technique over new advanced dye-less modalities [narrow band imaging (NBI), Fuji intelligent color enhancement (FICE), i-scan, blue light imaging (BLI) and linked color imaging (LCI)]. The endoscopic armamentarium might also be enriched by new video capsule endoscopy for monitoring disease activity, and high expectations are placed on the application of artificial intelligence (AI) systems to reduce operator subjectivity and inter-observer variability. The goal of this review is to provide an updated insight on contemporary knowledge regarding new endoscopic techniques and devices, with special focus on their role in the assessment of disease activity and colorectal cancer surveillance.
APA, Harvard, Vancouver, ISO, and other styles
48

Murray, William. "Intelligent Tutoring Systems for Commercial Games: The Virtual Combat Training Center Tutor and Simulation." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 2, no. 1 (September 29, 2021): 66–71. http://dx.doi.org/10.1609/aiide.v2i1.18748.

Full text
Abstract:
Game manuals and tutorial scenarios are insufficient for new players to learn games of deep complexity such as highly realistic tactical simulations of modern battlefields. Adding post-game after-action reviews improves the situation, but these typically do not provide guidance during the mission and tend to focus on quantitative feedback, rather than specifics about what the player did wrong and how to improve. Intelligent tutoring system (ITS) technology provides a higher level of interactivity and a more specific qualitative analysis to guide players during game play. This use of an AI technology is demonstrated with the integration of an ITS component with the tactical simulation Armored Task Force (ATF), resulting in a combined system called the Virtual Combat Training Center (V-CTC). V-CTC simulates the Army's combat training center at Fort Irwin and its instructors, called observer/controllers. The ATF game itself was modified to send an event stream over TCP/IP sockets to the ITS component, which interprets the events and acts accordingly. V-CTC was originally intended for a military context: either classroom use, field instruction, or embedded deployment. However, in non-military games, tutors (or non-player characters acting in that role) may well enhance the gaming experience of players. Such players might otherwise become frustrated with learning very challenging games, or simply fail to appreciate the tactical possibilities and depth of strategy possible in a well-designed game.
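The game-to-tutor event stream described above can be sketched as newline-delimited JSON over a socket; the event names and wire format below are hypothetical, since the paper does not specify ATF's actual protocol.

```python
import json
import socket

def encode_event(name, payload):
    """Serialize one game event as a newline-delimited JSON record
    (hypothetical wire format; the real ATF event schema is not public)."""
    return (json.dumps({"event": name, "data": payload}) + "\n").encode()

# A local socket pair stands in for the game -> ITS connection.
game, tutor = socket.socketpair()
game.sendall(encode_event("unit_moved", {"unit": "A1", "x": 3, "y": 7}))
game.close()

# The ITS side reads one event per line and dispatches on its name.
received = tutor.makefile().readline()
print(json.loads(received)["event"])  # unit_moved
tutor.close()
```

Keeping the stream one-record-per-line lets the tutor interpret events incrementally during the mission rather than only in an after-action review.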
APA, Harvard, Vancouver, ISO, and other styles
49

Ravnborgl, M., M. Grønbech-Jensen, and A. Jønsson. "The MS impairment scale: a pragmatic approach to the assessment of impairment in patients with multiple sclerosis." Multiple Sclerosis Journal 3, no. 1 (February 1997): 31–41. http://dx.doi.org/10.1177/135245859700300104.

Full text
Abstract:
We developed a measurement scale for assessment of impairment in MS patients (MSIS) in accordance with the recommendations of the WHO. The items were kept close to a standard neurologic examination, and a short battery of cognitive tests was added. Normality was assigned the value zero, and the theoretical maximum score was 204. Two hundred and ten multiple sclerosis (MS) patients were rated by one neurologist on the MS Impairment Scale (MSIS), the Extended Disability Status Scale (EDSS), and the Ambulation Index (AI). The median MSIS score was 52 (5-147), the median EDSS 6.5 (1-9.5), and the median AI 5 (1-5). The relation between the MSIS scores and the EDSS was best described by an exponential function (non-linear regression coefficient, R=0.87). Sixty-two of the patients were reexamined and rated by another neurologist. The interrater reliability coefficient (R) of the MSIS was 0.95, of the EDSS 0.91, and of the AI 0.94. Forty patients were examined twice by the first neurologist. The intrarater reliability coefficient was 0.97 for the MSIS, 0.95 for the EDSS, and 0.98 for the AI. The MSIS is easy to use and is robust to observer dissimilarities. It has a monomodal univariate distribution and better discriminatory power than the EDSS, especially in the EDSS range 6-9, while the interrater reliability of the MSIS is at least as good as that of the EDSS and the AI.
APA, Harvard, Vancouver, ISO, and other styles
50

Ingargiola, Giorgio P., and Judith D. Wilson. "The introductory undergraduate AI course as observed on WWW." ACM SIGART Bulletin 6, no. 3 (July 1995): 2–6. http://dx.doi.org/10.1145/208628.208629.

Full text
APA, Harvard, Vancouver, ISO, and other styles
