Journal articles on the topic "Intra-manual processing"

Follow this link to see other types of publications on the topic: Intra-manual processing.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Intra-manual processing".

Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Kubicek, Claudia, and Gudrun Schwarzer. "On the Relation Between Infants’ Spatial Object Processing and Their Motor Skills". Journal of Motor Learning and Development 6, s1 (April 2018): S6–S23. http://dx.doi.org/10.1123/jmld.2016-0062.

Abstract:
From birth, infants encounter an environment full of objects and learn rapidly about their spatial characteristics. According to Newcombe, Uttal, and Sauter (2013), spatial development includes (1) the development of intraobject representations with the ability to transform them by mental rotation, and (2) the development of interobject representations with the ability to find and predict certain object locations. Infants’ remarkable improvements in these two strands of spatial object processing raise the major question of which factors may drive them. In this article, we discuss the extent to which infants’ development of intra- and interobject representations is related to their emerging motor skills. In particular, we review how far infants’ development of mental object rotation ability and their ability to localize objects are related to their manual object exploration and locomotion skills. We document a substantial body of evidence suggesting such a link between infants’ motor development and their spatial object processing, and we also discuss and critically reconsider the implications of these studies.
2

Dangol, Raksha, Prashna Adhikari, Pranjal Dahal, and Hrizu Sharma. "Short Updates- Machine Learning Based News Summarizer". Journal of Advanced College of Engineering and Management 8, no. 2 (June 23, 2023): 15–25. http://dx.doi.org/10.3126/jacem.v8i2.55939.

Abstract:
Automated Text Summarization is becoming important due to the vast amount of data being generated. Manual processing of documents is tedious, mostly due to the absence of standards. Therefore, there is a need for a mechanism to reduce text size, structure it, and make it readable for users. Natural Language Processing (NLP) is critical for analyzing large amounts of unstructured, text-heavy data. This project aims to address concerns with extractive and abstractive text summarization by introducing a new neural network model that deals with repetitive and incoherent phrases in longer documents. The model incorporates a novel Seq2Seq architecture that enhances the standard attentional model with an intra-attention mechanism. Additionally, a new training method that combines supervised word prediction and reinforcement learning is employed. The model utilizes a hybrid pointer-generator network, which distinguishes it from the standard encoder-decoder model. This approach produces higher quality summaries than existing models.
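The intra-attention mechanism named above (attention over the decoder's own previous hidden states, which discourages repeated phrases, in the spirit of Paulus et al.'s intra-decoder attention) can be sketched in a few lines. A minimal NumPy illustration, with dot-product scoring and all names chosen here for illustration rather than taken from the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_decoder_attention(h_t, prev_states):
    """Attend over the decoder's own earlier hidden states.

    h_t: (d,) current decoder state; prev_states: (t-1, d) earlier states.
    Returns a context vector summarizing what has already been generated,
    which helps the model avoid producing the same phrase twice.
    """
    if prev_states.shape[0] == 0:
        return np.zeros_like(h_t)      # nothing generated yet
    scores = prev_states @ h_t         # dot-product attention scores
    weights = softmax(scores)          # normalize over past decoding steps
    return weights @ prev_states       # weighted sum -> context vector

# toy usage: 4 past decoder states of dimension 8
rng = np.random.default_rng(0)
ctx = intra_decoder_attention(rng.normal(size=8), rng.normal(size=(4, 8)))
print(ctx.shape)  # (8,)
```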
3

Liu, Yung-Chun, Hsin-Chen Chen, Hui-Hsuan Shih, Tai-Hua Yang, Hsiao-Bai Yang, Dee-Shan Yang, Fong-Chin Su, and Yung-Nien Sun. "Computer Aided Quantification of Pathological Features for Flexor Tendon Pulleys on Microscopic Images". Computational and Mathematical Methods in Medicine 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/914124.

Abstract:
Quantifying the pathological features of flexor tendon pulleys is essential for grading the trigger finger since it provides clinicians with objective evidence derived from microscopic images. Manual grading is time-consuming and dependent on observer experience, yet there is a lack of image processing methods for automatically extracting pulley pathological features. In this paper, we design and develop a color-based image segmentation system to extract the color and shape features from pulley microscopic images. Two parameters, the size ratio of abnormal tissue regions and the number ratio of abnormal nuclei, are estimated as the pathological progression indices. The automatic quantification results show clear discrimination among different levels of diseased pulley specimens, which are prone to misjudgment under human visual inspection. The proposed system provides a reliable and automatic way to obtain pathological parameters instead of manual evaluation, which is subject to intra- and inter-operator variability. Experiments with 290 microscopic images from 29 pulley specimens show good correspondence with pathologist expectations. Hence, the proposed system has great potential for assisting clinical experts in routine histopathological examinations.
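The two progression indices described, a size ratio over tissue area and a number ratio over nuclei counts, reduce to simple mask arithmetic once segmentation is done. A rough sketch assuming boolean masks as input (the function and mask names are hypothetical, and the paper's color-based segmentation itself is not reproduced here):

```python
from skimage import measure

def pathology_ratios(abnormal_tissue, tissue, abnormal_nuclei, nuclei):
    """Estimate the two progression indices from boolean masks:
    the area ratio of abnormal tissue and the count ratio of abnormal nuclei."""
    size_ratio = abnormal_tissue.sum() / max(tissue.sum(), 1)
    n_abnormal = measure.label(abnormal_nuclei).max()  # connected components
    n_total = measure.label(nuclei).max()
    return size_ratio, n_abnormal / max(n_total, 1)
```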
4

Lee, Juhwan, Justin N. Kim, Lia Gomez-Perez, Yazan Gharaibeh, Issam Motairek, Gabriel T. R. Pereira, Vladislav N. Zimin et al. "Automated Segmentation of Microvessels in Intravascular OCT Images Using Deep Learning". Bioengineering 9, no. 11 (November 3, 2022): 648. http://dx.doi.org/10.3390/bioengineering9110648.

Abstract:
Microvessels in vascular plaque are associated with plaque progression and are found in plaque rupture and intra-plaque hemorrhage. To analyze this characteristic of vulnerability, we developed an automated deep learning method for detecting microvessels in intravascular optical coherence tomography (IVOCT) images. A total of 8403 IVOCT image frames from 85 lesions and 37 normal segments were analyzed. Manual annotation was performed using dedicated software (OCTOPUS) previously developed by our group. Data augmentation in the polar (r,θ) domain was applied to raw IVOCT images to ensure that microvessels appear at all possible angles. Pre-processing methods included guidewire/shadow detection, lumen segmentation, pixel shifting, and noise reduction. DeepLab v3+ was used to segment microvessel candidates. A bounding box on each candidate was classified as either microvessel or non-microvessel using a shallow convolutional neural network. For better classification, we used data augmentation (i.e., angle rotation) on bounding boxes with a microvessel during network training. Data augmentation and pre-processing steps improved microvessel segmentation performance significantly, yielding a method with a Dice coefficient of 0.71 ± 0.10 and pixel-wise sensitivity/specificity of 87.7 ± 6.6%/99.8 ± 0.1%. The network for classifying microvessels from candidates performed exceptionally well, with sensitivity of 99.5 ± 0.3%, specificity of 98.8 ± 1.0%, and accuracy of 99.1 ± 0.5%. The classification step eliminated the majority of residual false positives, and the Dice coefficient increased from 0.71 to 0.73. In addition, our method produced 698 image frames with microvessels present, compared with 730 from manual analysis, representing a 4.4% difference. When compared with the manual method, the automated method improved microvessel continuity, implying improved segmentation performance. The method will be useful for research purposes as well as potential future treatment planning.
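One convenient property of working in the polar (r,θ) domain, as described above, is that a rotation of the vessel about the catheter axis is just a circular shift along the θ axis. A minimal sketch of that augmentation plus the reported Dice metric (array shapes and names are assumptions, not the authors' code):

```python
import numpy as np

def polar_rotation_augment(polar_img, n_rotations):
    """Rotation augmentation for polar-domain (r, theta) IVOCT frames.

    In polar coordinates, rotating the frame about the catheter is a
    circular shift along theta, so features appear at all angles.
    polar_img: (n_r, n_theta) array.
    """
    n_theta = polar_img.shape[1]
    shifts = np.linspace(0, n_theta, n_rotations, endpoint=False).astype(int)
    return [np.roll(polar_img, s, axis=1) for s in shifts]

def dice(pred, target):
    """Dice coefficient between two boolean masks (cf. the 0.71-0.73 above)."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / max(pred.sum() + target.sum(), 1)
```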
5

Jefferies-Sewell, Kiri, Samuel R. Chamberlain, Naomi A. Fineberg, and Keith R. Laws. "Cognitive dysfunction in body dysmorphic disorder: new implications for nosological systems and neurobiological models". CNS Spectrums 22, no. 1 (November 30, 2016): 51–60. http://dx.doi.org/10.1017/s1092852916000468.

Abstract:
Introduction: Body dysmorphic disorder (BDD) is a debilitating disorder, characterized by obsessions and compulsions relating specifically to perceived appearance, and which has been newly classified within the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) Obsessive-Compulsive and Related Disorders grouping. Until now, little research has been conducted into the cognitive profile of this disorder. Methods: Participants with BDD (n=12) and participants without BDD (n=16) were tested using a computerized neurocognitive battery investigating attentional set-shifting (Intra/Extra Dimensional Set Shift Task), decision-making (Cambridge Gamble Task), motor response-inhibition (Stop-Signal Reaction Time Task), and affective processing (Affective Go-No Go Task). The groups were matched for age, IQ, and education. Results: In comparison to controls, patients with BDD showed significantly impaired attentional set-shifting, abnormal decision-making, impaired response inhibition, and greater omission and commission errors on the emotional processing task. Conclusion: Despite the modest sample size, our results showed that individuals with BDD performed poorly compared to healthy controls on tests of cognitive flexibility, reward and motor impulsivity, and affective processing. Results from separate studies in OCD patients suggest similar cognitive dysfunction. Therefore, these findings are consistent with the reclassification of BDD alongside OCD. These data also hint at additional areas of decision-making abnormalities that might contribute specifically to the psychopathology of BDD.
6

Pradita Nuramalia, Putri, Suryono Suryono, and Edy Susanto. "Automatic Contouring Analysis of Lung Cancer for Radiotherapy Radiation Planning from Ct-Simulator Image". Asian Journal of Healthy and Science 2, no. 10 (October 23, 2023): 423–28. http://dx.doi.org/10.58631/ajhs.v2i10.81.

Abstract:
Accurate determination of tumor and radiotherapy target volume is crucial to prevent local and regional failure in lung cancer. Radiotherapy planning considers critical normal tissue structures or Organs at Risk (OAR). While manual contouring is the gold standard in radiotherapy planning, it is still susceptible to intra- and inter-observer variation. A digital image processing system capable of automatically and semi-automatically determining tumor targets and OARs through computer programming can assist in the contouring process. The study aims to create a digital contouring computer program to aid in planning radiotherapy irradiation for lung cancer cases, guided by CT-Simulator images. The research involved contouring CT-Simulator images of lung cancer cases using automatic segmentation, semi-automatic segmentation, and a combination of automatic, semi-automatic, and manual segmentation. Contour accuracy was assessed by a radiation oncologist. Automatic segmentation showed high accuracy for lungs (>95%), moderate for tumor targets (72.22%) and the heart (73.33%), but low accuracy for the esophagus (34.44%) and spinal cord (36.67%). Semi-automatic segmentation achieved high accuracy for the esophagus (96.67%) and spinal cord (95.56%), and moderate accuracy for tumor targets (73.33%), heart (66.67%), right lung (66.25%), and left lung (72.94%). The combination of automatic, semi-automatic, and manual segmentation resulted in high accuracy for tumor targeting and OARs (>95%). Automatic or semi-automatic segmentation using location and gray level classification methods for lung cancer cases did not produce optimal results. However, a contouring program combining automatic, semi-automatic, and manual segmentation proved more effective, balancing human interaction and automation in the lung cancer contouring process. This digital contouring program offers valuable support for radiation oncologists, potentially leading to improved treatment outcomes for lung cancer patients.
7

Giannini, Paolo, Giulia Bassani, Carlo Alberto Avizzano, and Alessandro Filippeschi. "Wearable Sensor Network for Biomechanical Overload Assessment in Manual Material Handling". Sensors 20, no. 14 (July 11, 2020): 3877. http://dx.doi.org/10.3390/s20143877.

Abstract:
The assessment of risks due to biomechanical overload in manual material handling is nowadays mainly based on observational methods in which an expert rater visually inspects videos of the working activity. Currently available wearable sensing technologies for motion and muscular activity capture make it possible to advance risk assessment by providing reliable, repeatable, and objective measures. However, existing solutions do not address either a full-body assessment or the inclusion of measures for the evaluation of effort. This article proposes a novel system for the assessment of biomechanical overload, capable of covering all areas of ISO 11228, that uses a sensor network composed of inertial measurement units (IMU) and electromyography (EMG) sensors. The proposed method is capable of gathering and processing data from three IMU-based motion capture systems and two EMG capture devices. Data are processed to provide both segmentation of the activity and an ergonomic risk score according to the methods reported in ISO 11228 and TR 12295. The system has been tested on a challenging outdoor scenario, the lift-on/lift-off of containers on a cargo ship. A comparison of the traditional evaluation method and the proposed one shows the consistency of the proposed system, its time effectiveness, and its potential for deeper analyses that include intra-subject and inter-subject variability as well as quantitative biomechanical analysis.
8

Karbaschi, Mahsa, Yunhee Ji, Mubarak A. Mujawar, Mario Mendoza, Juan S. Marquez, Apurva Sonawane, Pratikkumar Shah, Chris Ross, Shekhar Bhansali, and Marcus S. Cooke. "Development of a Novel, Automated High-Throughput Device for Performing the Comet Assay". International Journal of Molecular Sciences 24, no. 8 (April 13, 2023): 7187. http://dx.doi.org/10.3390/ijms24087187.

Abstract:
The comet assay is a trusted and widely used method for assessing DNA damage in individual eukaryotic cells. However, it is time-consuming and requires extensive monitoring and sample manipulation by the user. This limits the throughput of the assay, increases the risk of errors, and contributes to intra- and inter-laboratory variability. Here, we describe the development of a device which automates high-throughput sample processing for the comet assay. This device is based upon our patented, high-throughput, vertical comet assay electrophoresis tank, and incorporates our novel, patented combination of assay fluidics, temperature control, and a sliding electrophoresis tank to facilitate sample loading and removal. Additionally, we demonstrated that the automated device performs at least as well as our “manual” high-throughput system, but with all the advantages of a fully “walkaway” device, such as a decreased need for human involvement and a decreased assay run time. Our automated device represents a valuable, high-throughput approach for reliably assessing DNA damage with minimal operator involvement, particularly if combined with the automated analysis of comets.
9

Qureshi, Imran, Jun Ma, and Qaisar Abbas. "Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy". Symmetry 11, no. 6 (June 3, 2019): 749. http://dx.doi.org/10.3390/sym11060749.

Abstract:
Diabetic retinopathy (DR) is a complication of diabetes that exists throughout the world. DR occurs due to a high concentration of glucose in the blood, which causes alterations in the retinal microvasculature. Because DR presents without early warning symptoms, it can lead to complete vision loss. However, early screening through computer-assisted diagnosis (CAD) tools and proper treatment have the ability to control the prevalence of DR. Manual inspection of morphological changes in retinal anatomic parts is a tedious and challenging task. Therefore, many CAD systems were developed in the past to assist ophthalmologists in observing inter- and intra-variations. In this paper, a recent review of state-of-the-art CAD systems for the diagnosis of DR is presented. We describe all those CAD systems that have been developed by various computational intelligence and image processing techniques. The limitations and future trends of current CAD systems are also described in detail to help researchers. Moreover, potential CAD systems are also compared in terms of statistical parameters to quantitatively evaluate them. The comparison results indicate that there is still a need for accurate development of CAD systems to assist in the clinical diagnosis of diabetic retinopathy.
10

González-Naranjo, Justa Elizabeth, Maydelin Alfonso-Alfonso, Daymet Grass-Fernandez, Lilia María Morales-Chacón, Ivón Pedroso-Ibáñez, Yordanka Ricardo-de la Fe, and Arnoldo Padrón-Sánchez. "Analysis of Sleep Macrostructure in Patients Diagnosed with Parkinson’s Disease". Behavioral Sciences 9, no. 1 (January 8, 2019): 6. http://dx.doi.org/10.3390/bs9010006.

Abstract:
Patients diagnosed with Parkinson’s disease present sleep disorders with a higher frequency than the general population. The sleep architecture in these patients shows variations with respect to the normal population, so this work investigated the characteristics of the macroarchitecture of sleep in patients diagnosed with Parkinson’s disease. A polysomnographic study was carried out on 77 patients diagnosed with Parkinson’s disease. All the studies were processed according to the AASM Manual for the Scoring of Sleep and Associated Events v.2.2, and to the criteria of the International Classification of Sleep Disorders, 3rd ed. (2014). Processing was carried out using descriptive statistics, as well as non-parametric analysis for comparison between cases and controls. The group of patients showed significant reductions in the N2, N3, and REM sleep stages when compared with a control group, as well as a significant increase in intra-sleep wakefulness. The number of REM–NREM sleep cycles and sleep efficiency showed marked reductions compared to the control group. There was a statistically significant difference in the macroarchitecture of sleep between patients diagnosed with Parkinson’s disease and healthy controls.
11

Agrawal, Ritu, Manisha Sharma, and Bikesh Kumar Singh. "Segmentation of Brain Tumour Based on Clustering Technique: Performance Analysis". Journal of Intelligent Systems 28, no. 2 (April 24, 2019): 291–306. http://dx.doi.org/10.1515/jisys-2017-0027.

Abstract:
Manual detection and analysis of brain tumours is an exhaustive and time-consuming process. Further, it is subject to intra-observer and inter-observer variability. Automated brain tumour segmentation and analysis has thus gained much attention in recent years. However, the existing segmentation techniques do not meet the requirements of real-time use due to limitations posed by poor image quality and image complexity. This article proposes a hybrid approach for image segmentation by combining biorthogonal wavelet transform, skull stripping, fuzzy c-means threshold clustering, Canny edge detection, and morphological operations. Biorthogonal wavelet transform and skull stripping are essential pre-processing steps for analysis of brain images. Initially, biorthogonal wavelet transform is used to remove impulsive noise and skull stripping is employed to eliminate non-cerebral tissue regions from the acquired images, followed by segmentation using fuzzy c-means threshold clustering, Canny edge detection, and morphological processing. The performance of the proposed automated system is tested on standard datasets using performance measures such as Jaccard index, Dice similarity coefficient, execution time, and entropy. The proposed method achieves a Jaccard index and Dice similarity coefficient of 0.886 and 0.935, respectively, which indicate better overlap between the automated segmentation method and manual segmentation performed by an expert radiologist. The average execution time and average entropy values obtained are 1.001 s and 0.202, respectively. The results obtained are discussed in view of some reported studies in terms of execution time and tumour area. Further work is needed to evaluate the proposed method in routine clinical practice and its effect on radiologists’ performance.
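The two overlap measures reported here are deterministically related, which makes a handy sanity check when reading such results. A short sketch assuming boolean masks:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index |A n B| / |A u B| between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / max(union, 1)

def dice_from_jaccard(j):
    """Dice and Jaccard are monotonically related: D = 2J / (1 + J)."""
    return 2 * j / (1 + j)

print(round(dice_from_jaccard(0.886), 3))  # 0.94, consistent with the
# reported 0.935 (the small gap plausibly reflects averaging over cases
# rather than pooling all pixels)
```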
12

Uelker, B., Ian Cushman, T. Dudek, Dr Matthias Herkert, and Dr Cyril E. Geacintov. "High Sensitivity Hepcidin-25 Bioactive Elisas: Manual and Fully Automated System for the Quantification of Hepcidin-25 in Human Serum and Plasma". Blood 128, no. 22 (December 2, 2016): 4820. http://dx.doi.org/10.1182/blood.v128.22.4820.4820.

Abstract:
Since its discovery as the master regulator of iron homeostasis, hepcidin has been an exciting area of research with far-reaching clinical and therapeutic applications. The bioactive peptide Hepcidin-25 is generated predominantly in the liver by proteolytic cleavage of the C-terminal 25 amino acids of prohepcidin. Subsequent N-terminal processing of Hepcidin-25 results in smaller peptides of 20-24 amino acids that show greatly reduced activity and accumulate in the urine. Hepcidin regulates iron levels by downregulating ferroportin, the major cellular iron exporter in the membrane of macrophages, hepatocytes, and the basolateral side of enterocytes. Hepcidin-25 induces the internalization and degradation of ferroportin, resulting in increased intracellular iron stores, decreased dietary iron absorption, and decreased circulating iron concentrations. Hepatocellular hepcidin synthesis decreases under conditions of increased demand for circulating iron such as iron deficiency, hypoxia, anemia, and erythropoiesis. In contrast, hepcidin synthesis is induced by inflammation and infection. Measuring serum levels of Hepcidin-25 has proven to be valuable in identifying and differentiating specific disease conditions. Hepcidin deficiency causes hereditary hemochromatosis, characterized by body iron overload that may progress to liver cirrhosis. In addition, low Hepcidin-25 concentrations can be induced by iron-loading anemias and chronic hepatitis C. In contrast, high Hepcidin-25 levels have been found in iron-refractory iron-deficiency anemia, during infection, and in chronic kidney disease. Links between hepcidin dysregulation and cancer and other disease states have also been reported. This exciting area of biology has many potential important applications. To further advance the field of hepcidin research and clinical application, two new assays have been developed: the manual high-sensitivity DRG Hepcidin-25 assay and the fully automated Hepcidin-25 (bioactive) assay, which is available on the DRG Hybrid XL. Both assays are colorimetric solid-phase enzyme-linked immunosorbent assays (ELISA) based on the competitive binding of hepcidin in the sample and biotinylated hepcidin to immobilized anti-hepcidin antibody, followed by detection with a Streptavidin-HRP conjugate. The DRG Hepcidin-25 assay allows the quantitative determination of Hepcidin-25 covering a measuring range of 0.15–81.0 ng/mL, and the total assay time for the manual assay is 1.5 hours. Serum and plasma (EDTA, heparin, citrate) can be used for both the manual and automated assays. The analytical sensitivity of the assay is 0.153 ng/mL. The test shows good reproducibility, with an intra-assay precision of 6.97% and an inter-assay precision of 12.0%. The recovery was determined to be 97.3%. Linear dilution gave an excellent overall recovery of 97.9% (mean of 3 samples, each diluted 4-fold with dilution buffer; range 89.1–105.2%). We found no matrix interference with haemoglobin and triglycerides (up to 7.5 mg/mL). Inter-lot precision was 7.69% (mean of 3 samples measured with 3 lots in 6 determinations). The assay also shows a good correlation to mass spectrometry (y = 1.013x + 1.3865, r = 0.9678). Benefits of the new assay are a very straightforward procedure with ready-to-use reagents, no shaking, high sensitivity, and a short total assay time of 1.5 hours. In addition to the highly acclaimed manual assay, a new fully automated Hepcidin assay has been developed for the DRG Hybrid XL. The fully automated Hepcidin-25 assay time is 2 hours. The dynamic range is 1.67–81 ng/mL. Intra-assay precision is 2.6% and inter-assay precision is 13.7%. Recovery averages 100.6% and linearity 100.1%. Both assays are excellent tools for detecting bioactive Hepcidin-25 in human samples and advancing this exciting field of research. Disclosures: No relevant conflicts of interest to declare.
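The intra- and inter-assay precision figures quoted throughout are coefficients of variation (CV). A minimal sketch of that computation (the replicate values are invented for illustration):

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation (%), the usual basis for intra-/inter-assay
    precision figures like the 6.97% and 12.0% quoted above."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# toy replicate measurements of one control sample (ng/mL)
print(round(cv_percent([10.1, 9.4, 10.8, 9.9, 10.3]), 2))
```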
13

Weichsel, Loris, Alexander Giesen, Florian André, Matthias Renker, Stefan Baumann, Philipp Breitbart, Meinrad Beer et al. "Comparison of Two Contemporary Quantitative Atherosclerotic Plaque Assessment Tools for Coronary Computed Tomography Angiography: Single-Center Analysis and Multi-Center Patient Cohort Validation". Diagnostics 14, no. 2 (January 9, 2024): 154. http://dx.doi.org/10.3390/diagnostics14020154.

Abstract:
Background: Coronary computed tomography angiography (CCTA) provides non-invasive quantitative assessments of plaque burden and composition. The quantitative assessment of plaque components requires the use of analysis software that provides reproducible semi-automated plaque detection and analysis. However, commercially available plaque analysis software can vary widely in the degree of automation, resulting in differences in terms of reproducibility and time spent. Aim: To compare the reproducibility and time spent of two CCTA analysis software tools using different algorithms for the quantitative assessment of coronary plaque volumes and composition in two independent patient cohorts. Methods: The study population included 100 patients from two different cohorts: 50 patients from a single-center (Siemens Healthineers, SOMATOM Force (DSCT)) and another 50 patients from a multi-center study (5 different >64-slice CT scanner types). Quantitative measurements of total, calcified, and non-calcified plaque volume of the right coronary artery (RCA), left anterior descending (LAD), and left circumflex coronary artery (LCX) were performed on a total of 300 coronaries by two independent readers, using two different CCTA analysis software tools (Tool #1: Siemens Healthineers, syngo.via Frontier CT Coronary Plaque Analysis and Tool #2: Siemens Healthineers, successor CT Coronary Plaque Analysis prototype). In addition, the total time spent for the analysis was recorded with both programs. Results: The patients in Cohorts #1 and #2 were 62.8 ± 10.2 and 70.9 ± 11.7 years old, respectively; 10 (20.0%) and 35 (70.0%) were female; and 34 (68.0%) and 20 (40.0%), respectively, had hyperlipidemia. In Cohort #1, the inter- and intra-observer variabilities for the assessment of plaque volumes per patient for Tool #1 versus Tool #2 were 22.8%, 22.0%, and 26.0% versus 2.3%, 3.9%, and 2.5%, and 19.7%, 21.4%, and 22.1% versus 0.2%, 0.1%, and 0.3%, respectively, for total, non-calcified, and calcified lesions (p < 0.001 for all between Tools #1 and #2, both for inter- and intra-observer). The inter- and intra-observer variabilities using Tool #2 remained low at 2.9%, 2.7%, and 3.0% and 3.8%, 3.7%, and 4.0%, respectively, for total, non-calcified, and calcified lesions in Cohort #2. For each dataset, the median processing time was higher for Tool #1 versus Tool #2 (459.5 s, IQR = 348.0–627.0, versus 208.5 s, IQR = 198.0–216.0) (p < 0.001). Conclusion: The plaque analysis Tool #2 (CT-guided PCI), encompassing a higher degree of automated support, required less manual editing, was more time-efficient, and showed a higher intra- and inter-observer reproducibility for the quantitative assessment of plaque volumes both in a representative single-center and in a multi-center validation cohort.
14

Feld, Geoffrey K., Melvin Lye, Christoph Eberle, Ann Wang, and Chyan Ying Ke. "Semi and fully automated immunostaining sample preparation platforms improve live leukocyte recovery, reproducibility, and flow cytometry data quality". Journal of Immunology 208, no. 1_Supplement (May 1, 2022): 173.05. http://dx.doi.org/10.4049/jimmunol.208.supp.173.05.

Abstract:
Limited innovation in automated cell and organelle sample preparation methodology limits the effectiveness of modern analytical methods, such as single-cell ‘omics, flow and mass cytometry. These techniques traditionally rely on manual centrifugation-based protocols for cell washing and suspension preparation, hampering researchers’ access to the reproducibility and scalability benefits of automation. We have developed a suite of cell suspension preparation systems that enable semi- and full automation of cell washing protocols. These Laminar Wash™ technologies robustly, gently, and efficiently remove debris, dead cells, and unbound reagent using laminar flow and liquid handling robotics, rather than turbulent and harsh pelleting-plus-pipetting methods. Adaptation of standard protocols to Laminar Wash automation typically improves repetitive immunostaining processes and workflows, in terms of reduced hands-on time and inter- and intra-operator variability. We demonstrate the superior live cell retention and reproducibility of Laminar Wash over centrifugation in processing murine and humanized mouse peripheral blood mononuclear cells (PBMCs) and tumor infiltrating lymphocytes (TILs) for flow cytometry. Furthermore, we show how Laminar Wash improves flow cytometry data quality, in terms of debris removal and separation of immune cell subsets for both PBMCs and TILs. Overall, these results show how Laminar Wash methodology assists in standardizing sample preparation for cytometric analysis, an important and unmet need in cancer immunotherapy discovery and manufacturing workflows.
15

Crawford, Maleah. "THE PARALLEL BETWEEN ARTIFICIAL INTELLIGENCE AND NEUROSURGERY". International Journal of Engineering Applied Sciences and Technology 8, no. 4 (August 1, 2023): 163–70. http://dx.doi.org/10.33564/ijeast.2023.v08i04.022.

Abstract:
The prowess of neurosurgeons emerges from decades of holistic training centered on technical skills, phasic clinical data collection, decision making, care, and recovery. Modern research has opened a discussion regarding a remarkable surge in the importance of artificial intelligence (AI) in neurosurgery and its foundational procedures. AI has proven itself to be an invaluable asset in neurosurgery by complementing the aforementioned expertise of neurosurgeons to provide exceptional patient care throughout the pre-operative, intra-operative, and post-operative stages of neurosurgery, by enhancing diagnostic and prognostic outcomes in clinical treatment, and by mitigating the pressures associated with manual decision making taken on by neurosurgeons to improve patient outcomes. AI also contributes to the production, processing, and storage of clinical and experimental data and has been observed to decrease risks associated with neuroanaesthesiology. The implementation of AI in neurosurgery can globalize high-quality healthcare, with a central focus on developing nations, and thereby reduce the costs associated with surgical procedures. Additionally, mutualism between AI and neurosurgery can be facilitated, where both fields lead neurosurgeons and AI engineers into further exploration and more complex procedures, to possibly re-define traditional methods in AI and neurosurgery and ultimately solve healthcare-related global issues. The aim of this review is to explore the capacity of AI in aspects of neurosurgery and investigate the significance of its contributions with regard to the capabilities of human workers.
16

Ziliani, Matteo, Stephen Parkes, Ibrahim Hoteit, and Matthew McCabe. "Intra-Season Crop Height Variability at Commercial Farm Scales Using a Fixed-Wing UAV". Remote Sensing 10, no. 12 (December 11, 2018): 2007. http://dx.doi.org/10.3390/rs10122007.

Abstract:
Monitoring the development of vegetation height through time provides a key indicator of crop health and overall condition. Traditional manual approaches for monitoring crop height are generally time consuming, labor intensive and impractical for large-scale operations. Dynamic crop heights collected through the season allow for the identification of within-field problems at critical stages of the growth cycle, providing a mechanism for remedial action to be taken against end of season yield losses. With advances in unmanned aerial vehicle (UAV) technologies, routine monitoring of height is now feasible at any time throughout the growth cycle. To demonstrate this capability, five digital surface maps (DSM) were reconstructed from high-resolution RGB imagery collected over a field of maize during the course of a single growing season. The UAV retrievals were compared against LiDAR scans for the purpose of evaluating the derived point clouds’ capacity to capture ground surface variability and spatially variable crop height. A strong correlation was observed between structure-from-motion (SfM) derived heights and pixel-to-pixel comparison against LiDAR scan data for the intra-season bare-ground surface (R² = 0.77–0.99, rRMSE = 0.44%–0.85%), while there was reasonable agreement between canopy comparisons (R² = 0.57–0.65, rRMSE = 37%–50%). To examine the effect of resolution on retrieval accuracy and processing time, an evaluation of several ground sampling distances (GSD) was also performed. Our results indicate that a 10 cm resolution retrieval delivers a reliable product that provides a compromise between computational cost and spatial fidelity. Overall, UAV retrievals were able to accurately reproduce the observed spatial variability of crop heights within the maize field through the growing season and provide a valuable source of information with which to inform precision agricultural management in an operational context.
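The rRMSE values quoted above are relative root-mean-square errors. A small sketch of one common definition, normalizing by the mean of the LiDAR reference (the abstract does not state which normalization the authors used):

```python
import numpy as np

def rrmse_percent(predicted, observed):
    """Relative RMSE (%) of UAV-derived heights against a LiDAR reference,
    normalized by the mean of the reference values (one common convention)."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    return 100.0 * rmse / observed.mean()
```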
17

FitzGerald, Stephen P., John V. Lamont, Robert I. McConnell, and El O. Benchikh. "Development of a High-Throughput Automated Analyzer Using Biochip Array Technology". Clinical Chemistry 51, no. 7 (July 1, 2005): 1165–76. http://dx.doi.org/10.1373/clinchem.2005.049429.

Abstract:
Background: Use of protein array technology over conventional assay methods has advantages that include simultaneous detection of multiple analytes, reduction in sample and reagent volumes, and high output of test results. The susceptibility of ligands to denaturation, however, has impeded production of a stable, reproducible biochip platform, limiting most array assays to manual or, at most, semiautomated processing techniques. Such limitations may be overcome by novel biochip fabrication procedures. Methods: After selection of a suitable biochip substrate, biochip surfaces were chemically modified and assessed to enable optimization of biochip fabrication procedures for different test panels. The assay procedure was then automated on a dedicated instrument, and assay performance was determined for a panel of cytokine markers. Assay results were then compared with a commercial method for measurement of cytokine markers. Results: Secondary ion mass spectrometry and x-ray photoelectron spectroscopy demonstrated appropriate and reproducible modification of the biochip surface. Contact-angle studies also confirmed generation of hydrophobic surfaces that enabled containment of droplets for fabrication of discrete test regions. Automation of the biochip assays on a dedicated instrument produced excellent cytokine marker performance with intra- and interassay imprecision <10% for most analytes. Comparison studies showed good agreement with other methods (r = 0.95–0.99) for cytokines. Conclusion: Performance data from this automated biochip array analyzer provide evidence that it is now possible to produce stable and reproducible biochips for output of more than 2000 test results per hour.
18

Fu, Zhongtian, Kefei Song, Luping Zhou, and Yang Yang. "Noise-Aware Image Captioning with Progressively Exploring Mismatched Words". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12091–99. http://dx.doi.org/10.1609/aaai.v38i11.29097.

Abstract:
Image captioning aims to automatically generate captions for images by learning a cross-modal generator from vision to language. The large amount of image-text pairs required for training is usually sourced from the internet due to the manual cost, which brings in noise with mismatched relevance that affects the learning process. Unlike traditional noisy label learning, the key challenge in processing noisy image-text pairs is to finely identify the mismatched words to make the most use of trustworthy information in the text, rather than coarsely weighing the entire examples. To tackle this challenge, we propose a Noise-aware Image Captioning method (NIC) to adaptively mitigate the erroneous guidance from noise by progressively exploring mismatched words. Specifically, NIC first identifies mismatched words by quantifying word-label reliability from two aspects: 1) inter-modal representativeness, which measures the significance of the current word by assessing cross-modal correlation via prediction certainty; 2) intra-modal informativeness, which amplifies the effect of the current prediction by combining the quality of subsequent word generation. During optimization, NIC constructs pseudo-word-labels considering the reliability of the original word-labels and model convergence to periodically coordinate mismatched words. As a result, NIC can effectively exploit both clean and noisy image-text pairs to learn a more robust mapping function. Extensive experiments conducted on the MS-COCO and Conceptual Caption datasets validate the effectiveness of our method in various noisy scenarios.
19

Hempel, Gerd, Stefan Miltenyi, and Volker Huppert. "Next Level of Automated Cell Processing System for Flexible Manufacturing of Cellular Products." Blood 116, no. 21 (November 19, 2010): 1186. http://dx.doi.org/10.1182/blood.v116.21.1186.1186.

Abstract:
Introduction: Currently, manufacturing of cellular products for cellular therapies is done manually or semi-automated. To make cellular therapies applicable for routine use, a standardized production of cellular therapeutic agents is necessary. Therefore, closed and highly automated manufacturing procedures are required. Solution: A new integrated cell processing device has been developed to automate and standardize the manufacturing process of cellular therapeutic agents and to handle several cell handling procedures in a fully automated and unified way. These procedures are filtering, centrifugation, temperature-controlled centrifugation, magnetic separation, and cell culture. A functionally closed tubing system was developed to allow the application of specific combinations of the manufacturing steps listed. A new type of centrifugation chamber was designed to enable in-process liquid exchange and cell fractionation. Integrated ports allow controlled adding and removal of liquid during the centrifugation process. Our results show that in this chamber erythrocytes, mononuclear cells, and plasma fractions can be obtained directly from bone marrow or apheresis products by a density gradient separation process. To enable temperature-sensitive process steps, components to adjust and control the temperature in the system have been developed. This newly developed heat exchange cartridge allows temperature changes from 4°C to 42°C within minutes. For magnetic separation of specifically labeled cell populations, a controllable magnetic field has been integrated into the device, which allows automated separation of magnetically labeled cells. Adult stem/progenitor cells, antigen-specific T cells, T cell subsets, blood dendritic cells, and NK cells have been isolated with a performance (purity, yield) comparable to semi-automated reference systems. Also, a new type of cell culture compartment has been designed, allowing automated cultivation and/or differentiation of cells directly in a closed system. During cultivation, the cell culture compartment allows temperature and atmosphere control, media exchange, adding of stimulation or differentiation agents, and visual inspection of the cells by an integrated microscope camera. Cell lines (K562, OKT3 hybridoma, CHO) and human primary T cells have been successfully cultivated in this system, with growth curves comparable to cultivation in standard cell culture flasks. To control and automate all integrated cell handling procedures, a central controller has been developed, which also documents all process parameters. Comparative studies of different cell handling procedures show significantly lower inter- and intra-process variations in comparison to manual or semi-automated reference cell manufacturing systems. Conclusions: This integrated cell processing device can handle all current technical requirements for manufacturing cellular therapeutic agents through the complete automation of process steps in a functionally closed environment. This leads to automated, standardized cell manufacturing processes, enabling innovative cellular therapies for routine use. Disclosures: Hempel: Miltenyi Biotec GmbH: Employment. Miltenyi: Miltenyi Biotec GmbH: Membership on an entity's Board of Directors or advisory committees. Huppert: Miltenyi Biotec GmbH: Employment.
20

Peyrin-Biroulet, L., S. Adsul, J. Dehmeshki, and O. Kubassova. "DOP58 An artificial intelligence–driven scoring system to measure histological disease activity in Ulcerative Colitis". Journal of Crohn's and Colitis 16, Supplement_1 (January 1, 2022): i105. http://dx.doi.org/10.1093/ecco-jcc/jjab232.097.

Abstract:
Background: Histological remission is increasingly regarded as an important and deep therapeutic target for ulcerative colitis (UC). Assessment and scoring of histological images is a tedious procedure that can be imprecise and prone to inter- and intra-observer variability. Therefore, a need exists for an automated method that is accurate, reproducible, and reliable. This study aimed to investigate whether an artificial intelligence (AI) system developed using image processing and machine learning algorithms could measure histological disease activity based on the Nancy index. Methods: A total of 200 histological images of patients with UC from a database at University Hospital, Vandoeuvre-lès-Nancy, France were used for this study. The novel AI system was used to fully characterise histological images and automatically measure the Nancy index. The in-house AI algorithm was developed using state-of-the-art image processing and machine learning algorithms based on deep learning and feature extraction. The cell regions of each image, followed by the Nancy index, were manually annotated and measured independently by 3 histopathologists. Manual and AI-automated measurements of the Nancy index score were done and assessed using the intraclass correlation coefficient (ICC). Results: The 200-image dataset was divided into 2 groups (80% was used for training and 20% for testing). ICC statistical analyses were performed to evaluate the AI tool and used as a reference to calculate the accuracy (Table 1). The average ICC amongst the histopathologists was 89.33, and the average ICC between histopathologists and the AI tool was 87.20. Despite the small amount of image data, the AI tool was found to be highly correlated with the histopathologists. Conclusion: The high agreement of the AI method with the histopathologists suggests promising potential for IBD clinical applications. A standardised and validated histological AI-driven scoring system could potentially be used in daily IBD practice to eliminate the subjectivity of pathologists and assess disease severity for treatment decisions.
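The agreement statistic used here, the intraclass correlation coefficient (ICC), can be reproduced from a rater-by-image table. A sketch using the pingouin library, with invented toy scores; the specific ICC variant the authors used is not stated in the abstract:

```python
import numpy as np
import pandas as pd
import pingouin as pg

# toy table: each of 8 images scored by 3 raters (Nancy index, 0-4)
df = pd.DataFrame({
    "image": np.repeat(range(1, 9), 3),
    "rater": ["A", "B", "C"] * 8,
    "score": [0,0,1, 2,2,2, 4,3,4, 1,1,1, 3,3,2, 0,0,0, 2,1,2, 4,4,4],
})

# pingouin reports all six ICC variants from one two-way layout
icc = pg.intraclass_corr(data=df, targets="image", raters="rater",
                         ratings="score")
print(icc[["Type", "ICC"]])
```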
21

Edlin, Joy, Justin Nowell, Christopher Arthurs, Alberto Figueroa, and Marjan Jahangiri. "Assessing the methodology used to study the ascending aorta haemodynamics in bicuspid aortic valve". European Heart Journal - Digital Health 2, no. 2 (June 1, 2021): 271–78. http://dx.doi.org/10.1093/ehjdh/ztab022.

Abstract:
Aims: Modern imaging techniques provide evermore-detailed anatomical and physiological information for use in computational fluid dynamics to predict the behaviour of physiological phenomena. Computer modelling can help plan suitable interventions. Our group used magnetic resonance imaging and computational fluid dynamics to study the haemodynamic variables in the ascending aorta in patients with bicuspid aortic valve before and after isolated tissue aortic valve replacement. Computer modelling requires turning a physiological model into a mathematical one, solvable by equations that undergo multiple iterations in four dimensions. Creating these models involves several steps with manual inputs, making the process prone to errors and limiting its inter- and intra-operator reproducibility. Despite these challenges, we created computational models for each patient to study ascending aorta blood flow before and after surgery. Methods and results: Magnetic resonance imaging provided the anatomical and velocity data required for the blood flow simulation. Patient-specific in- and outflow boundary conditions were used for the computational fluid dynamics analysis. Haemodynamic variables pertaining to blood flow pattern and derived from the magnetic resonance imaging data were calculated. However, we encountered problems in our multi-step methodology, most notably processing the flow data. This meant that other variables requiring computation with computational fluid dynamics could not be calculated. Conclusion: Creating a model for computational fluid dynamics analysis is as complex as the physiology under scrutiny. We discuss some of the difficulties associated with creating such models, along with suggestions for improvements in order to yield reliable and beneficial results.
22

Rich, Ivan N., Karren Hall, and Stephen H. Bartelmez. "Quality Control of Cell Processing and Cord Blood Storage Using HALO To Measure Proliferative Potential of Stem and Progenitor Cells." Blood 104, no. 11 (November 16, 2004): 5016. http://dx.doi.org/10.1182/blood.v104.11.5016.5016.

Abstract:
Prior to the introduction of flow cytometry to detect and measure CD34+ cells in the early 1980s, quality control of stem cell transplants was performed using the granulocyte-macrophage colony-forming assay (GM-CFC). The major disadvantages of this growth potential assay are that progenitor and not stem cells are detected and the assay takes too long to perform. Yet, measurement of the growth potential of proliferating cell populations in vitro is the only technique that guarantees the presence of stem cells after cell processing and may predict engraftment and reconstitution. There are presently no assays that exhibit these characteristics in real time. However, it is now possible to measure the proliferative potential of stem and progenitor cells after cell processing and/or storage in half the time required by the classical colony-forming assay (CFA). Originally developed for hemotoxicity testing for all stages of drug development, HALO (Hematopoietic/Hemotoxicity Assays via Luminescence Output) can be used as an in-line stem cell processing quality control procedure as well as a method to ensure the growth potential of processed and stored umbilical cord blood. HALO sensitively measures small numbers of proliferating cells, and because cell differentiation is not detected, manual enumeration of colonies is not required. The assay is based on the CFA procedure. However, as cells proliferate in response to growth factors, there is a proportional increase in the intracellular ATP concentration. The assay is performed in a 96-well plate, and release of intracellular ATP drives a luciferin/luciferase reaction to produce bioluminescence in the form of light measured in a plate luminometer. The kinetics of most hematopoietic proliferating populations, including mature stem cells, allows results to be obtained after 6–7 days in culture. Measurement of luminescence for all 96 well cultures takes about 5 minutes. An ATP standard dose response performed prior to sample measurement not only provides quality control of the reagents used, but also intra- and interlaboratory comparison of results. Due to the rapid and easy set-up procedure with high-throughput capability, the proliferative capacity of several stem and progenitor cell populations from either single or large numbers of samples from bone marrow, peripheral blood or cord blood can be analyzed simultaneously. Rapid assay turnaround and multiparameter capability should therefore allow an increased safety factor for patients undergoing stem cell transplantation. Additional studies are underway to further validate this approach.
23

Lye, Melvin, Christoph Eberle, Ann Wang, Geoffrey K. Feld, and Namyong Kim. "Abstract 1885: Semi and fully automated immunostaining sample preparation platforms improve live leukocyte recovery, reproducibility, and cytometry data quality". Cancer Research 82, no. 12_Supplement (June 15, 2022): 1885. http://dx.doi.org/10.1158/1538-7445.am2022-1885.

Abstract:
Limited innovation in automated cell and organelle sample preparation methodology limits the effectiveness of modern analytical methods, such as single-cell ‘omics, flow and mass cytometry. These techniques traditionally rely on manual centrifugation-based protocols for cell washing and suspension preparation, hampering researchers’ access to the reproducibility and scalability benefits of automation. We have developed a suite of cell suspension preparation systems that enable semi- and full automation of cell washing protocols. These Laminar Wash™ technologies robustly, gently, and efficiently remove debris, dead cells, and unbound reagent using laminar flow and liquid handling robotics, rather than turbulent and harsh pelleting-plus-pipetting methods. Adaptation of standard protocols to Laminar Wash automation typically improves repetitive immunostaining processes and workflows, in terms of reduced hands-on time and inter- and intra-operator variability. We demonstrate the superior live cell retention and reproducibility of Laminar Wash over centrifugation in processing murine and humanized mouse peripheral blood mononuclear cells (PBMCs) and tumor infiltrating lymphocytes (TILs) for flow cytometry. Furthermore, we show how Laminar Wash improves flow cytometry data quality, in terms of debris removal and separation of immune cell subsets for both PBMCs and TILs. Overall, these results show how Laminar Wash methodology assists in standardizing sample preparation for cytometric analysis, an important and unmet need in cancer immunotherapy discovery and manufacturing workflows. Citation Format: Melvin Lye, Christoph Eberle, Ann Wang, Geoffrey K. Feld, Namyong Kim. Semi and fully automated immunostaining sample preparation platforms improve live leukocyte recovery, reproducibility, and cytometry data quality [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 1885.
24

Wilkinson, Julie, Cecilia Smith, Sybil D’Costa, and Enrique Rabellino. "Standardization of Functional Immune Cell-Based Assays: An Integral Aspect to Vaccine and Biologic Development and Validation." Blood 106, no. 11 (November 16, 2005): 3933. http://dx.doi.org/10.1182/blood.v106.11.3933.3933.

Abstract:
The utility of the ex vivo evaluation of immune cell functionality in the context of (a) determining an efficacious vaccine strategy for infectious diseases/cancer, (b) determining a tolerance profile in autoimmunity and transplantation, and (c) understanding the basic mechanisms of immune cell responses in disease pathogenesis is well recognized. However, the benefit of these assays as surrogate markers of immune cell activity in vivo has not been fully realized due to the variable nature of these in vitro assays, which is particularly pronounced in T cell functional assays. This variability arises from a variety of factors ranging from choice of assay, source of the cells, the sample processing methodology (isolation, freezing, thawing, and culturing), and sample staining protocol for the chosen assay, to data analysis and data reduction. With a view to reducing variability and standardizing targeted steps of T cell functional assays, an automated methodology for simultaneous staining and analysis of multiple intracellular cytokines and cytotoxicity markers via flow cytometry was developed and validated. A 5-color flow cytometry assay (2–3 surface markers; 2 intracellular markers) was developed to characterize the restricted polyclonal (SEB/CD28) and antigen-specific (CEF peptide pool) cytokine and cytotoxic profile response in human PBMCs. A modification to available sample preparation instruments was performed that enabled the automated pipetting, incubation, and staining of intracellular and surface molecules of stimulated human peripheral blood mononuclear cell populations (PBMC) for flow cytometric analysis. Statistically significant reductions in both inter- and intra-assay variability were observed with the automated methodology as compared to the manual assay, with improvements in CVs for positive cell numbers and mean fluorescence intensity. For example, the inter-assay CVs for IFNg cytokine-producing CD4+ T cell populations improved from approximately 15 to 5, while the mean fluorescence intensity improved nearly 5-fold with automation. Importantly, the automated methodology furnished comparable responses in percent positive cytokine/cytotoxicity profiles as compared to the manual method while reducing the hands-on sample preparation and analysis time from 2 hours to 20 minutes. With the standardization of functional assays, other sources of variability in assay results can now be addressed specifically, e.g., specimen handling, freezing, thawing, culturing, or biological variation. Standardized multiparametric functional profiling of the cells thus reveals the complex nature of the immune response and lends credence to their use as surrogate markers of efficacy and functionality.
25

BOUMAZA, Karima, Abdelhamid Loukil, and Kaouthar Aarizou. "Automatic Human Sperm Concentrartion in microscopic videos". Medical Technologies Journal 2, no. 4 (January 5, 2019): 301–7. http://dx.doi.org/10.26415/2572-004x-vol2iss4p301-307.

Abstract:
Background: Human sperm cell counting analysis is of significant interest to biologists studying sperm function and to medical practitioners evaluating male infertility. Currently this assessment is done manually by looking at the sperm samples through a phase-contrast microscope, using expert knowledge to make a subjective judgement of quality. Aims: To eliminate the subjectivity and error-proneness of manual semen analysis and to avoid inter- and intra-laboratory inconsistencies in semen analysis test results. Methods: In this paper we introduce a technique for human sperm concentration assessment. Its principle is based on the execution of three steps. The first step is unavoidable: it concerns the pretreatment of the human sperm microscopic videos, which consists of a conversion of the RGB color space into the YCbCr space, Gaussian filtering, and discrete wavelet filtering. The second step is devoted to the segmentation of the image into two classes, spermatozoa and the background. To achieve this, we used an edge detection technique, the Sobel contour detector. The third step is to separate true sperm from false ones. It uses a decision-tree machine learning technique that performs two-class classification based on invariant characteristics, namely the dimensions of the bounding ellipse of the sperm head as well as its surface area. Results: To test the robustness of our system, we compared our results with those obtained manually by andrologists. After analyzing the results, we can conclude that our system brings a real improvement in precision as well as processing time, which may make it useful for groups who intend to design new CASA systems. Conclusion: In this study, we designed and implemented a system for automatic concentration assessment based on a machine learning method and image processing techniques.
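Several stages of the described pipeline map directly onto standard OpenCV calls. A rough sketch of the pretreatment, edge-detection, and ellipse-feature steps (kernel sizes and thresholding choices are assumptions, and the wavelet-filtering and decision-tree stages are omitted):

```python
import cv2

def candidate_head_features(frame_bgr):
    """YCbCr conversion, Gaussian smoothing, Sobel edges, then ellipse/area
    features per candidate region, loosely following the abstract."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    y = cv2.GaussianBlur(ycrcb[:, :, 0], (5, 5), 0)       # luma channel
    gx = cv2.Sobel(y, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(y, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))    # gradient magnitude
    _, binary = cv2.threshold(edges, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    feats = []
    for c in contours:
        if len(c) >= 5:                       # fitEllipse needs >= 5 points
            (_, _), (major, minor), _ = cv2.fitEllipse(c)
            feats.append((major, minor, cv2.contourArea(c)))
    return feats  # inputs to a two-class (sperm / non-sperm) classifier
```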
26

Militello, Carmelo, Leonardo Rundo, Mariangela Dimarco, Alessia Orlando, Ildebrando D'Angelo, Vincenzo Conti, and Tommaso Vincenzo Bartolotta. "Robustness Analysis of DCE-MRI-Derived Radiomic Features in Breast Masses: Assessing Quantization Levels and Segmentation Agreement". Applied Sciences 12, no. 11 (May 29, 2022): 5512. http://dx.doi.org/10.3390/app12115512.

Full text
Abstract
Machine learning models based on radiomic features allow us to obtain biomarkers that can model disease and support the clinical routine. Recent studies have shown that it is fundamental that the computed features be robust and reproducible. Although several initiatives to standardize the definition and extraction process of biomarkers are ongoing, there is a lack of comprehensive guidelines; therefore, no standardized procedures are available for ROI selection, feature extraction, and processing, with the risk of undermining the effective use of radiomic models in clinical routine. In this study, we aim to assess the impact that different segmentation methods and the quantization level (defined by means of the number of bins used in the feature-extraction phase) may have on the robustness of radiomic features. In particular, the robustness of texture features extracted by PyRadiomics, belonging to five categories (GLCM, GLRLM, GLSZM, GLDM, and NGTDM), was evaluated using the intra-class correlation coefficient (ICC) and mean differences between segmentation raters. In addition to the robustness of each single feature, an overall index for each feature category was quantified. The analysis showed that the level of quantization (i.e., the 'binCount' parameter) plays a key role in defining robust features: in fact, in our study focused on a dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) dataset of 111 breast masses, sets with cardinality varying between 34 and 43 robust features were obtained with 'binCount' values equal to 256 and 32, respectively. Moreover, both manual segmentation methods demonstrated good reliability and agreement, while automated segmentation achieved lower ICC values. Considering the dependence on the quantization level, taking into account only the intersection subset among all the values of 'binCount' could be the best selection strategy. Among the radiomic feature categories, GLCM, GLRLM, and GLDM showed the best overall robustness with varying segmentation methods.
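
For readers unfamiliar with the quantization setting discussed here, the sketch below shows how a fixed bin count is passed to PyRadiomics when extracting the five texture families; the file names are placeholders, and the ICC analysis itself is omitted:

    from radiomics import featureextractor

    # Fixed-bin-number discretization: 'binCount' controls the quantization level.
    for bin_count in (32, 64, 128, 256):
        extractor = featureextractor.RadiomicsFeatureExtractor(binCount=bin_count)
        extractor.disableAllFeatures()
        # Texture families whose robustness was studied:
        for cls in ("glcm", "glrlm", "glszm", "gldm", "ngtdm"):
            extractor.enableFeatureClassByName(cls)
        features = extractor.execute("dce_mri_image.nrrd", "lesion_mask.nrrd")
        print(bin_count, len(features))
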
27

Nouman Noor, Muhammad, Muhammad Nazir, Sajid Ali Khan, Oh-Young Song, and Imran Ashraf. "Efficient Gastrointestinal Disease Classification Using Pretrained Deep Convolutional Neural Network". Electronics 12, no. 7 (March 26, 2023): 1557. http://dx.doi.org/10.3390/electronics12071557.

Full text
Abstract
Gastrointestinal (GI) tract diseases are on the rise in the world. These diseases can have fatal consequences if not diagnosed in the initial stages. WCE (wireless capsule endoscopy) is the advanced technology used to inspect gastrointestinal diseases such as ulcerative colitis, polyps, esophagitis, and ulcers. WCE produces thousands of frames for a single patient's procedure, for which manual examination is tiresome, time-consuming, and prone to error; therefore, an automated procedure is needed. WCE images suffer from low contrast, which increases inter-class and intra-class similarity and reduces the anticipated performance. In this paper, an efficient GI tract disease classification technique is proposed which utilizes an optimized brightness-controlled contrast-enhancement method to improve the contrast of the WCE images. The proposed technique applies a genetic algorithm (GA) for adjusting the values of contrast and brightness within an image by modifying the fitness function, which improves the overall quality of WCE images. This quality improvement is reported using quantitative measures, such as peak signal-to-noise ratio (PSNR), mean square error (MSE), visual information fidelity (VIF), similarity index (SI), and information quality index (IQI). As a second step, data augmentation is performed on WCE images by applying multiple transformations, and then transfer learning is used to fine-tune a modified pre-trained model on WCE images. Finally, for the classification of GI tract disease, the extracted features are passed through multiple machine-learning classifiers. To show the efficacy of the proposed technique in improving classification performance, the results are reported for the original dataset as well as the contrast-enhanced dataset. The results show an overall improvement of 15.26% in accuracy, 13.3% in precision, 16.77% in recall rate, and 15.18% in F-measure. Finally, a comparison with the existing techniques shows that the proposed framework outperforms the state-of-the-art techniques.
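
The paper's actual fitness function is not reproduced here, but a toy genetic-algorithm loop that tunes a linear brightness/contrast mapping (out = alpha*in + beta) against an image-quality score could be sketched as follows; the population size, mutation scale, and entropy-based fitness are all assumptions:

    import cv2
    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(img_u8):
        """Assumed fitness: histogram entropy as a proxy for contrast quality."""
        hist = cv2.calcHist([img_u8], [0], None, [256], [0, 256]).ravel()
        p = hist / hist.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    img = cv2.imread("wce_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder frame
    pop = rng.uniform([0.5, -40.0], [2.5, 40.0], size=(20, 2))  # (alpha, beta) genomes

    for generation in range(30):
        scores = np.array([fitness(cv2.convertScaleAbs(img, alpha=a, beta=b))
                           for a, b in pop])
        elite = pop[np.argsort(scores)[-10:]]                 # keep the best half
        children = elite[rng.integers(0, 10, size=10)] \
            + rng.normal(0, [0.05, 2.0], (10, 2))             # mutate copies of elites
        pop = np.vstack([elite, children])

    final_scores = [fitness(cv2.convertScaleAbs(img, alpha=a, beta=b)) for a, b in pop]
    best_alpha, best_beta = pop[int(np.argmax(final_scores))]
    enhanced = cv2.convertScaleAbs(img, alpha=best_alpha, beta=best_beta)
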
28

Canu, Daniela, Chara Ioannou, Katarina Müller, Matthias Martin, Stephan Bender, Monica Biscaldi, Christian Fleischhaker, Nikos Smyrnis, and Christoph Klein. "M79. COMPONENTS OF VISUAL SEARCH IN EARLY-ONSET SCHIZOPHRENIA, ADHD AND ASD: AN EYE TRACKING STUDY". Schizophrenia Bulletin 46, Supplement_1 (April 2020): S164–S165. http://dx.doi.org/10.1093/schbul/sbaa030.391.

Full text
Abstract
Abstract Background: Superior visual search is a replicated finding in the literature on Autism Spectrum Disorder (ASD). Conversely, results from the literature on Attention-Deficit/Hyperactivity Disorder (ADHD) are more mixed, with some studies showing typical performance and others pointing out less efficient serial search in ADHD. Finally, most studies on visual search in Schizophrenia highlighted deficits in focal attentional processing. However, similarities between attentional impairments in the three clinical groups have also been reported. The primary goal of our study is to determine the diagnostic specificity of search deficits. To our knowledge, the literature systematically comparing visual search performance in patients with ADHD and Schizophrenia is limited to one study, while no study has so far included an ASD group in the comparison. Methods: Four groups of young adults, namely 29 typically developing (TD; 19.8±1.6, 41% males), 26 with ASD (19.7±1.9, 96% males), 28 with ADHD (19.9±1.4, 54% males) and 21 with Early-Onset Schizophrenia (SCZ; 19.7±1.7, 71% males), were presented with a visual search task in which they had to quickly detect a target item among several similar-to-target distractor items. Eye movements were recorded binocularly with the EyeLink 1000+ system. Participants were matched on age and full-scale IQ. The ocular-motor behaviour was analysed using Data Viewer 3.2 and SPSS 2.3. Results: Initiation of search – the latency of the first saccade on the search grid – was typical in all participants except those with SCZ, who had significantly higher intra-subject variability (ISV) than both TD and ADHD, but no delay in initiating search. Within search, ASD manifested significantly reduced mean and ISV of total search duration – between the first saccade on the grid and the last fixation on target – and of the first part of search – between the first saccade on the grid and the first fixation on target – in comparison with all other groups, including TD. Conversely, SCZ and ASD were significantly more variable than TD and ADHD regarding the duration of the first fixation on target, while also being, to a lesser extent, slower than ADHD, but not than TD. Additionally, SCZ needed a higher frequency of fixations on target than ASD, but not compared to ADHD or TD, before making a decision. In the post-search phase – between the onset of the last fixation on target and the button press on the keyboard – SCZ were the slowest and most variable group, followed by ASD and ADHD, who differed non-significantly from controls. The overall search performance – between trial onset and button press – resulted in typical manual mean RT in ASD and ADHD, while being atypically longer in SCZ compared to all other groups. Compared to TD, ISV was at par in ADHD, lower in ASD, and higher in SCZ. Discussion: Results suggest that the ability to extract individual targets is intact in ASD and ADHD. However, ASD show only a bias toward local information, as indicated by more variable first fixation duration, despite intact global processing. By contrast, lower search efficiency in SCZ might be explained by both (a) abnormal global processing due to impairment in the guidance mechanisms that affect the time until the first fixation on target, and (b) a deficit in central discrimination, with resulting difficulties in extracting critical features of the target.
29

Kjaer, Magnus Ruud, Umaer Hanif, Andreas Brink-Kjaer, and Emmanuel Mignot. "0512 ABED: Automatic Sleep-Disordered Breathing Event Detection". SLEEP 47, Supplement_1 (April 20, 2024): A220. http://dx.doi.org/10.1093/sleep/zsae067.0512.

Full text
Abstract
Abstract Introduction: Sleep disordered breathing (SDB) affects more than 10% of the global population and is associated with cardiovascular diseases. Manual annotation of SDB events is time-consuming and suffers from inter-rater variance and bias. We propose Automatic Sleep-Disordered Breathing Event Detection (ABED), a method capable of localizing and classifying obstructive apneas (OA), central apneas (CA), and hypopneas (HYPO). Methods: ABED was trained on 5,456 PSGs and evaluated on 1,099 PSGs from 4 different cohorts. The respiratory modalities included measures of nasal flow, oral thermistor, thoracoabdominal plethysmography, and blood oxygen saturation. These were passed directly as input to ABED. EEG and EOG were passed through another automated detector to ascertain arousal and wake probabilities, which were also passed as input. ABED uses an end-to-end deep learning architecture based on the state-of-the-art object detection method Unified Real-Time Object Detection, with a ResNet for feature extraction and bi-directional long short-term memory layers for temporal processing. ABED was trained on 240-second windows, and apnea classes were balanced by sampling the less prevalent classes (OA, CA) multiple times during training. ABED was optimized based on event class (cross-entropy loss) and position (Huber loss), weighted towards the class term. At inference, non-maximum suppression was used to remove overlapping predictions. Results: The apnea-hypopnea index (AHI) values based on ABED's predictions correlate well with experts (R² = 0.86; β₀ = 0.95, β₁ = 3.29) and predict sleep apnea diagnosis (AHI ≥ 15) and severity (no, mild, moderate, severe) with a mean accuracy of 89% and 76%, respectively. ABED was able to predict the central apnea index (CAI) with a high correlation (R² = 0.82; β₀ = 1.16, β₁ = 0.51). When inferring individual events, ABED achieved an overall F1 score of 0.78 in distinguishing between the presence of an event and regular breathing. For specific event types, the F1 scores were 0.72 for OA, 0.51 for CA, and 0.66 for HYPO. Conclusion: ABED was trained and tested on a multi-cohort dataset and demonstrates generalizing properties across intra-cohort variability along with consistent scoring, making ABED suitable as support for medical experts in SDB annotation. Overall apnea event detection performance demonstrates that ABED can discern between apneic events and regular breathing. Additional work will involve integration of additional cohorts.
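
Non-maximum suppression over temporal event proposals, mentioned in the Methods, can be illustrated with a short stand-alone Python function; the interval format (start, end, score) and the overlap threshold are assumptions, not ABED's actual settings:

    def nms_1d(events, iou_thresh=0.5):
        """Greedy 1-D non-maximum suppression.
        events: list of (start_sec, end_sec, score); returns the kept events."""
        events = sorted(events, key=lambda e: e[2], reverse=True)  # best score first
        kept = []
        for s, e, score in events:
            suppress = False
            for ks, ke, _ in kept:
                inter = max(0.0, min(e, ke) - max(s, ks))
                union = (e - s) + (ke - ks) - inter
                if union > 0 and inter / union > iou_thresh:
                    suppress = True
                    break
            if not suppress:
                kept.append((s, e, score))
        return kept

    # Overlapping apnea proposals; the two high-scoring ones survive.
    print(nms_1d([(10, 25, 0.9), (12, 26, 0.6), (40, 55, 0.8)]))
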
30

Mytnyk, T. H., V. O. Manukalo, O. V. Dubrovina, and O. O. Mytnyk. "ON THE ISSUE OF SPECIFYING RELIABILITY CRITERIA USED IN AUTOMATED SPACE-TIME CONTROL OF METEOROLOGICAL OBSERVATION DATA". Hydrology, hydrochemistry and hydroecology, no. 3 (69) (2023): 51–60. http://dx.doi.org/10.17721/2306-5680.2023.3.4.

Full text
Abstract
In order to ensure the reliability of data on the state of the surface layer of the atmosphere, meteorological observations conducted on the network of the Hydrometeorological Service of Ukraine are subject to regular multi-level technical and critical control. The entire set of errors in meteorological measurement data can be divided, according to the nature of their manifestation, into two groups: 1) random errors, associated with instantaneous pulsations of the measured meteorological variables and their distortion during further collection and processing; 2) systematic errors, which depend on defects in instrument design and technical condition, as well as errors caused by violations of measurement technique and the influence of local features on the measured meteorological variables. In order to separate measurement errors from the manifestation of natural features of weather processes, all observational data coming from meteorological stations are subject to regular critical control, during which their reliability, that is, their suitability for further use, is established. There are two main types of critical control of measurement data: intra-station and spatio-temporal. Intra-station control is based on an analysis of the values of interrelated meteorological variables measured at one station; this is done at hydrometeorological stations with the help of special computer programs. Spatio-temporal control consists in comparing values, averaged over a certain period of time (decade, month), of the spatial fields of meteorological variables around the station whose data are being checked. Local distortions of the field of meteorological values under the influence of local natural factors can be partially eliminated if we compare not the values of the meteorological variables themselves, but their deviations from the climatic norm. Carrying out manual spatio-temporal control is long and routine work, especially when it comes to analyzing changes over time in discrepancies in the observational data. This made it necessary to conduct a study on improving the method of automated spatio-temporal control of meteorological observation data, and to present the obtained results in the form of a normative document to be implemented in the activity of the hydrometeorological organizations of the State Emergency Service of Ukraine. Such research was carried out at the Ukrainian Hydrometeorological Institute in 2020-2022. The purpose of the publication is to present the scientific and methodological foundations of the development and the main provisions of the normative document "Clarification of reliability criteria used during automated spatio-temporal control of data from meteorological observations of stations. Methodical recommendations", as well as the computer program "MeteoControl", which forms a database of discrepancies and calculates the statistical characteristics needed for analyzing the array of discrepancies and deciding whether it is expedient to change the reliability criteria.
The developed regulatory document provides a number of recommendations for specifying the reliability criteria of meteorological observation data, in particular: 1) specification of the reliability criteria for all meteorological variables determined on the network of stations should be carried out at least once every 5 years; 2) before clarifying the reliability criteria, it is necessary to assess the extent to which the current reliability criteria flag the data, and to establish how many stations, and how often, have in recent years shown discrepancies exceeding the current reliability criterion; 3) before making a decision to change a reliability criterion for all stations, it is advisable to find out how the introduction of the new reliability criterion would change the number of stations requiring additional analysis or, vice versa, the number whose data would be considered reliable despite dubious or erroneous values.
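
A toy numerical illustration of the deviation-from-norm idea described above (not the MeteoControl program itself): compare a checked station's monthly-mean anomaly with the median anomaly of its neighbours, and flag the value if the discrepancy exceeds a chosen reliability criterion. All numbers are hypothetical:

    import numpy as np

    # Monthly-mean air temperature (°C) at the checked station and its neighbours,
    # plus the corresponding climatic norms for that month.
    station_value, station_norm = -3.8, -6.0
    neighbour_values = np.array([-5.2, -4.9, -5.6, -5.0])
    neighbour_norms = np.array([-6.9, -6.5, -7.4, -6.7])

    # Work with deviations from the climatic norm to damp local field distortions.
    station_anomaly = station_value - station_norm
    field_anomaly = np.median(neighbour_values - neighbour_norms)

    discrepancy = station_anomaly - field_anomaly
    CRITERION = 2.0   # assumed reliability criterion, °C
    if abs(discrepancy) > CRITERION:
        print(f"flag for manual review: discrepancy {discrepancy:+.1f} °C")
    else:
        print(f"accepted: discrepancy {discrepancy:+.1f} °C")
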
31

Amd, Micah. "Intra-group differences in skin tone influence evaluative and perceptual face processing". PLOS ONE 19, no. 1 (January 2, 2024): e0296172. http://dx.doi.org/10.1371/journal.pone.0296172.

Full text
Abstract
In an exploration of colorist biases across native Melanesian participants, we employed a multi-method approach across three studies to examine evaluative and perceptual processing of 'lighter' and 'darker' non-Melanesian facial targets controlled for attractiveness, sex, and ethnicity. In Study 1, 305 participants evaluated facial attractiveness using surveys. In Study 2, 153 participants alternately mapped lighter and darker faces with positive and neutral attributes across brief Implicit Association Tests. In Study 3, 61 participants underwent a manual sorting task followed by a 'breaking' continuous flash suppression (b-CFS) paradigm to probe 'non-conscious' perceptual biases. Across evaluative measures, male and female respondents consistently preferred lighter-skinned, highly attractive male faces. During b-CFS, lighter and attractive opposite-sex faces entered awareness ('broke suppression') faster than their darker counterparts. We speculate that skin tone may operate as a perceptually salient cue in the presence of facial configurations signaling high reproductive potential.
32

Richter, Anne, Michaela Niemöller, Inken Verwohl, Katrin Lange, Anna Foerster-Marniok, Olaf Brauns, Sven Kramer et al. "Rapid and Semi-Automated Screening of Multiple Samples for Antigen-Specific T Lymphocytes". Blood 120, no. 21 (November 16, 2012): 4839. http://dx.doi.org/10.1182/blood.v120.21.4839.4839.

Full text
Abstract
Abstract 4839 Functional characterizations of T lymphocytes are performed to gain understanding of their contribution in certain immunological situations, to monitor the course of diseases, and to track therapeutic interventions. Meanwhile, the flow cytometric analysis of antigen-specific T cells, examined for intracellular cytokine production and expression of activation markers after a short-term in vitro antigenic challenge, is a well-established method for research applications. Despite the advantages of this approach in qualifying samples on a single-cell level and by multiple parameters, the broad use of this analysis for immune monitoring purposes is hampered. Screening of many samples is time-consuming, requires many manual handling steps, and demands operator experience in flow cytometric analysis of stimulated T cell samples. To overcome these hurdles, we worked out a complete strategy to rapidly study cytokine and activation marker profiles in antigen-specific T cells of multiple samples by a semi-automated process. For the simultaneous analysis of multiple samples, we examined an antigen pre-coated 96-strip-well culture system for T cell stimulation. This flexible and ready-to-use format provides the opportunity to screen either for a single antigen or in parallel for up to twelve antigen specificities by combining 8-well strips bearing different antigens. The coated antigens consist of pools of overlapping 15-mer peptides derived from a single viral protein of cytomegalovirus, Epstein-Barr virus, or adenovirus. The peptide pools have been designed for activation of the specific CD4+ as well as CD8+ T cells. They are solubilized, and thereby accessible for T cell stimulation, after addition of a cell sample, e.g. peripheral blood mononuclear cells, suspended in culture medium into the antigen-coated well. After a stimulation period of six hours, the induced T cell response is comparable to activation with a conventional lyophilized and reconstituted peptide pool. To reduce the time and workload for cell harvesting, fixation, permeabilization, and staining, we developed a protocol and reagents to allow a rapid and easy-to-handle intracellular staining procedure. Compared to conventional staining protocols, all steps are executed in the 96-strip-well culture plate, i.e. cell harvesting is dispensable. Without any washing step, cells are fixed and stained with defined reagent cocktails containing antibodies to identify virus-specific CD3+ CD4+ CD154+ and/or CD3+ CD8+ T cells and various anti-cytokine fluorochrome conjugates to evaluate the cytokine pattern. With these modifications, we drastically diminished the overall processing time for the staining of up to 96 samples to only 50 minutes. Furthermore, we integrated an automated flow cytometric analysis process. This includes the possibility to measure the samples in the 96-strip-well plates hands-free, using pre-defined experiment settings and acquisition templates. We also applied an automated gating strategy for the data analysis. Finally, a report summarizes the results of the T cell response against several viral proteins for all samples tested, e.g. frequencies of cytokine+ CD154+ CD4+ and cytokine+ CD8+ T cell subsets are indicated. Hands-on time for the multi-sample acquisition and analysis is minimal, and the standardized reagents/protocol and sample analysis process decrease inter- and intra-assay variation.
In summary, with our newly developed tools and protocols for in vitro T cell stimulation, staining of activation markers as well as intracellular cytokines, and automated flow cytometric analysis, we have set up a fast and convenient procedure to routinely monitor antigen-specific T cell responses. Disclosures: Richter: Miltenyi Biotec GmbH: Employment. Niemöller: Miltenyi Biotec GmbH: Employment. Verwohl: Miltenyi Biotec GmbH: Employment. Lange: Miltenyi Biotec GmbH: Employment. Foerster-Marniok: Miltenyi Biotec GmbH: Employment. Brauns: Miltenyi Biotec GmbH: Employment. Kramer: Miltenyi Biotec GmbH: Employment. Höher-Peters: Miltenyi Biotec GmbH: Employment. Büscher: Miltenyi Biotec GmbH: Employment. Assenmacher: Miltenyi Biotec GmbH: Employment. Schmitz: Miltenyi Biotec GmbH: Employment.
33

Miermans, Karsten CA, Hannes Lüling, Aleksandra Mezydlo, Franz Elsner, Felix Kunzweiler, Felix Spöttel, Artur Toloknieiev et al. "Real-World Demonstration of AI to Clinical Cytometry for Rapid and Reliable Decision Support and Automated Reporting of Blood Cancers Including B-NHL and Acute Leukemias". Blood 142, Supplement 1 (November 28, 2023): 3661. http://dx.doi.org/10.1182/blood-2023-182082.

Full text
Abstract
Introduction: Flow cytometry is an integral part of routine diagnostics for hematological malignancies. Unfortunately, a decreasing number of skilled operators have to cope with increasing case volumes. The current methods are also known to produce inter- and intra-observer variability. Aim: To reduce labor time, dependency on expert knowledge, and interpretation variability, we aimed to build and validate clinical-grade decision support software for the routine, using machine learning (ML) for clinical cytometry. Methods: We obtained ≈100k flow cytometry cases from multiple centers, taken from the clinical routine and annotated by expert users using multiple diagnostic methods. These cases span the full spectrum of hematological malignancy, including 47% non-malignant reference samples. As the data were not measured using harmonized protocols, we developed data pooling techniques to merge the data, including synthetic imputation of missing markers. After data processing, we applied multi-layer artificial neural networks to predict sub-type or non-malignancy (B-NHL and acute leukemias, ten classes at the time of writing) directly from the data without any human preprocessing. Transfer learning was then used to fine-tune the model to a particular protocol for which the model could be used in routine. Besides sub-typing recommendations, users need to report cell population frequencies, traditionally measured using manual 'gating'. We trained separate supervised ML models to classify single cells based on a combination of curated and refined routine gating data, expanded with in-house annotations produced by a highly trained cytometrist. To enable clinicians to receive the diagnostic recommendations and produce reports for the treating doctor, we built a CE-IVD web application ("hema.to") in which selected users could upload cytometry data, inspect the raw data including abnormalities in the immunophenotype, and audit the ML recommendations against the WHO criteria. The user feedback from each case was recorded, from which the consistency of the ML recommendations with expert opinion could be compared. Before the product was integrated into the clinical routine, a 'dry run' retrospective two-arm, four-center blinded clinical study using 96 randomly selected non-overlapping B-NHL cases revealed that analysis time was reduced by more than 2x and accuracy saw a slight (non-significant) increase against a gold standard as compared to the control arm with a traditional workflow. Results: A worst-case sub-typing performance was computed by testing the model against historic data without expert supervision, yielding an F1 score of >90% and a sensitivity of 96%. The ML models for acute leukemias specifically show an F1 score of ≈90%, a false positive rate of ≈1.5%, and a false negative rate of 2-6%. These quality metrics demonstrate that the system can be used for both screening and sub-typing. We have therefore integrated hema.to into the routine for B-NHL diagnosis at Result Laboratorium (Dordrecht) and the HpH (Hamburg) since January, for the latter including a deep integration into the laboratory database with automated reporting of expressed CD markers. To date, over a thousand routine cases have been analyzed using the system. We then measured the relationship between the predicted sub-typing confidence of the model and agreement with the final expert judgment. The data showed that, on average, confidence ≈ accuracy (R² = 0.94), indicating that the model is neither over- nor underconfident.
As such, the model is a "trustworthy" classifier. Expert agreement over time revealed that the quality of the diagnostic recommendations has been stable over time despite a change of device type, and was consistently in excess of 90% across various metrics, including 100% specificity during the four most recent recorded months. Conclusions: We have shown that machine learning can offer decision support and automated reporting for screening and classifying blood cancers from flow cytometry in a real-world routine setting. Our next focus is to expand the number of indications and increase the model's sensitivity to small pathological populations, including measurable residual disease. These results represent major strides towards decision support software for any lab and all hematological neoplasms. Such software will not only speed up and simplify diagnostic workflows, but also improve the quality of the analysis.
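
The confidence ≈ accuracy check reported above is a standard calibration analysis. A generic Python sketch (not hema.to's code) that bins predictions by confidence and compares per-bin accuracy with mean confidence might look like this:

    import numpy as np

    rng = np.random.default_rng(7)
    # Hypothetical per-case model confidence and expert agreement (1 = agrees).
    confidence = rng.uniform(0.5, 1.0, 2000)
    agrees = (rng.uniform(size=2000) < confidence).astype(int)  # well-calibrated toy model

    bins = np.linspace(0.5, 1.0, 11)
    idx = np.digitize(confidence, bins) - 1
    mean_conf = np.array([confidence[idx == b].mean() for b in range(10)])
    accuracy = np.array([agrees[idx == b].mean() for b in range(10)])

    # R² of per-bin accuracy against the identity line accuracy = confidence;
    # a value near 1 means the model is neither over- nor underconfident.
    ss_res = np.sum((accuracy - mean_conf) ** 2)
    ss_tot = np.sum((accuracy - accuracy.mean()) ** 2)
    print("calibration R²:", 1 - ss_res / ss_tot)
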
34

Digby, Richard, Piravin Ramakrishnan, Saad Moughal, Arundhati Chakrabarty, and Ryan Mathew. "QLTI-19. EVALUATION OF INTRA-OPERATIVE BRAIN TUMOUR DIAGNOSTIC SERVICES – A LARGE TERTIARY UK CENTRE EXPERIENCE". Neuro-Oncology 24, Supplement_7 (November 1, 2022): vii238. http://dx.doi.org/10.1093/neuonc/noac209.921.

Full text
Abstract
Abstract INTRODUCTION Brain tumour intraoperative diagnosis (smear cytology, frozen section) is a commonly performed, routine diagnostic service. Currently, samples must be transported from the operating room (OR) to pathology, impacting turnaround time (TAT), carbon emissions (if cross-site), and motivation for repeat sampling. We performed a broad evaluation of current practice in a large, tertiary, UK brain tumour centre, to identify potential gains in real-time tissue diagnosis. METHODS All brain tumour samples (n=228) sent for intraoperative diagnosis in 2021 were analysed retrospectively. TAT was assessed by capturing different timepoints along the pathway. Concordance between diagnoses at the following stages was determined: preoperatively based on radiology, intraoperatively (frozen section or smear), provisional paraffin and final integrated. Additionally, we anonymously surveyed neurosurgeons’ opinions (n=18) on the current service. RESULTS The mean (±SD) specimen transportation time was 10.6±2.0 minutes, with an estimated total TAT of 30-60 minutes. Intraoperative diagnosis provided a slightly higher rate of concordance with provisional paraffin diagnosis than preoperative radiological diagnosis (89.5% vs 86.3%). Non-concordance was most commonly due to non-representative sampling (e.g., predominantly necrotic), with no repeat sample being sent/available intraoperatively. Prevailing neurosurgical opinion of the intraoperative diagnostic service was dissatisfaction or neutrality (50% and 39% of respondents), with a minority being positive (11%). Reasons for this included: intraoperative delay due to TAT (47%), perceived inaccuracy of results (41%), and perceived reduced out-of-hours availability (56%). CONCLUSIONS Current brain tumour intraoperative diagnostic practice relies on physical sample transportation and manual processing; the resultant long TAT causes surgeon dissatisfaction and dissuades repeat analysis in the case of non-representative sampling. Real-time tissue diagnostic technologies such as OR-sited probe-based confocal endomicroscopy, scanners and Raman spectroscopy should be considered to facilitate faster and repeated examination. The latter may have additional benefits in real-time expert pathology feedback, tumour margin-zone analysis and increased extent of resection.
35

Entezari, Pegah, Aya Kino, Mauricio S Galizia, Vahid Yaghmai, and James C Carr. "Abstract 251: Semiautomated Measurement of Ascending Aortic Aneurysm Versus Manual Measurement: Assessment of Variability". Arteriosclerosis, Thrombosis, and Vascular Biology 32, suppl_1 (May 2012). http://dx.doi.org/10.1161/atvb.32.suppl_1.a251.

Full text
Abstract
Purpose: To evaluate the inter-method and intra-observer variability of a novel semi-automated method for measurement of ascending aortic aneurysm (AAA) in ECG-gated dual-source CT angiography (DSCTA). Materials and Methods: Under an IRB-approved protocol, multiplanar reformatted DSCTA images of 40 patients with AAA were measured. The transaxial maximum diameter, inner wall to inner wall, was measured at five anatomic locations of the thoracic aorta: the annulus, the sinus, the sinotubular junction (STJ), the mid ascending aorta at the level of the right pulmonary artery (MAA), and the proximal aortic arch immediately proximal to the innominate artery (PROX) (Figure 1). Measurements were performed by the conventional manual method and by the novel semi-automated segmentation software, and compared. Inter-method and intra-observer variability were evaluated according to the intra-class correlation coefficient (ICC). The time required for post-processing of the aorta with both methods was recorded in all cases. Results: Mean maximum diameters in the semi-automated method were: annulus 31.10 mm, sinus 41.95 mm, STJ 38.05 mm, MAA 41.88 mm, and PROX 37.60 mm. The manual method showed mean maximum diameters of 30.87 mm, 42.05 mm, 36.87 mm, 41.18 mm, and 36.42 mm at the annulus, sinus, STJ, MAA, and PROX, respectively. There was excellent inter-method and intra-observer agreement between the two methods at all five points (ICC > 0.9). The post-processing time was 8 min 2 s ± 1 min 35 s for the semi-automated method, versus 19 min ± 20 s for the manual method. Conclusion: The novel semi-automated tool demonstrated excellent agreement with the conventional manual method in measurement of AAA in ECG-gated DSCTA, with the advantage of a significant reduction in post-processing time.
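
Several entries in this list report intra-class correlation coefficients. As a minimal sketch, an ICC of agreement between two measurement methods can be computed with the pingouin package; the long-format layout is what the package expects, and all numbers are invented:

    import pandas as pd
    import pingouin as pg

    # Long-format table: each subject's aortic diameter measured by two methods.
    df = pd.DataFrame({
        "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
        "method": ["manual", "auto"] * 6,
        "diameter_mm": [30.9, 31.1, 42.0, 41.9, 36.9, 38.1,
                        41.2, 41.9, 36.4, 37.6, 39.8, 40.1],
    })

    icc = pg.intraclass_corr(data=df, targets="subject", raters="method",
                             ratings="diameter_mm")
    print(icc[["Type", "ICC", "CI95%"]])
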
36

Ewertowski, Nastassja Pamela, Christoph Schleich, Daniel Benjamin Abrar, Harish S. Hosalkar, and Bernd Bittersohl. "Automated measurement of alpha angle on 3D-magnetic resonance imaging in femoroacetabular impingement hips: a pilot study". Journal of Orthopaedic Surgery and Research 17, no. 1 (July 30, 2022). http://dx.doi.org/10.1186/s13018-022-03256-5.

Full text
Abstract
Abstract Background: Femoroacetabular impingement (FAI) syndrome is an established pre-osteoarthritic condition. Diagnosis is based on both clinical and radiographic parameters. An abnormal, manually calculated alpha angle in magnetic resonance imaging (MRI) is traditionally utilized to diagnose abnormal femoral head-neck offset. This pilot study aimed to assess the feasibility of automated alpha angle measurements in patients with FAI syndrome, and to compare automated with manual measurements with regard to the time and effort needed for each method. Methods: Alpha angles were measured with manual and automated techniques, using post-processing software, in nineteen hip MRIs of FAI syndrome patients. Two observers conducted the manual measurements. Intra- and inter-observer reproducibility and the correlation of manual and automated alpha angle measurements were calculated using intra-class correlation (ICC) analysis. Both techniques were compared regarding the time taken (in minutes) and the effort required, measured as the number of mouse button presses performed. Results: The first observer's intra-observer reproducibility was good (ICC 0.77; p < 0.001), while the second observer's was good-to-excellent (ICC 0.93; p < 0.001). Inter-observer reproducibility between both observers in the first (ICC 0.45; p < 0.001) and second (ICC 0.56; p < 0.001) manual alpha angle assessment was moderate. The intra-class correlation coefficients between manual and automated alpha angle measurements were ICC = 0.24 (p = 0.052; observer 1, 1st measurement), ICC = 0.32 (p = 0.015; observer 1, 2nd measurement), ICC = 0.50 (p < 0.001; observer 2, 1st measurement), and ICC = 0.45 (p < 0.001; observer 2, 2nd measurement). The average runtime for automatic processing of the image data was 16.6 ± 1.9 min. Automatic alpha angle measurement took longer (time difference: 14.6 ± 3.9 min; p < 0.001) but required less effort (difference in button presses: 231 ± 23; p < 0.001). While the automatic processing is running, the user can perform other tasks. Conclusions: This pilot study demonstrates that objective and reliable automated alpha angle measurement of MRIs in FAI syndrome hips is feasible. Trial registration: The Ethics Committee of the University of Düsseldorf approved our study (Registry-ID: 2017084398).
37

Moftian, Nazila, Taha Samad Soltani, Zahra Salahzadeh, Hojjat Hossein Pourfeizi, Yousef Gheibi, Amir Fazlollahi, and Peyman Rezaei-Hachesu. "Computer-aided Measurement System Using Image Processing for Measuring Cobb Angle in Scoliosis". Middle East Journal of Rehabilitation and Health Studies 9, no. 1 (December 19, 2021). http://dx.doi.org/10.5812/mejrh.111360.

Full text
Abstract
Background: Scoliosis is a deformity of the spine that is generally assessed using the Cobb angle. Objectives: In this study, a computer-aided measurement system (CAMS) was presented as a new repeatable and reproducible approach to assess the Cobb angle in idiopathic scoliosis patients. Methods: Python libraries, including OpenCV and Numpy, were used for image processing and the design of the software. To assess the repeatability and reproducibility of the CAMS, a series of 98 anterior-posterior radiographs from patients with idiopathic scoliosis were used. Assessments were done by five independent observers. Each radiograph was assessed by each observer three times, with a minimum break of two weeks between assessments. The single-measure intraclass correlation coefficient (ICC), the mean absolute difference (MAD), and the standard error of measurement (SEM) were used for intra- and inter-observer reliability. Results: The inter-observer analysis indicated that the ICCs ranged from 0.94 to 0.99, and the MAD between manual and CAMS measurements was less than 3°. For intra-observer measurements, the combined SEM between all observers was 1.79° for the manual method and 1.27° for CAMS. An ICC value of 0.97 with 95% confidence interval (CI) was excellent for CAMS inter-observer reliability. The MAD of CAMS was 2.18 ± 2.01 degrees. Conclusions: The CAMS is an effective and reliable approach for assessing scoliotic curvature in standing thoraco-lumbar radiographs. Moreover, CAMS can accelerate clinical visits, and its calculation results are reliable.
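
The Cobb angle itself is the angle between the most tilted upper and lower endplate lines of the curve. A small self-contained sketch (the endpoint coordinates are hypothetical, e.g. as clicked by an observer or produced by a segmentation step):

    import numpy as np

    def cobb_angle(line_a, line_b):
        """Angle in degrees between two endplate lines, each given as
        ((x1, y1), (x2, y2)) in image coordinates."""
        (ax1, ay1), (ax2, ay2) = line_a
        (bx1, by1), (bx2, by2) = line_b
        theta_a = np.arctan2(ay2 - ay1, ax2 - ax1)
        theta_b = np.arctan2(by2 - by1, bx2 - bx1)
        angle = abs(np.degrees(theta_a - theta_b)) % 180.0
        return min(angle, 180.0 - angle)

    # Hypothetical endplate lines from an AP radiograph (pixel coordinates).
    upper = ((102, 240), (198, 221))
    lower = ((108, 545), (201, 588))
    print(f"Cobb angle: {cobb_angle(upper, lower):.1f}°")
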
38

Powers, Kristi, Raymond Chang, Justin Torello, Rhonda Silva, Yannick Cadoret, William Cupelo, Lori Morton, and Michael Dunn. "Development of a semi-automated segmentation tool for high frequency ultrasound image analysis of mouse echocardiograms". Scientific Reports 11, no. 1 (March 22, 2021). http://dx.doi.org/10.1038/s41598-021-85971-3.

Full text
Abstract
Abstract Echocardiography is a widely used and clinically translatable imaging modality for the evaluation of cardiac structure and function in preclinical drug discovery and development. Echocardiograms are among the first in vivo diagnostic tools utilized to evaluate the heart due to their relatively low cost, high-throughput acquisition, and non-invasive nature; however, lengthy manual image analysis, intra- and inter-operator variability, and subjective image analysis present a challenge for reproducible data generation in preclinical research. To combat the image-processing bottleneck and address both variability and reproducibility challenges, we developed a semi-automated analysis algorithm workflow to analyze long- and short-axis murine left ventricle (LV) ultrasound images. The long-axis B-mode algorithm executes a script protocol that is trained using a reference library of 322 manually segmented LV ultrasound images. The short-axis script was engineered to analyze M-mode ultrasound images in a semi-automated fashion using a pixel-intensity evaluation approach, allowing analysts to place two seed points to triangulate the local maxima of LV wall boundary annotations. Blinded operator evaluation of the semi-automated analysis tool was performed and compared to the current manual segmentation methodology, testing inter- and intra-operator reproducibility at baseline and after a pharmacologic challenge. Comparisons between manual and semi-automatic derivation of LV ejection fraction resulted in a relative difference of 1% for long-axis (B-mode) images and 2.7% for short-axis (M-mode) images. Our semi-automatic workflow approach reduces image analysis time and subjective bias, as well as decreasing inter- and intra-operator variability, thereby enhancing throughput and improving data quality for preclinical in vivo studies that incorporate cardiac structure and function endpoints.
39

Fisher, Oliver J., Ahmed Rady, Aly AA El-Banna, Nicholas J. Watson, and Haitham H. Emaish. "An image processing and machine learning solution to automate Egyptian cotton lint grading". Textile Research Journal, December 20, 2022, 004051752211455. http://dx.doi.org/10.1177/00405175221145571.

Full text
Abstract
Egyptian cotton is one of the most important commodities for the Egyptian economy and is renowned globally for its quality, which is largely assessed and graded by manual inspection. This grading has several drawbacks, including significant labor requirements, low inspection efficiency, and influence from inspection conditions such as light and human subjectivity. This work proposes a low-cost solution to replace manual inspection with classification models to grade Egyptian cotton lint using images captured by a charge-coupled device camera. While this method has been evaluated for classifying US and Chinese upland cotton staples, it has not been tested on Egyptian cotton, which has unique characteristics and grading requirements. Furthermore, the methodology to develop these classification models has been expanded to include image processing techniques that remove the influence of trash on color measurements and extract features that capture the intra-sample variance of the cotton samples. Three different supervised machine learning algorithms were evaluated: artificial neural networks; random forest; and support vector machines. The highest accuracy models (82.13–90.21%) used a random forest algorithm. The models’ accuracy was limited by the human error associated with labeling the cotton samples used to develop the classification models. Unsupervised machine learning methods, including k-means clustering, hierarchical clustering, and Gaussian mixture models, were used to indicate where labeling errors occurred.
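
As a rough illustration of the modelling step (the feature names, values, and grade labels below are invented, not the paper's dataset), a random forest can be trained on per-sample colour statistics, and k-means clustering over the same features can surface samples whose labels disagree with their cluster, i.e. candidates for labelling-error review:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    # Hypothetical features per lint image: mean R/G/B after trash removal,
    # plus an intra-sample colour variance term.
    X = rng.normal(size=(300, 4))
    y = rng.integers(0, 3, size=300)        # three invented grade labels

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print("train accuracy:", clf.score(X, y))

    # Unsupervised pass: samples whose label disagrees with their cluster's
    # majority grade are flagged for manual review of possible mislabelling.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    for c in range(3):
        labels_in_c = y[clusters == c]
        majority = np.bincount(labels_in_c).argmax()
        n_odd = (labels_in_c != majority).sum()
        print(f"cluster {c}: {n_odd} samples disagree with majority grade {majority}")
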
40

Gontard, Lionel C., Joaquín Pizarro, Borja Sanz-Peña, Simón P. Lubián López, and Isabel Benavente-Fernández. "Automatic segmentation of ventricular volume by 3D ultrasonography in post haemorrhagic ventricular dilatation among preterm infants". Scientific Reports 11, no. 1 (January 12, 2021). http://dx.doi.org/10.1038/s41598-020-80783-3.

Full text
Abstract
Abstract To train, evaluate, and validate the application of a deep learning framework in three-dimensional ultrasound (3D US) for the automatic segmentation of ventricular volume in preterm infants with post-haemorrhagic ventricular dilatation (PHVD), we trained a 2D convolutional neural network (CNN) for automatic segmentation of ventricular volume from 3D US of preterm infants with PHVD. The method was validated with the Dice similarity coefficient (DSC) and the intra-class correlation coefficient (ICC) compared to manual segmentation. The mean birth weight of the included patients was 1233.1 g (SD 309.4) and the mean gestational age was 28.1 weeks (SD 1.6). A total of 152 serial 3D US from 10 preterm infants with PHVD were analysed, and 230 ventricles were manually segmented. Of these, 108 were used for training the 2D CNN and 122 for validating the methodology for automatic segmentation. The global agreement for manual versus automated measures in the validation data (n = 122) was excellent, with an ICC of 0.944 (0.874–0.971). The Dice similarity coefficient was 0.8 (± 0.01). 3D US based ventricular volume estimation through automatic segmentation software developed through deep learning improves the accuracy and reduces the processing time needed for manual segmentation using VOCAL. 3D US should be considered a promising tool to help deepen our current understanding of the complex evolution of PHVD.
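
The Dice similarity coefficient used for validation here is straightforward to state in code. A generic sketch for binary masks (the two arrays stand in for an automated and a manual ventricle segmentation):

    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two boolean masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    auto = np.zeros((64, 64), dtype=bool)
    manual = np.zeros((64, 64), dtype=bool)
    auto[20:40, 20:40] = True          # stand-in automated ventricle mask
    manual[22:42, 21:41] = True        # stand-in manual mask
    print(f"DSC = {dice(auto, manual):.3f}")
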
41

Olaisen, Sindre, Erik Smistad, Torvald Espeland, Jieyu Hu, David Pasdeloup, Andreas Østvik, Svend Aakhus et al. "Automatic measurements of left ventricular volumes and ejection fraction by artificial intelligence: Clinical validation in real-time and large databases". European Heart Journal - Cardiovascular Imaging, October 26, 2023. http://dx.doi.org/10.1093/ehjci/jead280.

Full text
Abstract
Abstract Background and aims: Echocardiography is a cornerstone of cardiac imaging, and left ventricular (LV) ejection fraction (EF) is a key parameter for patient management. Recent advances in artificial intelligence (AI) have enabled fully automatic measurements of LV volumes and EF both during scanning and in stored recordings. The aim of this study was to evaluate the impact of implementing AI measurements on acquisition and processing time and test-retest reproducibility compared to the standard clinical workflow, as well as to study the agreement with reference measurements in large internal and external databases. Methods: Fully automatic measurements of LV volumes and EF by a novel AI software were compared to manual measurements in the following clinical scenarios: 1) in real-time use during scanning of 50 consecutive patients, 2) in 40 subjects with repeated echocardiographic examinations and manual measurements by four readers, and 3) in large internal and external research databases of 1881 and 849 subjects, respectively. Results: Real-time AI measurements significantly reduced the total acquisition and processing time by 77% (median 5.3 minutes, p < 0.001) compared to the standard clinical workflow. Test-retest reproducibility of AI measurements was superior in inter-observer scenarios and non-inferior in intra-observer scenarios. AI measurements showed good agreement with reference measurements both in real-time use and in large research databases. Conclusions: The software reduced the time taken to perform and volumetrically analyse routine echocardiograms without a decrease in accuracy compared to experts.
42

Dewi, Dyah Ekashanti Octorina, Mohammed R. S. Sunoqrot, Gabriel Addio Nketiah, Elise Sandsmark, Guro F. Giskeødegård, Sverre Langørgen, Helena Bertilsson, Mattijs Elschot, and Tone Frost Bathen. "The impact of pre-processing and disease characteristics on reproducibility of T2-weighted MRI radiomics features". Magnetic Resonance Materials in Physics, Biology and Medicine, August 9, 2023. http://dx.doi.org/10.1007/s10334-023-01112-z.

Full text
Abstract
Abstract Purpose: To evaluate the reproducibility of radiomics features derived via different pre-processing settings from paired T2-weighted imaging (T2WI) prostate lesions acquired within a short interval, to select the setting that yields the highest number of reproducible features, and to evaluate the impact of disease characteristics (i.e., clinical variables) on feature reproducibility. Materials and methods: A dataset of 50 patients imaged using T2WI at 2 consecutive examinations was used. The dataset was pre-processed using 48 different settings. A total of 107 radiomics features were extracted from manual delineations of 74 lesions. The inter-scan reproducibility of each feature was measured using the intra-class correlation coefficient (ICC), with ICC values > 0.75 considered good. Statistical differences were assessed using Mann–Whitney U and Kruskal–Wallis tests. Results: The pre-processing parameters strongly influenced the reproducibility of radiomics features of T2WI prostate lesions. The setting that yielded the highest number of features (25 features) with high reproducibility was the relative discretization with a fixed bin number of 64, no signal intensity normalization, and outlier filtering by excluding outliers. Disease characteristics did not significantly impact the reproducibility of radiomics features. Conclusion: The reproducibility of T2WI radiomics features was significantly influenced by pre-processing parameters, but not by disease characteristics. The selected pre-processing setting yielded 25 reproducible features.
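
Fixed-bin-number (relative) discretization, the pre-processing choice highlighted here, rescales each ROI's own intensity range into N equal bins before texture features are computed. A minimal sketch of that step alone:

    import numpy as np

    def discretize_fixed_bin_number(roi_values, n_bins=64):
        """Map ROI intensities to integer bin labels 1..n_bins, with the bins
        spread over the ROI's own intensity range (relative discretization)."""
        lo, hi = roi_values.min(), roi_values.max()
        edges = np.linspace(lo, hi, n_bins + 1)
        return np.digitize(roi_values, edges[1:-1]) + 1   # 1-based bin labels

    roi = np.random.default_rng(3).normal(350.0, 60.0, size=500)  # stand-in lesion voxels
    labels = discretize_fixed_bin_number(roi, n_bins=64)
    print(labels.min(), labels.max())   # 1 .. 64
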
43

Canu, Daniela, Chara Ioannou, Katarina Müller, Berthold Martin, Christian Fleischhaker, Monica Biscaldi, André Beauducel, Nikolaos Smyrnis, Ludger Tebartz van Elst, and Christoph Klein. "Evidence towards a continuum of impairment across neurodevelopmental disorders from basic ocular-motor tasks". Scientific Reports 12, no. 1 (October 3, 2022). http://dx.doi.org/10.1038/s41598-022-19661-z.

Full text
Abstract
Abstract Findings of genetic overlap between Schizophrenia, Attention-Deficit/Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD) have contributed to a renewed conceptualization of these disorders as lying on a continuum based on aetiological, pathophysiological and neurodevelopmental features. Given that cognitive impairments are core to their pathophysiology, we compared patients with Schizophrenia, ADHD, ASD, and controls on ocular-motor and manual-motor tasks challenging crucial cognitive processes. Group comparisons revealed inhibition deficits common to all disorders, increased intra-subject variability in Schizophrenia and, to a lesser extent, ADHD, as well as slowed processing in Schizophrenia. Patterns of deviancies from controls exhibited strong correlations, along with differences that posited Schizophrenia as the most impaired group, followed by ASD and ADHD. While the vector correlations point towards a common neurodevelopmental continuum of impairment, the vector levels suggest differences in the severity of such impairment. These findings argue towards a dimensional approach to the pathophysiological mechanisms of Neurodevelopmental Disorders.
44

Praveen, G. B., Anita Agrawal, Shrey Pareek, and Amalin Prince. "Brain abnormality detection using template matching". Bio-Algorithms and Med-Systems 14, no. 4 (December 8, 2018). http://dx.doi.org/10.1515/bams-2018-0029.

Full text
Abstract
Abstract Magnetic resonance imaging (MRI) is a widely used imaging modality to evaluate brain disorders. MRI generates huge volumes of data, which consist of a sequence of scans taken at different instances of time. As the presence of brain disorders has to be evaluated on all magnetic resonance (MR) sequences, manual brain disorder detection becomes a tedious process and is prone to inter- and intra-rater errors. A technique for detecting abnormalities in brain MRI using template matching is proposed. Bias field correction is performed on the volumetric scans using the N4ITK filter, followed by volumetric registration. Normalized cross-correlation template matching is used for image registration, taking into account rotation and scaling operations. A template of the abnormality is selected and then matched in the volumetric scans; if found, the corresponding image is retrieved. Post-processing of the retrieved images is performed by a thresholding operation, and the coordinates and area of the abnormality are reported. The experiments are carried out on the glioma dataset obtained from the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013). The glioma dataset consisted of MR scans of 30 real glioma patients and 50 simulated glioma patients. The NVIDIA Compute Unified Device Architecture framework is employed in this paper, and it is found that the detection speed using the graphics processing unit is almost four times faster than using only the central processing unit. The average Dice and Jaccard coefficients for a wide range of trials are found to be 0.91 and 0.83, respectively.
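
Normalized cross-correlation template matching of the kind described is available directly in OpenCV. A single-slice sketch (the file names are placeholders and the detection threshold is an assumption):

    import cv2
    import numpy as np

    slice_img = cv2.imread("brain_slice.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("abnormality_template.png", cv2.IMREAD_GRAYSCALE)

    # Normalized cross-correlation response map over all template positions.
    response = cv2.matchTemplate(slice_img, template, cv2.TM_CCORR_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)

    THRESHOLD = 0.85   # assumed match threshold
    if max_val > THRESHOLD:
        x, y = max_loc
        h, w = template.shape
        # Threshold the matched patch to estimate the abnormality's extent.
        patch = slice_img[y:y + h, x:x + w]
        _, mask = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        print(f"match at ({x}, {y}), score {max_val:.2f}, "
              f"area ~ {int(np.count_nonzero(mask))} px")
    else:
        print("no match above threshold")
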
45

Backhaus, S. J., W. Staab, M. Steinmetz, C. O. Ritter, J. Lotz, G. Hasenfuss, J. T. Kowallick, and A. Schuster. "P5284 Fully automated quantification of biventricular volumes and function in cardiovascular magnetic resonance: applicability to clinical routine settings". European Heart Journal 40, Supplement_1 (October 1, 2019). http://dx.doi.org/10.1093/eurheartj/ehz746.0255.

Full text
Abstract
Abstract Background: Cardiovascular magnetic resonance (CMR) represents the clinical gold standard for the assessment of biventricular morphology and function. Since manual post-processing is time-consuming and prone to observer variability, efforts have been directed towards automated volumetric quantification. In this study, we sought to validate the accuracy of a novel approach providing fully automated quantification of biventricular volumes and function in a "real-world" clinical setting. Methods: Three hundred CMR examinations were randomly selected from the local database. Fully automatic quantification of left ventricular (LV) mass, LV and right ventricular (RV) end-diastolic and end-systolic volumes (EDV/ESV), stroke volume (SV) and ejection fraction (EF) was performed overnight using commercially available software. The parameters were compared to manual assessments. Sub-group analyses were further performed according to image quality, scanner field strength, the presence of implanted aortic valves, and repaired Tetralogy of Fallot (ToF). Results: Biventricular automatic segmentation was feasible in all 300 cases. Overall agreement between fully automated and manually derived LV parameters was good (LV-EF: intra-class correlation coefficient [ICC] 0.95; bias −2.5% [SD 5.9%]), whilst RV agreement was lower (RV-EF: ICC 0.72; bias 5.8% [SD 9.6%]). The lowest agreement was observed in cases of severely altered anatomy, e.g. marked RV dilation with normal LV dimensions in repaired ToF (LV parameters ICC 0.73–0.91; RV parameters ICC 0.41–0.94), and/or reduced image quality (LV parameters ICC 0.86–0.95; RV parameters ICC 0.56–0.91), which was more common at 3.0T than at 1.5T. Conclusions: Fully automated assessment of biventricular morphology and function is robust and accurate in a clinical routine setting with good image quality and can be performed without any user interaction. However, in cases of demanding anatomy (e.g. repaired ToF, severe LV hypertrophy) or reduced image quality, a quality check and manual re-contouring are still required. Acknowledgement/Funding: DZHK - German Centre for Cardiovascular Research
46

Yao, Heming, Jonathan Gryak, Harm Derksen, Anil K. Attili, Brahmajee K. Nallamothu, and Kayvan Najarian. "Abstract 16899: Fully-Automated Left Ventricle Segmentation Using a Dilated and Adversarial Deep Learning Architecture". Circulation 138, Suppl_1 (November 6, 2018). http://dx.doi.org/10.1161/circ.138.suppl_1.16899.

Full text
Abstract
Introduction: Cardiovascular magnetic resonance (CMR) imaging is currently the gold standard for analyzing cardiac morphology and evaluating global and regional left ventricle (LV) function. Quantitative parameters such as ejection fraction (EF) and LV mass are important indicators for diagnosis and are associated with morbidity and mortality. In practice, the manual or semi-automatic delineation of LV contours for function evaluation is time-consuming and prone to intra- and inter-observer variability. The objective of this study is to develop an accurate and fully automatic LV segmentation method for short-axis CMR slices using deep learning techniques. Methods: In this study, we trained a model using the LV2011 training dataset, which contains 100 cases of patients with myocardial infarction from multiple MR scanner systems using balanced steady-state free precession. We developed a novel fully convolutional network (FCN) with dilated convolutions for pixel-wise prediction. An adversarial network is added to enforce high-order consistency. Furthermore, a guided active contour model is deployed as a post-processing method that considers the intensity homogeneities and boundary smoothness of segmented components. The similarity between the automatic segmentation from our model and the manual segmentation is evaluated using the Jaccard index and the Hausdorff distance (HD). Results: On the LV2011 test dataset, our model achieved a Jaccard index of 0.77 and an HD of 4.12 mm, outperforming existing fully automatic methods. Examples of segmentation results are shown in Figure 1. The clinical parameter estimations also have high reliability: the correlations between the end-diastolic volume, end-systolic volume, EF, and LV mass estimated from automatic segmentation and those from manual segmentation on the test dataset are 0.99, 0.99, 0.93, and 0.96, respectively. Conclusions: Our results demonstrate that the proposed method obtains accurate segmentation and is robust over cases of high variability.
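
Dilated convolutions, the architectural ingredient named here, enlarge the receptive field without pooling. A minimal PyTorch sketch of such a block (not the authors' network; the layer sizes are arbitrary):

    import torch
    import torch.nn as nn

    class DilatedBlock(nn.Module):
        """Stack of 3x3 convolutions with growing dilation; matching padding
        keeps the spatial size constant while the receptive field grows."""
        def __init__(self, channels=32):
            super().__init__()
            layers = []
            for d in (1, 2, 4):
                layers += [nn.Conv2d(channels, channels, kernel_size=3,
                                     padding=d, dilation=d),
                           nn.BatchNorm2d(channels),
                           nn.ReLU(inplace=True)]
            self.body = nn.Sequential(*layers)

        def forward(self, x):
            return self.body(x)

    x = torch.randn(1, 32, 128, 128)     # batch of one short-axis feature map
    print(DilatedBlock()(x).shape)       # torch.Size([1, 32, 128, 128])
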
47

Weaver, Jayse Merle, Marissa DiPiero, Patrik Goncalves Rodrigues, Hassan Cordash, Richard J. Davidson, Elizabeth M. Planalp, and Douglas C. Dean III. "Automated Motion Artifact Detection in Early Pediatric Diffusion MRI Using a Convolutional Neural Network". Imaging Neuroscience, 2023. http://dx.doi.org/10.1162/imag_a_00023.

Full text
Abstract
Abstract Diffusion MRI (dMRI) is a widely used method to investigate the microstructure of the brain. Quality control (QC) of dMRI data is an important processing step that is performed prior to analysis using models such as diffusion tensor imaging (DTI) or neurite orientation dispersion and density imaging (NODDI). When processing dMRI data from infants and young children, where intra-scan motion is common, the identification and removal of motion artifacts is of the utmost importance. Manual QC of dMRI data is time-consuming due to the large number of diffusion directions, expensive, and prone to subjective errors and observer variability. Prior techniques for automated dMRI QC have mostly been limited to adults or school-age children. Here, we propose a deep learning-based motion artifact detection tool for dMRI data acquired from infants and toddlers. The proposed framework uses a simple three-dimensional convolutional neural network (3DCNN) trained and tested on an early pediatric dataset of 2,276 dMRI volumes from 121 exams acquired at 1 month and 24 months of age. An average classification accuracy of 95% was achieved following 4-fold cross-validation. A second dataset with different acquisition parameters and ages ranging from 2-36 months (consisting of 2,349 dMRI volumes from 26 exams) was used to test network generalizability, achieving a 98% classification accuracy. Finally, to demonstrate the importance of motion artifact volume removal in a dMRI processing pipeline, the dMRI data was fit to the DTI and NODDI models and the parameter maps were compared with and without motion artifact removal.
APA, Harvard, Vancouver, ISO, and other styles
48

Conradt, Jan, Gregor Börner, Ángel López-Urrutia, Christian Möllmann, and Marta Moyano. "Automated Plankton Classification With a Dynamic Optimization and Adaptation Cycle". Frontiers in Marine Science 9 (April 12, 2022). http://dx.doi.org/10.3389/fmars.2022.868420.

Full text
Abstract
With recent advances in machine learning techniques based on deep neural networks (DNNs), automated plankton image classification is becoming increasingly popular within the marine ecological sciences. Yet, while the most advanced methods can achieve human-level performance on the classification of everyday images, plankton image data possess properties that frequently require a final manual validation step. On the one hand, this is due to morphological properties that manifest in high intra-class and low inter-class variability; on the other hand, it is due to spatio-temporal changes in the composition and structure of the plankton community. Changes in composition force frequent updating of the classifier model via training with new user-generated training datasets. Here, we present a Dynamic Optimization Cycle (DOC), a processing pipeline that systematizes and streamlines the model adaptation process via automatic updating of the training dataset based on manual-validation results. We find that frequent adaptation using the DOC pipeline maintains performance well with respect to precision, recall, and prediction of community composition, compared with more limited adaptation schemes. The DOC is therefore particularly useful when analyzing plankton at novel locations or time periods, where community differences are likely to occur. To enable easy implementation of the DOC pipeline, we provide an end-to-end application with a graphical user interface, as well as an initial dataset of training images. The DOC pipeline thus allows for high-throughput plankton classification and quick, systematized model adaptation, providing the means for highly accelerated plankton analysis.
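The abstract outlines the DOC as a loop of prediction, manual validation, and retraining. A schematic Python sketch of that cycle (all function names are hypothetical placeholders, not the published application's API):

def dynamic_optimization_cycle(model, new_images, train_set,
                               classify_fn, validate_fn, train_fn, n_cycles=5):
    # Alternate automatic classification, manual spot-validation of the
    # predictions, and retraining on the corrected, growing training set.
    for _ in range(n_cycles):
        predictions = classify_fn(model, new_images)  # automatic labels
        corrected = validate_fn(predictions)          # manual validation step
        train_set.extend(corrected)                   # update training dataset
        model = train_fn(model, train_set)            # adapt the classifier
    return model

The point of the pipeline is that this adaptation runs continuously, so the classifier tracks shifts in plankton community composition instead of degrading between infrequent retrainings.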
APA, Harvard, Vancouver, ISO, and other styles
49

Chang, Amanda, Xiaodong Wu, and Kan Liu. "Deep learning from latent spatiotemporal information of the heart: Identifying advanced bioimaging markers from echocardiograms". Biophysics Reviews 5, no. 1 (March 1, 2024). http://dx.doi.org/10.1063/5.0176850.

Full text
Abstract
A key strength of echocardiography lies in its integration of comprehensive spatiotemporal cardiac imaging data in real time, to aid frontline or bedside patient risk stratification and management. Nonetheless, its acquisition, processing, and interpretation are all subject to heterogeneity arising from reliance on manual, subjective human tracings, which challenges workflow and protocol standardization and final interpretation accuracy. In the era of advanced computational power, the use of machine learning algorithms for big-data analytics in echocardiography promises reductions in cost, cognitive errors, and intra- and inter-observer variability. Novel spatiotemporal deep learning (DL) models integrate temporal information from unlabeled pixel-level echocardiographic data through adaptive semantic spatiotemporal calibration to construct personalized 4D heart meshes, assess global and regional cardiac function, detect early valve pathology, and differentiate uncommon cardiovascular disorders. Meanwhile, data visualization of spatiotemporal DL prediction models helps extract latent temporal imaging features, to develop advanced imaging biomarkers for early disease stages and to advance our understanding of pathophysiology in support of personalized prevention or treatment strategies. Since portable echocardiograms are increasingly used as point-of-care imaging tools to aid rural care delivery, these new spatiotemporal DL techniques show potential for streamlining echocardiographic acquisition, processing, and data analysis, improving workflow standardization and efficiency, and providing real-time risk stratification and decision-support tools, prompting the building of new imaging diagnostic networks to enhance rural healthcare engagement.
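The review does not prescribe a single architecture, but one common way such models fuse the spatial and temporal arms of an echo cine loop is a factorized (2+1)D convolution: a per-frame spatial convolution followed by a temporal convolution across frames. A hedged PyTorch sketch (shapes and channel widths are illustrative assumptions):

import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    # Factorized (2+1)D block: spatial conv per frame, then temporal conv.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, (1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(out_ch, out_ch, (3, 1, 1), padding=(1, 0, 0))
        self.act = nn.ReLU()

    def forward(self, clip):  # clip: (batch, channels, frames, height, width)
        return self.act(self.temporal(self.act(self.spatial(clip))))

# A 16-frame echo clip at 112x112 -> spatiotemporal feature maps.
feats = SpatioTemporalBlock(1, 16)(torch.randn(1, 1, 16, 112, 112))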
APA, Harvard, Vancouver, ISO, and other styles
50

Daum, Thomas, Filippo Capezzone, and Regina Birner. "Using smartphone app collected data to explore the link between mechanization and intra-household allocation of time in Zambia". Agriculture and Human Values, October 13, 2020. http://dx.doi.org/10.1007/s10460-020-10160-3.

Full text
Abstract
Digital tools may help to study socioeconomic aspects of agricultural development that are difficult to measure, such as the effects of new policies and technologies on the intra-household allocation of time. Because farm technologies target different crops and tasks, they can affect the time use of men, women, boys, and girls differently. Development strategies that overlook such effects can have negative consequences for vulnerable household members. In this paper, the time-use patterns associated with different levels of agricultural mechanization during land preparation in smallholder farming households in Zambia were investigated. A novel data collection method was used: a pictorial smartphone application that allows real-time recording of time use, eliminating recall bias. Existing studies analyzing the intra-household allocation of resources often focus on adult males and females; this study paid particular attention to boys and girls as well as adults, and addressed seasonal variation. Compositional data analysis was used to account for the co-dependence and fixed-sum constraint of time-use data. The study suggests a strong gender differentiation in land preparation activities among mechanized households; for households using manual labor, no such differentiation was found. There is some evidence that the surplus time associated with mechanization is used for off-farm and domestic work. The study cannot confirm concerns about negative second-round effects: mechanized land preparation is not associated with a higher workload for women and children during weeding and harvesting/processing. The study provides a proof of concept that smartphone applications can be used to collect socioeconomic data that are difficult to measure but of high relevance.
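The methodological core named in the abstract, compositional data analysis, handles the fact that a day's time-use shares are co-dependent and sum to a fixed total (24 hours). A standard first step is the centered log-ratio (CLR) transform, sketched here with illustrative numbers (NumPy assumed; this is not the authors' code):

import numpy as np

def clr(time_shares):
    # Centered log-ratio transform: maps a positive composition that sums
    # to a constant into unconstrained coordinates, so that standard
    # statistical methods can be applied without the sum constraint.
    x = np.asarray(time_shares, dtype=float)
    geometric_mean = np.exp(np.log(x).mean(axis=-1, keepdims=True))
    return np.log(x / geometric_mean)

# Hypothetical hours on farm work, domestic work, off-farm work, leisure.
print(clr([6.0, 4.0, 2.0, 12.0]))  # CLR coordinates of one day; they sum to zero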
APA, Harvard, Vancouver, ISO, and other styles