Academic literature on the topic 'Deep Learning, Morphometry'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep Learning, Morphometry.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Deep Learning, Morphometry"

1. Falk, Thorsten, Dominic Mai, Robert Bensch, Özgün Çiçek, Ahmed Abdulkadir, Yassine Marrakchi, Anton Böhm, et al. "U-Net: deep learning for cell counting, detection, and morphometry." Nature Methods 16, no. 1 (December 17, 2018): 67–70. http://dx.doi.org/10.1038/s41592-018-0261-2.

2. Aruna Sri, Talluri, and Sangeeta Gupta. "Gender Prediction Based on Morphometry of Eyes Using Deep Learning Models." ECS Transactions 107, no. 1 (April 24, 2022): 6665–75. http://dx.doi.org/10.1149/10701.6665ecst.

Abstract:
In modern times, the growth of online social networking websites and social media has led to increasing adoption of computer-aided image recognition systems that automatically recognize and classify human subjects. One familiar example is the anthropometric analysis of the human face, which informs craniofacial plastic and reconstructive surgeries. When analyzing facial anthropometrics, it is also essential to consider factors such as age, gender, ethnicity, socioeconomic status, environment, and region. Plastic surgeons use the anatomical dimensions of facial structures, derived from an individual's physical appearance, to plan the repair and reconstruction of facial deformities. Gender classification plays an important role here, identifying a person as either male or female from biometric images. The main goal is for the system to detect gender differences effectively and accurately; hence, features must be selected optimally to achieve better accuracy. Data mining and machine learning techniques can be used to infer properties such as gender or age when analyzing human activity. Towards this end, the proposed work focuses on gender recognition, building a model that scans a patient's eye image and determines whether the patient is male or female by applying deep learning methods. The work shows that a deep learning network yields better performance for gender classification based on the morphometry of eyes.
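As a toy illustration of the kind of classifier this abstract describes, the sketch below trains a small neural network on synthetic eye-morphometry features. The measurement names, class means, and model choice are illustrative assumptions, not the paper's actual data or architecture:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical per-eye measurements (e.g. fissure length, fissure height,
# intercanthal distance, in mm) with a modest difference in group means.
n = 400
X_male = rng.normal(loc=[30.0, 10.0, 33.0], scale=1.5, size=(n // 2, 3))
X_female = rng.normal(loc=[28.5, 11.3, 31.5], scale=1.5, size=(n // 2, 3))
X = np.vstack([X_male, X_female])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = male, 1 = female

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)  # well above the 0.5 chance level
```

The real study works on eye images rather than tabular measurements, so its network is convolutional; the pipeline shape (features in, binary gender label out) is the same.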
3. Falk, Thorsten, Dominic Mai, Robert Bensch, Özgün Çiçek, Ahmed Abdulkadir, Yassine Marrakchi, Anton Böhm, et al. "Author Correction: U-Net: deep learning for cell counting, detection, and morphometry." Nature Methods 16, no. 4 (February 25, 2019): 351. http://dx.doi.org/10.1038/s41592-019-0356-4.

4. Tiwari, Saumya, Kianoush Falahkheirkhah, Georgina Cheng, and Rohit Bhargava. "Colon Cancer Grading Using Infrared Spectroscopic Imaging-Based Deep Learning." Applied Spectroscopy 76, no. 4 (March 25, 2022): 475–84. http://dx.doi.org/10.1177/00037028221076170.

Abstract:
Tumor grade assessment is critical to the treatment of cancers. A pathologist typically evaluates grade by examining morphologic organization in tissue using hematoxylin and eosin (H&E) stained tissue sections. Fourier transform infrared spectroscopic (FT-IR) imaging provides an alternate view of tissue in which spatially specific molecular information from unstained tissue can be utilized. Here, we examine the potential of IR imaging for grading colon cancer in biopsy samples. We used a 148-patient cohort to develop a deep learning classifier to estimate the tumor grade using IR absorption. We demonstrate that FT-IR imaging can be a viable tool to determine colorectal cancer grades, which we validated on an independent cohort of surgical resections. This work demonstrates that harnessing molecular information from FT-IR imaging and coupling it with morphometry is a potential path to develop clinically relevant grade prediction models.
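The core idea — learning grade-relevant chemistry from absorption spectra — can be sketched on synthetic data. The peak positions, band widths, and two-class setup below are invented for illustration and bear no relation to the real FT-IR measurements in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
wavenumbers = np.linspace(900, 1800, 200)

def spectrum(peak_center):
    """One noisy synthetic absorption spectrum with a single Gaussian band."""
    band = np.exp(-((wavenumbers - peak_center) / 30.0) ** 2)
    return band + rng.normal(0, 0.02, size=wavenumbers.size)

# Two hypothetical "grades" that differ only in a small band shift.
X = np.array([spectrum(1650.0) for _ in range(60)] +
             [spectrum(1665.0) for _ in range(60)])
y = np.array([0] * 60 + [1] * 60)

acc = cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=5).mean()
```

A linear model stands in here for the paper's deep classifier; the spectra-to-grade supervised framing is what the example carries over.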
5. Xu, Jing-Jing, Qi-Jie Wei, Kang Li, Zhen-Ping Li, Tian Yu, Jian-Chun Zhao, Da-Yong Ding, Xi-Rong Li, Guang-Zhi Wang, and Hong Dai. "Three-dimensional diabetic macular edema thickness maps based on fluid segmentation and fovea detection using deep learning." International Journal of Ophthalmology 15, no. 3 (March 18, 2022): 495–501. http://dx.doi.org/10.18240/ijo.2022.03.19.

Abstract:
AIM: To explore a more accurate quantifying diagnosis method of diabetic macular edema (DME) by displaying detailed 3D morphometry beyond the gold-standard quantification indicator-central retinal thickness (CRT) and apply it in follow-up of DME patients. METHODS: Optical coherence tomography (OCT) scans of 229 eyes from 160 patients were collected. We manually annotated cystoid macular edema (CME), subretinal fluid (SRF) and fovea as ground truths. Deep convolution neural networks (DCNNs) were constructed including U-Net, sASPP, HRNetV2-W48, and HRNetV2-W48+Object-Contextual Representation (OCR) for fluid (CME+SRF) segmentation and fovea detection respectively, based on which the thickness maps of CME, SRF and retina were generated and divided by Early Treatment Diabetic Retinopathy Study (ETDRS) grid. RESULTS: In fluid segmentation, with the best DCNN constructed and loss function, the dice similarity coefficients (DSC) of segmentation reached 0.78 (CME), 0.82 (SRF), and 0.95 (retina). In fovea detection, the average deviation between the predicted fovea and the ground truth reached 145.7±117.8 μm. The generated macular edema thickness maps are able to discover center-involved DME by intuitive morphometry and fluid volume, which is ignored by the traditional definition of CRT>250 μm. Thickness maps could also help to discover fluid above or below the fovea center ignored or underestimated by a single OCT B-scan. CONCLUSION: Compared to the traditional unidimensional indicator-CRT, 3D macular edema thickness maps are able to display more intuitive morphometry and detailed statistics of DME, supporting more accurate diagnoses and follow-up of DME patients.
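The dice similarity coefficient (DSC) quoted in the results compares a predicted segmentation mask against its ground truth. A minimal, dependency-light implementation for binary 2D masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient: 2*|A & B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty -> perfect match

pred = np.zeros((8, 8), dtype=int)
truth = np.zeros((8, 8), dtype=int)
pred[2:6, 2:6] = 1    # 16-pixel predicted fluid region
truth[3:7, 3:7] = 1   # 16-pixel true region, 9 pixels of overlap
score = dice(pred, truth)  # 2 * 9 / (16 + 16) = 0.5625
```

A DSC of 0.78–0.95, as reported above, indicates substantial to near-complete overlap between predicted and annotated fluid regions.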
6. Seifert, Jan, Hendrik von Eysmondt, Madhumita Chatterjee, Meinrad Gawaz, and Tilman E. Schäffer. "Effect of Oxidized LDL on Platelet Shape, Spreading, and Migration Investigated with Deep Learning Platelet Morphometry." Cells 10, no. 11 (October 28, 2021): 2932. http://dx.doi.org/10.3390/cells10112932.

Abstract:
Platelets are functionally versatile blood cells involved in thrombosis, hemostasis, atherosclerosis, and immune response. Platelet interaction with the immediate microenvironment in blood, vasculature, and tissues alters platelet morphology. The quantification of platelet morphodynamics by geometrical parameters (morphometry) can provide important insights into how platelets sense and respond to stimulatory cues in their vicinity. However, the extraction of platelet shapes from phase contrast microscopy images by conventional image processing is difficult. Here, we used a convolutional neural network (CNN) to develop a deep-learning-based approach for the unbiased extraction of information on platelet morphodynamics by phase contrast microscopy. We then investigated the effect of normal and oxidized low-density lipoproteins (LDL, oxLDL) on platelet morphodynamics, spreading, and haptotactic migration. Exposure of platelets to oxLDL led to a decreased spreading area and rate on fibrinogen, accompanied by increased formation of filopodia and impaired formation of lamellipodia. Haptotactic platelet migration was affected by both LDL and oxLDL in terms of decreased migration velocity and reduced directional persistence. Our results demonstrate the use of deep learning in investigating platelet morphodynamics and reveal differential effects of LDL and oxLDL on platelet morphology and platelet–matrix interaction.
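Downstream of CNN-based shape extraction, morphometry reduces each platelet outline to geometrical parameters. A rough sketch of such descriptors for a binary mask (the pixel-edge perimeter estimate and the descriptor choice are simplifications, not the paper's method):

```python
import numpy as np

def shape_descriptors(mask):
    """Basic geometric morphometry of a binary mask: area, perimeter
    (count of exposed pixel edges), and circularity (1.0 for a circle)."""
    mask = mask.astype(bool)
    area = int(mask.sum())
    padded = np.pad(mask, 1)  # zero border so rolls never wrap onto the shape
    perim = 0
    for axis in (0, 1):
        for direction in (1, -1):
            neighbor = np.roll(padded, direction, axis=axis)
            perim += int(np.logical_and(padded, ~neighbor).sum())
    circularity = 4 * np.pi * area / perim ** 2 if perim else 0.0
    return {"area": area, "perimeter": perim, "circularity": circularity}

square = np.zeros((10, 10), dtype=int)
square[2:8, 2:8] = 1                # a 6x6 square "cell"
d = shape_descriptors(square)       # area 36, perimeter 24
```

Tracking such parameters over time for each segmented platelet is what the abstract calls quantifying morphodynamics.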
7. Magness, Alastair, Katey Enfield, Mihaela Angelova, Emma Colliver, Emer Daly, Kristiana Grigoriadis, Claudia Lee, et al. "Abstract 1926: Machine learning-enhanced image and spatial analytic pipelines for imaging mass cytometry applied to the TRACERx non-small cell lung cancer study." Cancer Research 82, no. 12_Supplement (June 15, 2022): 1926. http://dx.doi.org/10.1158/1538-7445.am2022-1926.

Abstract:
Abstract Introduction: High dimensional imaging approaches such as imaging mass cytometry (IMC) are becoming widely used in cancer research. Such methods allow simultaneous observation of many cell types and their functional states and can provide valuable spatial information on cancer disease states when applied to clinical tissue samples. For example, in-situ immune and tumor cell interactions can be interrogated in their spatial context within the tumor microenvironment (TME). Analysis methods for the resultant complex data are not well formalized, and bespoke methods are usually required to fully capitalize on the underlying richness of information made available by IMC. Deep learning [DL] approaches, while highly accurate for other imaging modalities, have been slow to be adopted in IMC, as public resources for deep learning tasks in IMC are not abundant. Methods: We developed multiple DL and ML-based analysis pipelines for the following tasks in IMC data processing: [1] nucleus and necrotic tissue segmentation, [2] quantitative nuclear and cellular morphometry, [3] identification of cell type-specific niches. We applied these protocols to images and derived single cell spatial data from the TRACERx IMC cohort (n=81 non-small cell lung cancer patients, 561 images). Results: [1] We created a 120 image, 46,000+ labelled nucleus segmentation dataset for IMC data with representative images from lung adenocarcinoma, squamous cell carcinoma and other tissues. We achieved state-of-the-art performance in nuclear instance segmentation using a custom U-net++ neural network architecture trained using this dataset, which we benchmarked against traditional image processing methods, as well as publicly available deep learning architectures. Subsequently, we exploited transfer learning to retrain this model on a restricted dataset of labelled necrotic domains, which produced predictions in good agreement with independent pathologist assessment. 
[2] We developed an IMC morphometry pipeline utilizing ML-informed partitions of nuclear and cellular shape descriptors through which we performed cell-type specific morphometric characterization of all mapped cells in the non-small cell lung cancer TME, and which enabled a comparative analysis of the morphometries of each distinct cellular phenotype. [3] We established a high throughput density-based spatial clustering pipeline capable of identifying locally enriched niches of a given cell type of interest, as well as probing the composition and phenotypes of other cells within these niches. Conclusions: These approaches enhanced the quality as well as the breadth of spatial information derivable from TRACERx IMC data. Applying such tools to other clinical and pre-clinical datasets can improve our understanding of the spatial organization of cells both in non-small cell lung cancer and other cancer types. Citation Format: Alastair Magness, Katey Enfield, Mihaela Angelova, Emma Colliver, Emer Daly, Kristiana Grigoriadis, Claudia Lee, Oriol Pich, Philip Hobson, Dina Levi, Takahiro Karasaki, David Moore, Julian Downward, Erik Sahai, Mariam Jamal-Hanjani, Charles Swanton, TRACERx Consortium. Machine learning-enhanced image and spatial analytic pipelines for imaging mass cytometry applied to the TRACERx non-small cell lung cancer study [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 1926.
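Task [3] above — finding locally enriched niches of a given cell type — is a density-based spatial clustering problem. A small sketch with DBSCAN on synthetic cell centroids (the coordinates, `eps`, and `min_samples` values are invented for illustration, not taken from the TRACERx pipeline):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Synthetic cell centroids (in pixels): two dense niches of a cell type of
# interest plus sparsely scattered background cells of the same type.
niche_a = rng.normal([100, 100], 5, size=(40, 2))
niche_b = rng.normal([300, 250], 5, size=(40, 2))
background = rng.uniform(0, 400, size=(20, 2))
coords = np.vstack([niche_a, niche_b, background])

# eps = neighborhood radius; min_samples = minimum local density for a niche.
labels = DBSCAN(eps=15, min_samples=10).fit_predict(coords)
n_niches = len(set(labels) - {-1})  # -1 marks cells outside any niche
```

Once niche labels exist, probing niche composition reduces to grouping the other cells' phenotypes by which niche polygon contains them.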
8. Vyškovský, Roman, Daniel Schwarz, Vendula Churová, and Tomáš Kašpárek. "Structural MRI-Based Schizophrenia Classification Using Autoencoders and 3D Convolutional Neural Networks in Combination with Various Pre-Processing Techniques." Brain Sciences 12, no. 5 (May 9, 2022): 615. http://dx.doi.org/10.3390/brainsci12050615.

Abstract:
Schizophrenia is a severe neuropsychiatric disease whose diagnosis, unfortunately, lacks an objective diagnostic tool supporting a thorough psychiatric examination of the patient. We took advantage of today’s computational abilities, structural magnetic resonance imaging, and modern machine learning methods, such as stacked autoencoders (SAE) and 3D convolutional neural networks (3D CNN), to teach them to classify 52 patients with schizophrenia and 52 healthy controls. The main aim of this study was to explore whether complex feature extraction methods can help improve the accuracy of deep learning-based classifiers compared to minimally preprocessed data. Our experiments employed three commonly used preprocessing steps to extract three different feature types. They included voxel-based morphometry, deformation-based morphometry, and simple spatial normalization of brain tissue. In addition to classifier models, features and their combination, other model parameters such as network depth, number of neurons, number of convolutional filters, and input data size were also investigated. Autoencoders were trained on feature pools of 1000 and 5000 voxels selected by Mann-Whitney tests, and 3D CNNs were trained on whole images. The most successful model architecture (autoencoders) achieved the highest average accuracy of 69.62% (sensitivity 68.85%, specificity 70.38%). The results of all experiments were statistically compared (the Mann-Whitney test). In conclusion, SAE outperformed 3D CNN, while preprocessing using VBM helped SAE improve the results.
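The voxel-selection step described above — ranking voxels with Mann-Whitney tests before feeding the top pool to an autoencoder — can be sketched as follows. The group sizes, effect size, and voxel count are synthetic stand-ins for real VBM/DBM feature maps:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Synthetic "voxel" features: 100 voxels per subject; the first 10 carry a
# hypothetical group difference (a stand-in for real morphometry maps).
patients = rng.normal(0.0, 1.0, size=(30, 100))
patients[:, :10] += 1.5
controls = rng.normal(0.0, 1.0, size=(30, 100))

# Rank voxels by two-sided Mann-Whitney p-value and keep the best pool.
pvals = np.array([
    mannwhitneyu(patients[:, j], controls[:, j],
                 alternative="two-sided").pvalue
    for j in range(100)
])
top10 = np.argsort(pvals)[:10]  # indices of the most discriminative voxels
```

In the study this pool (1000 or 5000 voxels rather than 10) becomes the input layer of the stacked autoencoder.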
9. Toshkhujaev, Saidjalol, Kun Ho Lee, Kyu Yeong Choi, Jang Jae Lee, Goo-Rak Kwon, Yubraj Gupta, and Ramesh Kumar Lama. "Classification of Alzheimer’s Disease and Mild Cognitive Impairment Based on Cortical and Subcortical Features from MRI T1 Brain Images Utilizing Four Different Types of Datasets." Journal of Healthcare Engineering 2020 (September 1, 2020): 1–14. http://dx.doi.org/10.1155/2020/3743171.

Abstract:
Alzheimer’s disease (AD) is one of the most common neurodegenerative illnesses (dementia) among the elderly. Recently, researchers have developed a new method for the instinctive analysis of AD based on machine learning and its subfield, deep learning. Recent state-of-the-art techniques consider multimodal diagnosis, which has been shown to achieve high accuracy compared to a unimodal prognosis. Furthermore, many studies have used structural magnetic resonance imaging (MRI) to measure brain volumes and the volume of subregions, as well as to search for diffuse changes in white/gray matter in the brain. In this study, T1-weighted structural MRI was used for the early classification of AD. MRI results in high-intensity visible features, making preprocessing and segmentation easy. To use this image modality, we acquired four types of datasets from each dataset’s server. In this work, we downloaded 326 subjects from the National Research Center for Dementia homepage, 123 subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) homepage, 121 subjects from the Alzheimer’s Disease Repository Without Borders homepage, and 131 subjects from the National Alzheimer’s Coordinating Center homepage. In our experiment, we used the multiatlas label propagation with expectation–maximization-based refinement segmentation method. We segmented the images into 138 anatomical morphometry images (in which 40 features belonged to subcortical volumes and the remaining 98 features belonged to cortical thickness). The entire dataset was split into a 70 : 30 (training and testing) ratio before classifying the data. A principal component analysis was used for dimensionality reduction. Then, the support vector machine radial basis function classifier was used for classification between two groups—AD versus health control (HC) and early mild cognitive impairment (MCI) (EMCI) versus late MCI (LMCI). The proposed method performed very well for all four types of dataset. 
For instance, for the AD versus HC group, the classifier achieved an area under curve (AUC) of more than 89% for each dataset. For the EMCI versus LMCI group, the classifier achieved an AUC of more than 80% for every dataset. Moreover, we also calculated Cohen kappa and Jaccard index statistical values for all datasets to evaluate the classification reliability. Finally, we compared our results with those of recently published state-of-the-art methods.
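The classification pipeline in this abstract (70:30 split, PCA for dimensionality reduction, then an RBF-kernel SVM) maps directly onto a few lines of scikit-learn. The synthetic 138-feature data below stands in for the MRI-derived morphometric features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 138 morphometric features (98 cortical
# thickness + 40 subcortical volume measurements in the study).
X, y = make_classification(n_samples=300, n_features=138, n_informative=20,
                           class_sep=1.5, random_state=0)

# 70:30 train/test split, PCA, then the RBF-kernel SVM classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30,
                                          random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), PCA(n_components=30),
                      SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```

The number of retained principal components here is an arbitrary choice for the sketch; the study does not report it in this abstract.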
10. Cui, Hailun, Yingying Zhang, Yijie Zhao, Luis Manssuer, Chencheng Zhang, Dianyou Li, Wenjuan Liu, Bomin Sun, and Valerie Voon. "17 Neuromodification of refractory obsessive-compulsive disorder (OCD): evidence from cognitive, structural and functional remodelling of anterior capsulotomy." Journal of Neurology, Neurosurgery & Psychiatry 93, no. 12 (November 14, 2022): e3.9. http://dx.doi.org/10.1136/jnnp-2022-bnpa.17.

Abstract:
Objectives: Converging evidence supports the effectiveness of anterior capsulotomy in the management of selected patients with refractory obsessive-compulsive disorder (OCD). Less is understood about the post-operative neural changes that contribute to clinical improvements, or about impairments that might occur and could potentially be avoided. The objective of this study is to identify the neuropsychological changes underlying the symptomatic relief in capsulotomy using an affective task-based fMRI study, and to discern potential post-operative cognitive changes associated with the surgery itself. Methods: A task-based fMRI study using a modified aversive monetary incentive delay paradigm with negative imagery (separated into two phases: anticipation and feedback) and subsequent expectancy violation was applied to study emotional regulation, with specific focus on the functional changes along the fronto-striatal pathway. Selected tests from the Cambridge Neuropsychological Automated Test Battery (CANTAB) covering attention, memory, and executive function known to be impaired in OCD were administered to assess cognitive function in three groups of age- and gender-matched participants: OCD patients (OCD), OCD capsulotomy patients (at least six months after surgery, CAP), and healthy controls (HC). Voxel-based morphometry (VBM) was applied to detect whole-brain grey matter volume differences between HC, OCD, and CAP, and their association with cognitive deficits. Results: CAP showed lower OCD severity (Yale-Brown Obsessive Compulsive Score) and better activities of daily living and quality of life scores compared to OCD controls, with no differences in depression or anxiety scores. Task-based fMRI revealed decreased nucleus accumbens (NAc) activity during aversive anticipation in CAP, which correlated with the symptomatic ratings after surgery.
Whole-brain analyses during the feedback phase found interaction effects in the left pregenual anterior cingulate cortex (pgACC) and the left inferior frontal cortex (IFG), where CAP showed lower activity towards the aversive outcome and towards the expected but missing aversive image during the expectancy violation phase, with both positively correlated with symptomatic ratings. Attenuated functional connectivity between seed NAc and pgACC was found in CAP as expected during aversive versus neutral anticipation (aversive > neutral), suggesting a surgical impact of capsulotomy through downregulating the functional communication along the fronto-striatal pathway. Cognitive testing focusing on the differences between OCD and CAP, or the potential effect of capsulotomy, showed greater impairments in CAP in paired associative learning (PAL). All other cognitive measures were no different from OCD controls. CAP exhibited an overall reduction of grey matter volume (GMV) in the ventral striatum, thalamus, and left inferior frontal gyrus (IFG) relative to OCD controls. The total PAL error in CAP was associated with lower GMV in the left inferior frontal gyrus. Conclusions: We show improvements in OCD severity, quality of life, and disability post-capsulotomy. Potential impairments in cognitive status were limited to associative learning, correlating with lower left inferior frontal cortex volumes, which may reflect underlying severity or be related to capsulotomy effects. Our findings suggest targeting connectivity between NAc and pgACC, similar to the optimal target for deep brain stimulation in OCD, with post-operative effects modifying aversive processing. In contrast, potentially preserving tracts to the left inferior frontal cortex might be indicated.
Our study contributes to the literature of anterior capsulotomy as an effective and well-tolerated treatment option for selected patients with refractory OCD and highlights overlaps between deep brain stimulation and capsulotomy in optimal targeting.

Dissertations / Theses on the topic "Deep Learning, Morphometry"

1. Le, Van Linh. "Automatic landmarking for 2D biological images : image processing with and without deep learning methods." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0238.

Abstract:
Landmarks appear in applications across different domains, such as biomedicine and biology. They are also a data type used in many analyses: for example, they serve not only to measure the shape of an object but also to determine the similarity between two objects. In biology, landmarks are used to analyze inter-organism variation; however, supplying landmarks is very laborious, and they are most often provided manually. In recent years, several methods have been proposed to automatically predict landmarks, but difficulties remain because these methods focus on specific data. This thesis addresses the automatic determination of landmarks on biological images, more specifically on two-dimensional images of beetles. In our research, we collaborated with biologists to build a dataset comprising images of 293 beetles. For each beetle in this dataset, five images correspond to five parts of interest: head, elytra, pronotum, and left and right mandibles. Along with each image, a set of landmarks was manually proposed by the biologists. In the first step, we took a method previously applied to fly wings and applied it to our dataset, with the aim of testing the suitability of image processing techniques for our problem. Second, we developed a multi-stage method to automatically provide the landmarks on the images. These first two steps were carried out on the mandible images, which are considered straightforward cases for image processing methods. Third, we turned to the remaining, more complex parts of the beetles. For these, we used deep learning: we designed a new convolutional neural network model, named EB-Net, to predict the landmarks on the remaining images. In addition, we proposed a new procedure to augment the number of images in our dataset, whose small size was our main limitation in applying deep learning.
Finally, to improve the quality of the predicted coordinates, we employed transfer learning, another deep learning technique: we trained EB-Net on a public facial keypoint dataset and then fine-tuned it on the beetle images. The obtained results were discussed with the biologists, who confirmed that the quality of the predicted landmarks is statistically good enough to replace manual landmarks for most morphometric analyses.
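As a highly simplified stand-in for the landmark-prediction task (the thesis's EB-Net is a CNN; here a ridge regression on raw pixel intensities illustrates the same image-to-coordinates formulation on synthetic blob images):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def make_sample():
    """A 16x16 noisy image with one 3x3 bright blob; target = blob (row, col)."""
    r, c = rng.integers(3, 13, size=2)
    img = rng.normal(0.0, 0.05, size=(16, 16))
    img[r - 1:r + 2, c - 1:c + 2] += 1.0   # the "landmark" to localize
    return img.ravel(), (r, c)

samples = [make_sample() for _ in range(600)]
X = np.array([s[0] for s in samples])
Y = np.array([s[1] for s in samples], dtype=float)

# Train on 500 images, evaluate mean localization error on the rest.
reg = Ridge(alpha=1.0).fit(X[:500], Y[:500])
err = np.abs(reg.predict(X[500:]) - Y[500:]).mean()  # mean error in pixels
```

Real beetle landmarks lack such an obvious intensity cue, which is precisely why the thesis needed a learned convolutional feature extractor rather than a linear map.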

Book chapters on the topic "Deep Learning, Morphometry"

1. Liu, Chi, Yue Huang, Ligong Han, John A. Ozolek, and Gustavo K. Rohde. "Hierarchical Feature Extraction for Nuclear Morphometry-Based Cancer Diagnosis." In Deep Learning and Data Labeling for Medical Applications, 219–27. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46976-8_23.

2. Schnurr, Alena-Kathrin, Philipp Eisele, Christina Rossmanith, Stefan Hoffmann, Johannes Gregori, Andreas Dabringhaus, Matthias Kraemer, Raimar Kern, Achim Gass, and Frank G. Zöllner. "Deep Voxel-Guided Morphometry (VGM): Learning Regional Brain Changes in Serial MRI." In Machine Learning in Clinical Neuroimaging and Radiogenomics in Neuro-oncology, 159–68. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66843-3_16.

3. Pinho, Marco C., Kaustav Bera, Niha Beig, and Pallavi Tiwari. "MRI Morphometry in Brain Tumors: Challenges and Opportunities in Expert, Radiomic, and Deep-Learning-Based Analyses." In Brain Tumors, 323–68. New York, NY: Springer US, 2020. http://dx.doi.org/10.1007/978-1-0716-0856-2_14.


Conference papers on the topic "Deep Learning, Morphometry"

1. Zhang, Wen, and Yalin Wang. "Deep Multimodal Brain Network Learning for Joint Analysis of Structural Morphometry and Functional Connectivity." In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, 2020. http://dx.doi.org/10.1109/isbi45749.2020.9098624.

2. Zeng, Ling-Li, Christopher R. K. Ching, Zvart Abaryan, Sophia I. Thomopoulos, Kai Gao, Alyssa H. Zhu, Anjanibhargavi Ragothaman, et al. "Deep transfer learning of brain shape morphometry predicts Body Mass Index (BMI) in the UK Biobank." In 16th International Symposium on Medical Information Processing and Analysis, edited by Jorge Brieva, Natasha Lepore, Eduardo Romero Castro, and Marius G. Linguraru. SPIE, 2020. http://dx.doi.org/10.1117/12.2577074.

3. Rawat, Rishi R., Daniel Ruderman, David B. Agus, and Paul Macklin. "Abstract 540: Deep learning to determine breast cancer estrogen receptor status from nuclear morphometric features in H&E images." In Proceedings: AACR Annual Meeting 2017; April 1-5, 2017; Washington, DC. American Association for Cancer Research, 2017. http://dx.doi.org/10.1158/1538-7445.am2017-540.
