Journal articles on the topic 'Deep Learning, Morphometry'

Consult the top 50 journal articles for your research on the topic 'Deep Learning, Morphometry.'

1

Falk, Thorsten, Dominic Mai, Robert Bensch, Özgün Çiçek, Ahmed Abdulkadir, Yassine Marrakchi, Anton Böhm, et al. "U-Net: deep learning for cell counting, detection, and morphometry." Nature Methods 16, no. 1 (December 17, 2018): 67–70. http://dx.doi.org/10.1038/s41592-018-0261-2.

2

Talluri, Aruna Sri, and Sangeeta Gupta. "Gender Prediction Based on Morphometry of Eyes Using Deep Learning Models." ECS Transactions 107, no. 1 (April 24, 2022): 6665–75. http://dx.doi.org/10.1149/10701.6665ecst.

Abstract:
In the modern day, the growth of online social networking websites and social media has led to increasing adoption of computer-aided image recognition systems that automatically recognize and classify human subjects. One familiar example is the anthropometric analysis of the human face, which informs craniofacial plastic and reconstructive surgery. To analyze the impact on facial anthropometrics, it is also essential to consider factors such as age, gender, ethnicity, socioeconomic status, environment, and region. Plastic surgeons rely on the anatomical dimensions of facial structures, derived from an individual's physical appearance, when repairing and reconstructing facial deformities. Gender classification plays an important role in identifying a person as either male or female from biometric images. The main goal is to interact with the system so that gender differences are detected effectively and accurately. Hence, it is essential to select features optimally to achieve better accuracy. Data mining or machine learning techniques can be used to infer properties such as the gender or age of the people involved when analyzing human activities. Towards this end, the proposed work focuses on gender recognition, building a model that scans the eye image of a patient and determines whether the patient is male or female by applying deep learning methods. The work shows that a deep learning network yields better performance for gender classification based on the morphometry of the eyes.
3

Falk, Thorsten, Dominic Mai, Robert Bensch, Özgün Çiçek, Ahmed Abdulkadir, Yassine Marrakchi, Anton Böhm, et al. "Author Correction: U-Net: deep learning for cell counting, detection, and morphometry." Nature Methods 16, no. 4 (February 25, 2019): 351. http://dx.doi.org/10.1038/s41592-019-0356-4.

4

Tiwari, Saumya, Kianoush Falahkheirkhah, Georgina Cheng, and Rohit Bhargava. "Colon Cancer Grading Using Infrared Spectroscopic Imaging-Based Deep Learning." Applied Spectroscopy 76, no. 4 (March 25, 2022): 475–84. http://dx.doi.org/10.1177/00037028221076170.

Abstract:
Tumor grade assessment is critical to the treatment of cancers. A pathologist typically evaluates grade by examining morphologic organization in tissue using hematoxylin and eosin (H&E) stained tissue sections. Fourier transform infrared spectroscopic (FT-IR) imaging provides an alternate view of tissue in which spatially specific molecular information from unstained tissue can be utilized. Here, we examine the potential of IR imaging for grading colon cancer in biopsy samples. We used a 148-patient cohort to develop a deep learning classifier to estimate the tumor grade using IR absorption. We demonstrate that FT-IR imaging can be a viable tool to determine colorectal cancer grades, which we validated on an independent cohort of surgical resections. This work demonstrates that harnessing molecular information from FT-IR imaging and coupling it with morphometry is a potential path to develop clinically relevant grade prediction models.
5

Xu, Jing-Jing, Qi-Jie Wei, Kang Li, Zhen-Ping Li, Tian Yu, Jian-Chun Zhao, Da-Yong Ding, Xi-Rong Li, Guang-Zhi Wang, and Hong Dai. "Three-dimensional diabetic macular edema thickness maps based on fluid segmentation and fovea detection using deep learning." International Journal of Ophthalmology 15, no. 3 (March 18, 2022): 495–501. http://dx.doi.org/10.18240/ijo.2022.03.19.

Abstract:
AIM: To explore a more accurate quantitative diagnostic method for diabetic macular edema (DME) by displaying detailed 3D morphometry beyond the gold-standard quantification indicator, central retinal thickness (CRT), and to apply it in the follow-up of DME patients. METHODS: Optical coherence tomography (OCT) scans of 229 eyes from 160 patients were collected. We manually annotated cystoid macular edema (CME), subretinal fluid (SRF) and the fovea as ground truths. Deep convolutional neural networks (DCNNs) were constructed, including U-Net, sASPP, HRNetV2-W48, and HRNetV2-W48+Object-Contextual Representation (OCR), for fluid (CME+SRF) segmentation and fovea detection respectively, based on which the thickness maps of CME, SRF and retina were generated and divided by the Early Treatment Diabetic Retinopathy Study (ETDRS) grid. RESULTS: In fluid segmentation, with the best-performing DCNN and loss function, the dice similarity coefficients (DSC) of segmentation reached 0.78 (CME), 0.82 (SRF), and 0.95 (retina). In fovea detection, the average deviation between the predicted fovea and the ground truth was 145.7±117.8 μm. The generated macular edema thickness maps are able to discover center-involved DME through intuitive morphometry and fluid volume, which is missed by the traditional definition of CRT>250 μm. Thickness maps could also help to discover fluid above or below the fovea center that is ignored or underestimated by a single OCT B-scan. CONCLUSION: Compared to the traditional unidimensional indicator, CRT, 3D macular edema thickness maps are able to display more intuitive morphometry and detailed statistics of DME, supporting more accurate diagnoses and follow-up of DME patients.
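The dice similarity coefficient (DSC) reported in this abstract is the standard overlap metric for comparing a predicted segmentation against a manual annotation. A minimal illustrative sketch of how it is computed from two binary masks (a generic example, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# toy example: two 4x4 masks whose foregrounds overlap in 2 of 3 pixels each
a = np.zeros((4, 4)); a[0, :3] = 1
b = np.zeros((4, 4)); b[0, 1:4] = 1
print(round(dice_coefficient(a, b), 3))  # 2*2 / (3+3) ≈ 0.667
```

A DSC of 1.0 means perfect overlap and 0.0 means none, which is why the 0.95 reported for the retina layer indicates near-perfect agreement with the annotation.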
6

Seifert, Jan, Hendrik von Eysmondt, Madhumita Chatterjee, Meinrad Gawaz, and Tilman E. Schäffer. "Effect of Oxidized LDL on Platelet Shape, Spreading, and Migration Investigated with Deep Learning Platelet Morphometry." Cells 10, no. 11 (October 28, 2021): 2932. http://dx.doi.org/10.3390/cells10112932.

Abstract:
Platelets are functionally versatile blood cells involved in thrombosis, hemostasis, atherosclerosis, and immune response. Platelet interaction with the immediate microenvironment in blood, vasculature, and tissues alters platelet morphology. The quantification of platelet morphodynamics by geometrical parameters (morphometry) can provide important insights into how platelets sense and respond to stimulatory cues in their vicinity. However, the extraction of platelet shapes from phase contrast microscopy images by conventional image processing is difficult. Here, we used a convolutional neural network (CNN) to develop a deep-learning-based approach for the unbiased extraction of information on platelet morphodynamics by phase contrast microscopy. We then investigated the effect of normal and oxidized low-density lipoproteins (LDL, oxLDL) on platelet morphodynamics, spreading, and haptotactic migration. Exposure of platelets to oxLDL led to a decreased spreading area and rate on fibrinogen, accompanied by increased formation of filopodia and impaired formation of lamellipodia. Haptotactic platelet migration was affected by both LDL and oxLDL in terms of decreased migration velocity and reduced directional persistence. Our results demonstrate the use of deep learning in investigating platelet morphodynamics and reveal differential effects of LDL and oxLDL on platelet morphology and platelet–matrix interaction.
7

Magness, Alastair, Katey Enfield, Mihaela Angelova, Emma Colliver, Emer Daly, Kristiana Grigoriadis, Claudia Lee, et al. "Abstract 1926: Machine learning-enhanced image and spatial analytic pipelines for imaging mass cytometry applied to the TRACERx non-small cell lung cancer study." Cancer Research 82, no. 12_Supplement (June 15, 2022): 1926. http://dx.doi.org/10.1158/1538-7445.am2022-1926.

Abstract:
Introduction: High dimensional imaging approaches such as imaging mass cytometry (IMC) are becoming widely used in cancer research. Such methods allow simultaneous observation of many cell types and their functional states and can provide valuable spatial information on cancer disease states when applied to clinical tissue samples. For example, in-situ immune and tumor cell interactions can be interrogated in their spatial context within the tumor microenvironment (TME). Analysis methods for the resultant complex data are not well formalized, and bespoke methods are usually required to fully capitalize on the underlying richness of information made available by IMC. Deep learning (DL) approaches, while highly accurate for other imaging modalities, have been slow to be adopted in IMC, as public resources for deep learning tasks in IMC are not abundant. Methods: We developed multiple DL and ML-based analysis pipelines for the following tasks in IMC data processing: [1] nucleus and necrotic tissue segmentation, [2] quantitative nuclear and cellular morphometry, [3] identification of cell type-specific niches. We applied these protocols to images and derived single cell spatial data from the TRACERx IMC cohort (n=81 non-small cell lung cancer patients, 561 images). Results: [1] We created a 120 image, 46,000+ labelled nucleus segmentation dataset for IMC data with representative images from lung adenocarcinoma, squamous cell carcinoma and other tissues. We achieved state-of-the-art performance in nuclear instance segmentation using a custom U-Net++ neural network architecture trained using this dataset, which we benchmarked against traditional image processing methods, as well as publicly available deep learning architectures. Subsequently, we exploited transfer learning to retrain this model on a restricted dataset of labelled necrotic domains, which produced predictions in good agreement with independent pathologist assessment.
[2] We developed an IMC morphometry pipeline utilizing ML-informed partitions of nuclear and cellular shape descriptors through which we performed cell-type specific morphometric characterization of all mapped cells in the non-small cell lung cancer TME, and which enabled a comparative analysis of the morphometries of each distinct cellular phenotype. [3] We established a high throughput density-based spatial clustering pipeline capable of identifying locally enriched niches of a given cell type of interest, as well as probing the composition and phenotypes of other cells within these niches. Conclusions: These approaches enhanced the quality as well as the breadth of spatial information derivable from TRACERx IMC data. Applying such tools to other clinical and pre-clinical datasets can improve our understanding of the spatial organization of cells both in non-small cell lung cancer and other cancer types. Citation Format: Alastair Magness, Katey Enfield, Mihaela Angelova, Emma Colliver, Emer Daly, Kristiana Grigoriadis, Claudia Lee, Oriol Pich, Philip Hobson, Dina Levi, Takahiro Karasaki, David Moore, Julian Downward, Erik Sahai, Mariam Jamal-Hanjani, Charles Swanton, TRACERx Consortium. Machine learning-enhanced image and spatial analytic pipelines for imaging mass cytometry applied to the TRACERx non-small cell lung cancer study [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 1926.
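The density-based spatial clustering pipeline for niche identification described above can be illustrated with a generic sketch using scikit-learn's DBSCAN on synthetic cell centroids; the coordinates and the `eps`/`min_samples` values here are arbitrary stand-ins, not the TRACERx pipeline's parameters:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# toy single-cell centroids: one dense niche of a cell type plus scattered background
rng = np.random.default_rng(3)
niche = rng.normal([50.0, 50.0], 2.0, (30, 2))   # locally enriched cluster
background = rng.uniform(0.0, 200.0, (40, 2))    # sparse cells elsewhere
coords = np.vstack([niche, background])

# density-based clustering: a point with >= min_samples neighbours within eps
# seeds a niche; isolated cells are labelled -1 (noise)
labels = DBSCAN(eps=8.0, min_samples=10).fit_predict(coords)
n_niches = len(set(labels)) - (1 if -1 in labels else 0)
print(f"niches found: {n_niches}")
```

Once niches are labelled, the composition and phenotypes of other cells falling inside each niche can be tabulated by intersecting the cluster labels with per-cell phenotype annotations.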
8

Vyškovský, Roman, Daniel Schwarz, Vendula Churová, and Tomáš Kašpárek. "Structural MRI-Based Schizophrenia Classification Using Autoencoders and 3D Convolutional Neural Networks in Combination with Various Pre-Processing Techniques." Brain Sciences 12, no. 5 (May 9, 2022): 615. http://dx.doi.org/10.3390/brainsci12050615.

Abstract:
Schizophrenia is a severe neuropsychiatric disease whose diagnosis, unfortunately, lacks an objective diagnostic tool supporting a thorough psychiatric examination of the patient. We took advantage of today’s computational abilities, structural magnetic resonance imaging, and modern machine learning methods, such as stacked autoencoders (SAE) and 3D convolutional neural networks (3D CNN), to teach them to classify 52 patients with schizophrenia and 52 healthy controls. The main aim of this study was to explore whether complex feature extraction methods can help improve the accuracy of deep learning-based classifiers compared to minimally preprocessed data. Our experiments employed three commonly used preprocessing steps to extract three different feature types. They included voxel-based morphometry, deformation-based morphometry, and simple spatial normalization of brain tissue. In addition to classifier models, features and their combination, other model parameters such as network depth, number of neurons, number of convolutional filters, and input data size were also investigated. Autoencoders were trained on feature pools of 1000 and 5000 voxels selected by Mann-Whitney tests, and 3D CNNs were trained on whole images. The most successful model architecture (autoencoders) achieved the highest average accuracy of 69.62% (sensitivity 68.85%, specificity 70.38%). The results of all experiments were statistically compared (the Mann-Whitney test). In conclusion, SAE outperformed 3D CNN, while preprocessing using VBM helped SAE improve the results.
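The voxel-selection step this abstract describes (feature pools of the most discriminative voxels chosen by Mann-Whitney tests) can be sketched generically; the synthetic data and the injected group effect below are illustrative, not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def select_voxels(patients: np.ndarray, controls: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k voxels with the smallest Mann-Whitney p-values,
    i.e. the strongest patient-vs-control group separation."""
    pvals = np.array([
        mannwhitneyu(patients[:, v], controls[:, v]).pvalue
        for v in range(patients.shape[1])
    ])
    return np.argsort(pvals)[:k]

# toy data: 20 subjects per group, 50 "voxels"; voxel 7 carries a group difference
rng = np.random.default_rng(0)
pat = rng.normal(0.0, 1.0, (20, 50))
ctl = rng.normal(0.0, 1.0, (20, 50))
pat[:, 7] += 2.0  # injected group effect
selected = select_voxels(pat, ctl, k=5)
print(7 in selected)  # the informative voxel should rank among the top 5
```

In the study this filtering reduced whole-brain feature maps to pools of 1000 or 5000 voxels before autoencoder training; the sketch shows only the ranking principle.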
9

Toshkhujaev, Saidjalol, Kun Ho Lee, Kyu Yeong Choi, Jang Jae Lee, Goo-Rak Kwon, Yubraj Gupta, and Ramesh Kumar Lama. "Classification of Alzheimer’s Disease and Mild Cognitive Impairment Based on Cortical and Subcortical Features from MRI T1 Brain Images Utilizing Four Different Types of Datasets." Journal of Healthcare Engineering 2020 (September 1, 2020): 1–14. http://dx.doi.org/10.1155/2020/3743171.

Abstract:
Alzheimer’s disease (AD) is one of the most common neurodegenerative illnesses (dementias) among the elderly. Recently, researchers have developed methods for the automatic analysis of AD based on machine learning and its subfield, deep learning. Recent state-of-the-art techniques consider multimodal diagnosis, which has been shown to achieve high accuracy compared to a unimodal diagnosis. Furthermore, many studies have used structural magnetic resonance imaging (MRI) to measure brain volumes and the volumes of subregions, as well as to search for diffuse changes in white/gray matter in the brain. In this study, T1-weighted structural MRI was used for the early classification of AD. MRI yields high-intensity visible features, making preprocessing and segmentation easy. To use this image modality, we acquired four types of datasets from each dataset’s server. In this work, we downloaded 326 subjects from the National Research Center for Dementia homepage, 123 subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) homepage, 121 subjects from the Alzheimer’s Disease Repository Without Borders homepage, and 131 subjects from the National Alzheimer’s Coordinating Center homepage. In our experiment, we used the multiatlas label propagation with expectation–maximization-based refinement segmentation method. We segmented the images into 138 anatomical morphometry measures (of which 40 features belonged to subcortical volumes and the remaining 98 features belonged to cortical thickness). The entire dataset was split into a 70:30 (training:testing) ratio before classifying the data. A principal component analysis was used for dimensionality reduction. Then, the support vector machine radial basis function classifier was used for classification between two groups: AD versus healthy control (HC), and early mild cognitive impairment (EMCI) versus late mild cognitive impairment (LMCI). The proposed method performed very well for all four datasets. For instance, for the AD versus HC group, the classifier achieved an area under the curve (AUC) of more than 89% for each dataset. For the EMCI versus LMCI group, the classifier achieved an AUC of more than 80% for every dataset. Moreover, we also calculated Cohen's kappa and Jaccard index statistics for all datasets to evaluate classification reliability. Finally, we compared our results with those of recently published state-of-the-art methods.
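The classification pipeline this abstract describes (a 70:30 split, PCA for dimensionality reduction, then an RBF-kernel SVM) is straightforward to sketch with scikit-learn. The synthetic features below merely stand in for the 138 morphometric features and do not reproduce the study's data or results:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for the 138 anatomical morphometry features per subject
rng = np.random.default_rng(42)
n_per_group = 60
ad = rng.normal(0.0, 1.0, (n_per_group, 138)); ad[:, :10] += 1.5  # injected group signal
hc = rng.normal(0.0, 1.0, (n_per_group, 138))
X = np.vstack([ad, hc])
y = np.array([1] * n_per_group + [0] * n_per_group)

# 70:30 split, PCA for dimensionality reduction, RBF-kernel SVM for classification
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

Standardizing before PCA is a common default; the actual number of retained components in the study is not specified here, so 20 is an arbitrary choice.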
10

Cui, Hailun, Yingying Zhang, Yijie Zhao, Luis Manssuer, Chencheng Zhang, Dianyou Li, Wenjuan Liu, Bomin Sun, and Valerie Voon. "17 Neuromodification of refractory obsessive-compulsive disorder (OCD): evidence from cognitive, structural and functional remodelling of anterior capsulotomy." Journal of Neurology, Neurosurgery & Psychiatry 93, no. 12 (November 14, 2022): e3.9. http://dx.doi.org/10.1136/jnnp-2022-bnpa.17.

Abstract:
Objectives: Converging evidence supports the effectiveness of anterior capsulotomy in the management of selected patients with refractory obsessive-compulsive disorder (OCD). Less is understood about the post-operative neural changes that contribute to clinical improvements, or about impairments that might occur and could potentially be avoided. The objective of this study is to identify the neuropsychological changes underlying the symptomatic relief of capsulotomy using an affective task-based fMRI study, and to discern potential post-operative cognitive changes associated with the surgery itself. Methods: A task-based fMRI study using a modified aversive monetary incentive delay paradigm with negative imagery (separated into two phases: anticipation and feedback) and subsequent expectancy violation was applied to study emotional regulation, with specific focus on the functional changes along the fronto-striatal pathway. Selected tests from the Cambridge Neuropsychological Automated Test Battery (CANTAB) covering attention, memory and executive function known to be impaired in OCD were administered to assess cognitive function in three groups of age- and gender-matched participants: OCD patients (OCD), OCD capsulotomy patients (at least six months after surgery, CAP), and healthy controls (HC). Voxel-based morphometry (VBM) was applied to detect whole-brain grey matter volume differences between HC, OCD and CAP, and their association with cognitive deficits. Results: CAP showed lower OCD severity (Yale-Brown Obsessive Compulsive Score) and better activities of daily living and quality of life scores compared to OCD controls, with no differences in depression or anxiety scores. Task-based fMRI revealed decreased nucleus accumbens (NAc) activity during aversive anticipation in CAP, which correlated with the symptomatic ratings after surgery.
Whole-brain analyses during the feedback phase found interaction effects in the left pregenual anterior cingulate cortex (pgACC) and the left inferior frontal cortex (IFG), where CAP showed lower activity towards the aversive outcome and towards the expected but missing aversive image during the expectancy violation phase, with both positively correlated with symptomatic ratings. Attenuated functional connectivity between seed NAc and pgACC was found in CAP as expected during aversive versus neutral anticipation (aversive>neutral), suggesting a surgical impact of capsulotomy through downregulating functional communication along the fronto-striatal pathway. Cognitive testing focusing on the differences between OCD and CAP, or the potential effect of capsulotomy, showed greater impairment in CAP in paired associative learning (PAL). All other cognitive measures were no different from OCD controls. CAP exhibited an overall reduction of grey matter volume (GMV) in the ventral striatum, thalamus and left inferior frontal gyrus (IFG) relative to OCD controls. The total PAL error in CAP was associated with lower GMV in the left inferior frontal gyrus. Conclusions: We show improvements in OCD severity, quality of life and disability post-capsulotomy. Potential impairments in cognitive status were limited to associative learning, correlating with lower left inferior frontal cortex volumes, which may reflect underlying severity or be related to capsulotomy effects. Our findings suggest targeting connectivity between NAc and pgACC, similar to the optimal target for deep brain stimulation in OCD, with post-operative effects modifying aversive processing. In contrast, potentially preserving tracts to the left inferior frontal cortex might be indicated.
Our study contributes to the literature of anterior capsulotomy as an effective and well-tolerated treatment option for selected patients with refractory OCD and highlights overlaps between deep brain stimulation and capsulotomy in optimal targeting.
11

Moiseev, Daniel, Bo Hu, and Jun Li. "Morphometric analysis of peripheral myelinated nerve fibers through deep learning." Journal of the Peripheral Nervous System 24, no. 1 (December 11, 2018): 87–93. http://dx.doi.org/10.1111/jns.12293.

12

Sengupta, Duhita, Sk Nishan Ali, Aditya Bhattacharya, Joy Mustafi, Asima Mukhopadhyay, and Kaushik Sengupta. "A deep hybrid learning pipeline for accurate diagnosis of ovarian cancer based on nuclear morphology." PLOS ONE 17, no. 1 (January 7, 2022): e0261181. http://dx.doi.org/10.1371/journal.pone.0261181.

Abstract:
Nuclear morphological features are potent determining factors in the clinical diagnostic approaches adopted by pathologists to analyze the malignant potential of cancer cells. Considering the structural alteration of the nucleus in cancer cells, various groups have developed machine learning techniques based on variation in nuclear morphometric information such as nuclear shape, size, and nucleus-to-cytoplasm ratio; non-parametric methods such as deep learning have also been tested for analyzing immunohistochemistry images of tissue samples to diagnose various cancers. We aim to combine the morphometric features of the nucleus and the distribution of nuclear lamin proteins with classical machine learning to differentiate between normal and ovarian cancer tissues. It has already been elucidated that in ovarian cancer, the extent of alteration in nuclear shape and morphology can modulate genetic changes and thus can be utilized to predict the outcome of low- to high-grade serous carcinoma. In this work, we have performed exhaustive imaging of ovarian cancer versus normal tissue and developed a dual-pipeline architecture that combines matrices of morphometric parameters with deep learning techniques for automatic feature extraction from pre-processed images. This novel Deep Hybrid Learning model, though derived from classical machine learning algorithms and a standard CNN, showed a training and validation AUC score of 0.99, while the test AUC score was 1.00. The improved feature engineering enabled us to differentiate successfully between cancerous and non-cancerous samples in this pilot study.
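Nuclear morphometric parameters of the kind combined with deep features in this work can be illustrated with a toy descriptor function over a binary nucleus mask; the specific descriptors below (area, equivalent diameter, bounding-box aspect ratio) are generic examples, not the authors' feature set:

```python
import numpy as np

def nuclear_morphometrics(mask: np.ndarray) -> dict:
    """Simple shape descriptors for one segmented nucleus given as a binary mask."""
    ys, xs = np.nonzero(mask)
    area = len(ys)  # pixel count of the nucleus
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return {
        "area": area,
        # diameter of the circle with the same area as the nucleus
        "equiv_diameter": 2.0 * np.sqrt(area / np.pi),
        # elongation proxy: long side over short side of the bounding box
        "aspect_ratio": max(height, width) / min(height, width),
    }

# toy nucleus: a 3x5 rectangular blob in a 10x10 image
mask = np.zeros((10, 10), dtype=int)
mask[2:5, 1:6] = 1
feats = nuclear_morphometrics(mask)
print(feats["area"], round(feats["aspect_ratio"], 2))  # 15 1.67
```

In a real pipeline such per-nucleus rows would be aggregated over all nuclei in an image and fed, alongside CNN-extracted features, into the downstream classifier.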
13

Tan, Hui Yuan, Zhi Yun Goh, Kar-Hoe Loh, Amy Yee-Hui Then, Hasmahzaiti Omar, and Siow-Wee Chang. "Cephalopod species identification using integrated analysis of machine learning and deep learning approaches." PeerJ 9 (August 9, 2021): e11825. http://dx.doi.org/10.7717/peerj.11825.

Abstract:
Background: Despite their high commercial fisheries value and ecological importance as prey items for higher marine predators, very limited taxonomic work has been done on cephalopods in Malaysia. Due to the soft-bodied nature of cephalopods, identification of cephalopod species based on the beak hard parts can be more reliable and useful than conventional body morphology. Since the traditional method of species classification is time-consuming, this study aimed to develop an automated identification model that can identify cephalopod species based on beak images. Methods: A total of 174 samples of seven cephalopod species were collected from the west coast of Peninsular Malaysia. Both upper and lower beaks were extracted from the samples and the left lateral views of upper and lower beak images were acquired. Three types of traditional morphometric features were extracted, namely grey histogram of oriented gradients (HOG), colour HOG, and a morphological shape descriptor (MSD). In addition, deep features were extracted using three pre-trained convolutional neural network (CNN) models: VGG19, InceptionV3, and ResNet50. Eight machine learning approaches were used in the classification step and compared for model performance. Results: The Artificial Neural Network (ANN) model achieved the best testing accuracy of 91.14%, using the deep features extracted from the VGG19 model from lower beak images. The results indicated that the deep features were more accurate than the traditional features in highlighting morphometric differences between the beak images of cephalopod species. In addition, the use of lower beaks of cephalopod species provided better results compared to the upper beaks, suggesting that the lower beaks possess more significant morphological differences between the studied cephalopod species. Future work should include more cephalopod species and larger sample sizes to enhance the identification accuracy and comprehensiveness of the developed model.
14

Bom, C. R., A. Cortesi, G. Lucatelli, L. O. Dias, P. Schubert, G. B. Oliveira Schwarz, N. M. Cardoso, et al. "Deep Learning assessment of galaxy morphology in S-PLUS Data Release 1." Monthly Notices of the Royal Astronomical Society 507, no. 2 (July 26, 2021): 1937–55. http://dx.doi.org/10.1093/mnras/stab1981.

Abstract:
The morphological diversity of galaxies is a relevant probe of galaxy evolution and cosmological structure formation, but the classification of galaxies in large sky surveys is becoming a significant challenge. We use data from the Stripe-82 area observed by the Southern Photometric Local Universe Survey (S-PLUS) in 12 optical bands, and present a catalogue of the morphologies of galaxies brighter than r = 17 mag determined both using a novel multiband morphometric fitting technique and Convolutional Neural Networks (CNNs) for computer vision. Using the CNNs, we find that, compared to our baseline results with three bands, the performance increases when using 5 broad and 3 narrow bands, but is poorer when using the full 12 band S-PLUS image set. However, the best result is still achieved with just three optical bands when using pre-trained network weights from an ImageNet data set. These results demonstrate the importance of using prior knowledge about neural network weights based on training in unrelated, extensive data sets, when available. Our catalogue contains 3274 galaxies in Stripe-82 that are not present in Galaxy Zoo 1 (GZ1), and we also provide our classifications for 4686 galaxies that were considered ambiguous in GZ1. Finally, we present a prospect of a novel way to take advantage of 12 band information for morphological classification using morphometric features, and we release a model that has been pre-trained on several bands that could be adapted for classifications using data from other surveys. The morphological catalogues are publicly available.
15

Courtenay, Lloyd A., Rosa Huguet, Diego González-Aguilera, and José Yravedra. "A Hybrid Geometric Morphometric Deep Learning Approach for Cut and Trampling Mark Classification." Applied Sciences 10, no. 1 (December 23, 2019): 150. http://dx.doi.org/10.3390/app10010150.

Abstract:
The concept of equifinality is currently one of the largest issues in taphonomy, frequently leading analysts to erroneously interpret the formation and functionality of archaeological and paleontological sites. An example of this equifinality can be found in the differentiation between anthropic cut marks and other traces on bone produced by natural agents, such as that of sedimentary abrasion and trampling. These issues are a key component in the understanding of early human evolution, yet frequently rely on qualitative features for their identification. Unfortunately, qualitative data is commonly susceptible to subjectivity, producing insecurity in research through analyst experience. The present study intends to confront these issues through a hybrid methodological approach. Here, we combine Geometric Morphometric data, 3D digital microscopy, and Deep Learning Neural Networks to provide a means of empirically classifying taphonomic traces on bone. Results obtained are able to reach over 95% classification, providing a possible means of overcoming taphonomic equifinality in the archaeological and paleontological register.
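Geometric morphometric data of the kind used in this study are typically landmark coordinates brought into a common frame by Procrustes superimposition (removing translation, rotation, and scale) before being fed to a classifier. An illustrative sketch with SciPy, not the authors' pipeline:

```python
import numpy as np
from scipy.spatial import procrustes

# two toy 2D landmark configurations: the second is the first rotated,
# scaled, and translated, so Procrustes alignment should remove the difference
landmarks_a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0], [0.0, 2.0]])
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
landmarks_b = 2.5 * landmarks_a @ rot.T + np.array([3.0, -1.0])

# procrustes standardizes both configurations and returns the residual disparity
mtx1, mtx2, disparity = procrustes(landmarks_a, landmarks_b)
print(f"disparity after superimposition: {disparity:.6f}")  # ~0 for a pure similarity transform
```

After superimposition, only genuine shape differences remain in the aligned coordinates, which is what makes them usable as input features for a neural network classifier of mark types.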
16

Aaraji, Zahraa S., and Hawraa H. Abbas. "Automatic Diagnosis of Alzheimer’s Disease Using Deep Learning Techniques." NeuroQuantology 19, no. 11 (December 11, 2021): 126–40. http://dx.doi.org/10.14704/nq.2021.19.11.nq21183.

Abstract:
Neuroimaging data analysis has attracted a great deal of attention with respect to the accurate diagnosis of Alzheimer’s disease (AD). Magnetic Resonance Imaging (MRI) scanners have thus been commonly used to study AD-related brain structural variations, providing images that demonstrate both morphometric and anatomical changes in the human brain. Deep learning algorithms have already been effectively exploited in other medical image processing applications to identify features and recognise patterns for many diseases that affect the brain and other organs; this paper extends on this to describe a novel computer aided software pipeline for the classification and early diagnosis of AD. The proposed method uses two types of three-dimensional Convolutional Neural Networks (3D CNN) to facilitate brain MRI data analysis and automatic feature extraction and classification, so that pre-processing and post-processing are utilised to normalise the MRI data and facilitate pattern recognition. The experimental results show that the proposed approach achieves 97.5%, 82.5%, and 83.75% accuracy in terms of binary classification AD vs. cognitively normal (CN), CN vs. mild cognitive impairment (MCI) and MCI vs. AD, respectively, as well as 85% accuracy for multi class-classification, based on publicly available data sets from the Alzheimer’s disease Neuroimaging Initiative (ADNI).
17

Vaickus, Louis J., Arief A. Suriawinata, Jason W. Wei, and Xiaoying Liu. "Automating the Paris System for urine cytopathology—A hybrid deep‐learning and morphometric approach." Cancer Cytopathology 127, no. 2 (January 31, 2019): 98–115. http://dx.doi.org/10.1002/cncy.22099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Ruchay, Alexey, Vitaly Kober, Konstantin Dorofeev, Vladimir Kolpakov, Alexey Gladkov, and Hao Guo. "Live Weight Prediction of Cattle Based on Deep Regression of RGB-D Images." Agriculture 12, no. 11 (October 28, 2022): 1794. http://dx.doi.org/10.3390/agriculture12111794.

Full text
Abstract:
Predicting the live weight of cattle helps us monitor the health of animals, conduct genetic selection, and determine the optimal timing of slaughter. On large farms, accurate and expensive industrial scales are used to measure live weight. However, a promising alternative is to estimate live weight using morphometric measurements of livestock and then apply regression equations relating such measurements to live weight. Manual measurements on animals using a tape measure are time-consuming and stressful for the animals. Therefore, computer vision technologies are now increasingly used for non-contact morphometric measurements. The paper proposes a new model for predicting live weight based on augmenting three-dimensional clouds in the form of flat projections and image regression with deep learning. It is shown that on real datasets, the accuracy of weight measurement using the proposed model reaches 91.6%. We also discuss the potential applicability of the proposed approach to animal husbandry.
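The flat-projection idea (rendering a 3D point cloud as a 2D image that an image-regression network can consume) can be sketched as follows; the grid size, bounds, and max-height aggregation are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def depth_projection(points, grid=(4, 4), bounds=((0.0, 1.0), (0.0, 1.0))):
    """Project a 3D point cloud to a top-view depth image.

    Each (x, y, z) point is binned into a 2D grid over (x, y); the cell
    value is the maximum height z seen in that cell (0 where empty).
    """
    img = np.zeros(grid)
    (x0, x1), (y0, y1) = bounds
    for x, y, z in points:
        i = min(int((y - y0) / (y1 - y0) * grid[0]), grid[0] - 1)
        j = min(int((x - x0) / (x1 - x0) * grid[1]), grid[1] - 1)
        img[i, j] = max(img[i, j], z)
    return img

cloud = np.array([[0.1, 0.1, 0.5],
                  [0.1, 0.1, 0.8],   # same cell, higher point wins
                  [0.9, 0.9, 0.3]])
img = depth_projection(cloud)
```

The resulting 2D image can then be fed to any standard image-regression model predicting live weight.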
APA, Harvard, Vancouver, ISO, and other styles
19

Yu, Wei-Hsiang, Chih-Hao Li, Ren-Ching Wang, Chao-Yuan Yeh, and Shih-Sung Chuang. "Machine Learning Based on Morphological Features Enables Classification of Primary Intestinal T-Cell Lymphomas." Cancers 13, no. 21 (October 30, 2021): 5463. http://dx.doi.org/10.3390/cancers13215463.

Full text
Abstract:
The aim of this study was to investigate the feasibility of using machine learning techniques based on morphological features in classifying two subtypes of primary intestinal T-cell lymphomas (PITLs) defined according to the WHO criteria: monomorphic epitheliotropic intestinal T-cell lymphoma (MEITL) versus intestinal T-cell lymphoma, not otherwise specified (ITCL-NOS), which is considered a major challenge for pathological diagnosis. A total of 40 histopathological whole-slide images (WSIs) from 40 surgically resected PITL cases were used as the dataset for model training and testing. A deep neural network was trained to detect and segment the nuclei of lymphocytes. Quantitative nuclear morphometrics were further computed from these predicted contours. A decision-tree-based machine learning algorithm, XGBoost, was then trained to classify PITL cases into two disease subtypes using these nuclear morphometric features. The deep neural network achieved an average precision of 0.881 in the cell segmentation work. In terms of classifying MEITL versus ITCL-NOS, the XGBoost model achieved an area under receiver operating characteristic curve (AUC) of 0.966. Our research demonstrated an accurate, human-interpretable approach to using machine learning algorithms for reducing the high dimensionality of image features and classifying T cell lymphomas that present challenges in morphologic diagnosis. The quantitative nuclear morphometric features may lead to further discoveries concerning the relationship between cellular phenotype and disease status.
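The nuclear morphometric features computed from predicted contours are classic shape descriptors. A minimal sketch of three such descriptors (area via the shoelace formula, perimeter, and circularity); the feature set actually used by the authors is richer than this:

```python
import numpy as np

def nuclear_morphometrics(contour):
    """Compute simple shape features from a closed nuclear contour.

    contour: (N, 2) array of (x, y) vertices in order.
    Returns area (shoelace formula), perimeter, and circularity
    (4*pi*area / perimeter^2; equals 1.0 for a perfect circle).
    """
    x, y = contour[:, 0], contour[:, 1]
    # Shoelace formula for the polygon area.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Sum of edge lengths, closing the polygon back to its first vertex.
    edges = np.diff(np.vstack([contour, contour[:1]]), axis=0)
    perimeter = np.linalg.norm(edges, axis=1).sum()
    circularity = 4.0 * np.pi * area / perimeter ** 2
    return area, perimeter, circularity

# A unit square: area 1, perimeter 4, circularity pi/4.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
area, perim, circ = nuclear_morphometrics(square)
```

Feature vectors of this kind, one per segmented nucleus, are what a tree-based classifier such as XGBoost consumes downstream.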
APA, Harvard, Vancouver, ISO, and other styles
20

Vaickus, Louis, and Xiaoying Liu. "Automating The Paris System for Urine Reporting Cytopathology: A Hybrid Morphometric and Deep Learning Approach." Journal of the American Society of Cytopathology 7, no. 5 (September 2018): S84. http://dx.doi.org/10.1016/j.jasc.2018.06.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Bouteldja, Nassim, Barbara M. Klinkhammer, Roman D. Bülow, Patrick Droste, Simon W. Otten, Saskia Freifrau von Stillfried, Julia Moellmann, et al. "Deep Learning–Based Segmentation and Quantification in Experimental Kidney Histopathology." Journal of the American Society of Nephrology 32, no. 1 (November 5, 2020): 52–68. http://dx.doi.org/10.1681/asn.2020050597.

Full text
Abstract:
Background: Nephropathologic analyses provide important outcomes-related data in experiments with the animal models that are essential for understanding kidney disease pathophysiology. Precision medicine increases the demand for quantitative, unbiased, reproducible, and efficient histopathologic analyses, which will require novel high-throughput tools. A deep learning technique, the convolutional neural network, is increasingly applied in pathology because of its high performance in tasks like histology segmentation. Methods: We investigated use of a convolutional neural network architecture for accurate segmentation of periodic acid–Schiff-stained kidney tissue from healthy mice and five murine disease models and from other species used in preclinical research. We trained the convolutional neural network to segment six major renal structures: glomerular tuft, glomerulus including Bowman’s capsule, tubules, arteries, arterial lumina, and veins. To achieve high accuracy, we performed a large number of expert-based annotations, 72,722 in total. Results: Multiclass segmentation performance was very high in all disease models. The convolutional neural network allowed high-throughput and large-scale, quantitative and comparative analyses of various models. In disease models, computational feature extraction revealed interstitial expansion, tubular dilation and atrophy, and glomerular size variability. Validation showed a high correlation of findings with current standard morphometric analysis. The convolutional neural network also showed high performance in other species used in research—including rats, pigs, bears, and marmosets—as well as in humans, providing a translational bridge between preclinical and clinical studies. Conclusions: We developed a deep learning algorithm for accurate multiclass segmentation of digital whole-slide images of periodic acid–Schiff-stained kidneys from various species and renal disease models. This enables reproducible quantitative histopathologic analyses in preclinical models that also might be applicable to clinical studies.
APA, Harvard, Vancouver, ISO, and other styles
22

Hurlbut, Thomas, and Douglas Clay. "Morphometric and meristic differences between shallow- and deep-water populations of white hake (Urophycis tenuis) in the southern Gulf of St. Lawrence." Canadian Journal of Fisheries and Aquatic Sciences 55, no. 10 (October 1, 1998): 2274–82. http://dx.doi.org/10.1139/f98-110.

Full text
Abstract:
Linear discriminant function analysis of morphometric and meristic characters was used to assess the extent of differentiation between shallow- (<100 m) and deep- (>200 m) water populations of white hake (Urophycis tenuis) from the southern Gulf of St. Lawrence (NAFO Division 4T). Although meristic characters provided some evidence for stock separation, the best statistical separation was obtained with morphometric characters. Morphometric discriminant functions derived from "learning" samples were able to correctly classify 82% and 84% of the "test" samples for females and males, respectively. A greater relative snout length in fish sampled from along the Laurentian Channel compared with those from the Northumberland Strait area was the primary discriminating character for both sexes. Our morphological evidence and previous tagging and distributional studies suggest that the populations from these two areas represent separate stocks; therefore, the traditional management unit for white hake in NAFO Division 4T may no longer be appropriate.
APA, Harvard, Vancouver, ISO, and other styles
23

Al-Waisy, Alaa S., Abdulrahman Alruban, Shumoos Al-Fahdawi, Rami Qahwaji, Georgios Ponirakis, Rayaz A. Malik, Mazin Abed Mohammed, and Seifedine Kadry. "CellsDeepNet: A Novel Deep Learning-Based Web Application for the Automated Morphometric Analysis of Corneal Endothelial Cells." Mathematics 10, no. 3 (January 20, 2022): 320. http://dx.doi.org/10.3390/math10030320.

Full text
Abstract:
The quantification of corneal endothelial cell (CEC) morphology using manual and semi-automatic software enables an objective assessment of corneal endothelial pathology. However, the procedure is tedious, subjective, and not widely applied in clinical practice. We have developed the CellsDeepNet system to automatically segment and analyse the CEC morphology. The CellsDeepNet system uses Contrast-Limited Adaptive Histogram Equalization (CLAHE) to improve the contrast of the CEC images and reduce the effects of non-uniform image illumination, 2D Double-Density Dual-Tree Complex Wavelet Transform (2DDD-TCWT) to reduce noise, Butterworth Bandpass filter to enhance the CEC edges, and moving average filter to adjust for brightness level. An improved version of U-Net was used to detect the boundaries of the CECs, regardless of the CEC size. CEC morphology was measured as mean cell density (MCD, cell/mm2), mean cell area (MCA, μm2), mean cell perimeter (MCP, μm), polymegathism (coefficient of CEC size variation), and pleomorphism (percentage of hexagonality coefficient). The CellsDeepNet system correlated highly significantly with the manual estimations for MCD (r = 0.94), MCA (r = 0.99), MCP (r = 0.99), polymegathism (r = 0.92), and pleomorphism (r = 0.86), with p < 0.0001 for all the extracted clinical features. The Bland–Altman plots showed excellent agreement. The percentage difference between the manual and automated estimations was superior for the CellsDeepNet system compared to the CEAS system and other state-of-the-art CEC segmentation systems on three large and challenging corneal endothelium image datasets captured using two different ophthalmic devices.
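CLAHE, as used in the pipeline above, improves on plain histogram equalization by operating on local tiles with a clipped histogram. As a simplified stand-in for that contrast-enhancement step, global histogram equalization can be sketched in NumPy (the actual system additionally applies wavelet denoising, Butterworth bandpass filtering, and brightness adjustment):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalization for an 8-bit grayscale image.

    CLAHE additionally works on local tiles and clips the histogram
    before equalizing; this is the simplified global version.
    """
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map intensities so the cumulative distribution becomes ~uniform.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min)
                           * (levels - 1)), 0, levels - 1).astype(np.uint8)
    return lut[img]

# A low-contrast gradient (values 100..139) is stretched to full range.
low_contrast = np.tile(np.arange(100, 140, dtype=np.uint8), (16, 1))
eq = hist_equalize(low_contrast)
```

In practice one would call a library implementation of CLAHE (e.g. OpenCV's `createCLAHE`) rather than the global version shown here.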
APA, Harvard, Vancouver, ISO, and other styles
24

Li, Qi, and Mary Qu Yang. "Comparison of machine learning approaches for enhancing Alzheimer’s disease classification." PeerJ 9 (February 25, 2021): e10549. http://dx.doi.org/10.7717/peerj.10549.

Full text
Abstract:
Alzheimer’s disease (AD) is a progressive neurodegenerative disorder, accounting for nearly 60% of all dementia cases. The occurrence of the disease has been increasing rapidly in recent years. Presently about 46.8 million individuals suffer from AD worldwide. The current absence of effective treatment to reverse or stop AD progression highlights the importance of disease prevention and early diagnosis. Brain structural Magnetic Resonance Imaging (MRI) has been widely used for AD detection as it can display morphometric differences and cerebral structural changes. In this study, we built three machine learning-based MRI data classifiers to predict AD and infer the brain regions that contribute to disease development and progression. We then systematically compared the three distinct classifiers, which were constructed based on Support Vector Machine (SVM), 3D Very Deep Convolutional Network (VGGNet) and 3D Deep Residual Network (ResNet), respectively. To improve the performance of the deep learning classifiers, we applied a transfer learning strategy. The weights of a pre-trained model were transferred and adopted as the initial weights of our models. Transferring the learned features significantly reduced training time and increased network efficiency. The classification accuracy for AD subjects from elderly control subjects was 90%, 95%, and 95% for the SVM, VGGNet and ResNet classifiers, respectively. Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to show discriminative regions that contributed most to the AD classification by utilizing the learned spatial information of the 3D-VGGNet and 3D-ResNet models. The resulting maps consistently highlighted several disease-associated brain regions, particularly the cerebellum, which is a relatively neglected brain region in present AD studies.
Overall, our comparisons suggested that the ResNet model provided the best classification performance as well as more accurate localization of disease-associated regions in the brain compared to the other two approaches.
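Grad-CAM, as used here to localize disease-associated regions, weights each feature map of the last convolutional layer by its spatially averaged gradient and keeps only positive evidence. A minimal NumPy sketch on synthetic 2D activations (not tied to the authors' 3D models, where the same operation runs over volumes):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM: weight each feature map by its average gradient,
    sum the weighted maps, and keep only positive evidence (ReLU).

    feature_maps, gradients: arrays of shape (channels, H, W) holding the
    last conv layer's activations and the gradients of the class score
    with respect to them.
    """
    # Global-average-pool the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))
    cam = np.tensordot(weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                         # ReLU
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1]
    return cam

# Two 2x2 feature maps: the first supports the class, the second opposes it.
fmaps = np.array([[[1.0, 0.0], [0.0, 1.0]],
                  [[0.0, 2.0], [2.0, 0.0]]])
grads = np.array([[[1.0, 1.0], [1.0, 1.0]],
                  [[-1.0, -1.0], [-1.0, -1.0]]])
cam = grad_cam(fmaps, grads)
```

The normalized map is then upsampled to the input resolution and overlaid on the image to visualize discriminative regions.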
APA, Harvard, Vancouver, ISO, and other styles
25

Meshkov, Alexandr, Anvar Khafizov, Alexey Buzmakov, Inna Bukreeva, Olga Junemann, Michela Fratini, Alessia Cedola, et al. "Deep Learning-Based Segmentation of Post-Mortem Human’s Olfactory Bulb Structures in X-ray Phase-Contrast Tomography." Tomography 8, no. 4 (July 22, 2022): 1854–68. http://dx.doi.org/10.3390/tomography8040156.

Full text
Abstract:
The human olfactory bulb (OB) has a laminar structure. The segregation of cell populations in the OB image poses a significant challenge because of indistinct boundaries of the layers. Standard 3D visualization tools usually have a low resolution and cannot provide the high accuracy required for morphometric analysis. X-ray phase contrast tomography (XPCT) offers sufficient resolution and contrast to identify single cells in large volumes of the brain. The numerous microanatomical structures detectable in the XPCT image of the OB, however, greatly complicate the manual delineation of OB neuronal cell layers. To address the challenging problem of fully automated segmentation of XPCT images of human OB morphological layers, we propose a new pipeline for tomographic data processing. Convolutional neural networks (CNNs) were used to segment XPCT images of the native unstained human OB. Virtual segmentation of the whole OB and an accurate delineation of each layer in a healthy non-demented OB are mandatory as the first step for assessing OB morphological changes in smell impairment research. In this framework, we proposed an effective tool that could help to shed light on OB layer-specific degeneration in patients with olfactory disorders.
APA, Harvard, Vancouver, ISO, and other styles
26

Dimauro, Giovanni, Vitoantonio Bevilacqua, Pio Fina, Domenico Buongiorno, Antonio Brunetti, Sergio Latrofa, Michele Cassano, and Matteo Gelardi. "Comparative Analysis of Rhino-Cytological Specimens with Image Analysis and Deep Learning Techniques." Electronics 9, no. 6 (June 8, 2020): 952. http://dx.doi.org/10.3390/electronics9060952.

Full text
Abstract:
Cytological study of the nasal mucosa (also known as rhino-cytology) represents an important diagnostic aid that allows highlighting of the presence of some types of rhinitis through the analysis of cellular features visible under a microscope. Nowadays, the automated detection and classification of cells benefit from the capacity of deep learning techniques in processing digital images of the cytological preparation. Even though the results of such automatic systems need to be validated by a specialized rhino-cytologist, this technology represents a valid support that aims at increasing the accuracy of the analysis while reducing the required time and effort. The quality of the rhino-cytological preparation, which is clearly important for the microscope observation phase, is also fundamental for the automatic classification process. In fact, the slide-preparing technique turns out to be a crucial factor among the multiple ones that may modify the morphological and chromatic characteristics of the cells. This paper aims to investigate the possible differences between direct smear (SM) and cytological centrifugation (CYT) slide-preparation techniques, in order to preserve image quality during the observation and cell classification phases in rhino-cytology. Firstly, a comparative study based on image analysis techniques has been put forward. The extraction of densitometric and morphometric features has made it possible to quantify and describe the spatial distribution of the cells in the field images observed under the microscope. Statistical analysis of the distribution of these features has been used to evaluate the degree of similarity between images acquired from SM and CYT slides. 
The results reveal an important difference in the observation process of the cells prepared with the above-mentioned techniques, with reference to cell density and spatial distribution: the analysis of CYT slides has been more difficult than that of the SM ones due to the spatial distribution of the cells, which results in a lower cell density than in the SM slides. As a marginal part of this study, a performance assessment of the computer-aided diagnosis (CAD) system called Rhino-cyt has also been carried out on both groups of image slide types.
APA, Harvard, Vancouver, ISO, and other styles
27

Morinaga, Hiroyuki, Yoh Sugawara, Jingyuan Chen, Jeevendra Martyn, and Shingo Yasuhara. "99 Feasibility of Deep Learning-based Automatic Myofiber Size Measurement for Burn-induced Muscle Wasting and Its Reversal." Journal of Burn Care & Research 43, Supplement_1 (March 23, 2022): S66. http://dx.doi.org/10.1093/jbcr/irac012.102.

Full text
Abstract:
Introduction: Patients with major burn injury (BI) often develop muscle wasting (MW) and mitochondrial dysfunction (MD), which affect their prognosis. We have recently shown that the auto/mitophagy response is defective in a BI model and can be mitigated by trehalose treatment. Though auto/mitophagy is widely accepted as the quality control (QC) system of cellular components including mitochondria, the relationship among MD, the auto/mitophagy response defect, and MW was unclear. Furthermore, evaluating MW precisely by morphometric analyses was difficult, due to the heavy workload of measuring muscle cross-sectional areas manually and analyzing the data. Thus, we have set up a pipeline of whole-section image capturing, analysis with a cutting-edge deep learning-based method, and processing via an ImageJ-based program. Using this system, we have tested the efficacy of trehalose in mitigating MW in BI-treated mice. Methods: First, the effect of an auto/mitophagy modulator on normalizing defective auto/mitophagy maturation was confirmed by in vivo microscopy of tfLC3-expressing mice with BI (30% BSA) or sham-burn (SB) control. A mitophagy inducer, CCCP, was injected to induce mitophagy, and auto/mitophagosome maturation was monitored with or without trehalose treatment. In a separate experiment, tibialis muscles were harvested at post-burn day (PBD)-7, with or without trehalose treatment (2 g/kg/day, i.p.), cryosectioned, and stained with anti-laminin antibody. The entire tissue cross-sectional microscopic images were captured, fed into Cellpose, and processed in ImageJ and Prism for automatic calculation of the cross-sectional area (CSA). Results: In vivo microscopic monitoring of auto/mitophagosome maturation revealed a BI-induced maturation defect upon CCCP treatment, which was rescued by trehalose treatment. Next, in the MW analysis experiment, cross-sectional morphometric analysis of tibialis anterior myofibers showed a typical bi-phasic pattern of CSA distribution (a large-size population and a small-size population) in the control group. BI treatment showed a significant CSA decrease in both populations, which was effectively treated by trehalose. The average CSA was as follows (1843.0, 1245.3, and 1683.9 μm² for the SB control, BI, and BI+trehalose groups, respectively), in accordance with manual counting measurements. Conclusions: Normalizing the defective auto/mitophagy response was shown to be an effective therapeutic approach to mitigate BI-induced MW. The deep learning-based size counting method is a feasible technique for systematic MW analysis. Note that in our data, trehalose does not function by increasing the basal level of autophagy, but mitigates the defective response of auto/mitophagy to auto/mitophagic stimulation by normalizing the maturation process.
APA, Harvard, Vancouver, ISO, and other styles
28

Knyaz, V. A., A. A. Maksimov, M. M. Novikov, and A. V. Urmashova. "AUTOMATIC ANTHROPOLOGICAL LANDMARKS RECOGNITION AND MEASUREMENTS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIV-2/W1-2021 (April 15, 2021): 137–42. http://dx.doi.org/10.5194/isprs-archives-xliv-2-w1-2021-137-2021.

Full text
Abstract:
Many anthropological studies require identification and measurement of craniometric and cephalometric landmarks, which provide valuable information about the shape of the head. This information is necessary for morphometric analysis, face approximation, craniofacial identification, etc. Traditional techniques use special anthropological tools to perform the required measurements, with identification of landmarks usually made by an expert anthropologist. Modern techniques of optical 3D measurement such as photogrammetry, computed tomography, and laser 3D scanning provide new possibilities for acquiring accurate, high-resolution 2D and 3D data, thus creating new conditions for anthropological data analysis. Traditional manual anthropological point measurements can be substituted by analysis of accurate textured 3D models, which allow retrieval of more information about the studied object and easy sharing of data for independent analysis. The paper presents a deep learning technique for anthropological landmark identification and accurate 3D measurements. Photogrammetric methods and their practical implementation in an automatic system for accurate digital 3D reconstruction of anthropological objects are described.
APA, Harvard, Vancouver, ISO, and other styles
29

Șerbănescu, Mircea-Sebastian, Raluca Maria Bungărdean, Carmen Georgiu, and Maria Crișan. "Nodular and Micronodular Basal Cell Carcinoma Subtypes Are Different Tumors Based on Their Morphological Architecture and Their Interaction with the Surrounding Stroma." Diagnostics 12, no. 7 (July 5, 2022): 1636. http://dx.doi.org/10.3390/diagnostics12071636.

Full text
Abstract:
Basal cell carcinoma (BCC) is the most frequent cancer of the skin and comprises low-risk and high-risk subtypes. We selected a low-risk subtype, namely, nodular (N), and a high-risk subtype, namely, micronodular (MN), with the aim to identify differences between them using a classical morphometric approach through a gray-level co-occurrence matrix and histogram analysis, as well as an approach based on deep learning semantic segmentation. From whole-slide images, pathologists selected 216 N and 201 MN BCC images. The two groups were then manually segmented and compared based on four morphological areas: center of the BCC islands (tumor, T), peripheral palisading of the BCC islands (touching tumor, TT), peritumoral cleft (PC) and surrounding stroma (S). We found that the TT pattern varied the least, while the PC pattern varied the most between the two subtypes. The combination of two distinct analysis approaches yielded fresh insights into the characterization of BCC, and thus, we were able to describe two different morphological patterns for the T component of the two subtypes.
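The gray-level co-occurrence matrix underlying the classical morphometric approach counts how often pairs of gray levels co-occur at a fixed pixel offset; texture statistics such as contrast are then read off the normalized matrix. A minimal sketch (toy image, one offset, few gray levels; real analyses combine multiple offsets and more levels):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).

    Counts how often gray level i occurs at offset (dx, dy) from gray
    level j, then normalizes the counts to joint probabilities.
    """
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

# Perfectly homogeneous image: all co-occurrences sit on the diagonal,
# so the contrast statistic sum_ij p[i,j]*(i-j)^2 is zero.
img = np.zeros((4, 4), dtype=int)
p = glcm(img)
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
```

Library implementations (e.g. scikit-image's `graycomatrix`) compute the same matrix plus derived statistics such as homogeneity, energy, and correlation.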
APA, Harvard, Vancouver, ISO, and other styles
30

Bora, Khushi, A. Y. Kerle, Vaidehi Phadke, Madiha Mujawar, and Amruta Anuse. "A Review of Brain Tumor Detection." International Journal for Research in Applied Science and Engineering Technology 10, no. 12 (December 31, 2022): 680–84. http://dx.doi.org/10.22214/ijraset.2022.47992.

Full text
Abstract:
Brain tumor is the most commonly occurring malignancy among human beings, so the study of brain tumors is important. The segmentation, detection, and extraction of the infected tumor area from magnetic resonance (MR) images are a primary concern but a tedious and time-consuming task performed by radiologists or clinical experts, and their accuracy depends only on their experience. So, the use of computer-aided technology becomes very necessary to overcome these limitations. In this paper, we propose a CNN model to identify or detect a tumor from brain magnetic resonance imaging (MRI). The mobile application is designed with a custom-designed neural network that uses a combination of deep learning and image processing techniques to identify the presence of a brain tumor in medical images. The application is capable of providing accurate results within a short amount of time and mainly uses a combination of radiomic and morphometric features to evaluate the medical images. The application is also capable of providing support to physicians in diagnosing brain tumors.
APA, Harvard, Vancouver, ISO, and other styles
31

Wöber, Wilfried, Manuel Curto, Papius Tibihika, Paul Meulenbroek, Esayas Alemayehu, Lars Mehnen, Harald Meimberg, and Peter Sykacek. "Identifying geographically differentiated features of Ethiopian Nile tilapia (Oreochromis niloticus) morphology with machine learning." PLOS ONE 16, no. 4 (April 15, 2021): e0249593. http://dx.doi.org/10.1371/journal.pone.0249593.

Full text
Abstract:
Visual characteristics are among the most important features for characterizing the phenotype of biological organisms. Color and geometric properties define population phenotype and allow assessing diversity and adaptation to environmental conditions. To analyze geometric properties, classical morphometrics relies on biologically relevant landmarks which are manually assigned to digital images. Assigning landmarks is tedious and error-prone. Predefined landmarks may in addition miss out on information which is not obvious to the human eye. The machine learning (ML) community has recently proposed new data analysis methods which, by uncovering subtle features in images, obtain excellent predictive accuracy. Scientific credibility demands, however, that results are interpretable and hence that the black-box nature of ML methods be mitigated. To overcome this black-box nature, we apply complementary methods and investigate internal representations with saliency maps to reliably identify location-specific characteristics in images of Nile tilapia populations. Analyzing fish images which were sampled from six Ethiopian lakes reveals that deep learning improves on a conventional morphometric analysis in predictive performance. A critical assessment of established saliency maps with a novel significance test reveals, however, that the improvement is aided by artifacts which have no biological interpretation. More interpretable results are obtained by a Bayesian approach which allows us to identify genuine Nile tilapia body features which differ depending on the animals' habitat. We find that automatically inferred Nile tilapia body features corroborate and expand the results of a landmark-based analysis, namely that the anterior dorsum, the fish belly, the posterior dorsal region, and the caudal fin show signs of adaptation to the fish habitat. We may thus conclude that Nile tilapia show habitat-specific morphotypes and that an ML analysis allows inferring novel biological knowledge in a reproducible manner.
APA, Harvard, Vancouver, ISO, and other styles
32

Crippa, Chiara, Elena Valbuzzi, Paolo Frattini, Giovanni B. Crosta, Margherita C. Spreafico, and Federico Agliardi. "Semi-automated regional classification of the style of activity of slow rock-slope deformations using PS InSAR and SqueeSAR velocity data." Landslides 18, no. 7 (April 6, 2021): 2445–63. http://dx.doi.org/10.1007/s10346-021-01654-0.

Full text
Abstract:
Large slow rock-slope deformations, including deep-seated gravitational slope deformations and large landslides, are widespread in alpine environments. They develop over thousands of years by progressive failure, resulting in slow movements that impact infrastructures and can eventually evolve into catastrophic rockslides. A robust characterization of their style of activity is thus required from a risk management perspective. We combine an original inventory of slow rock-slope deformations with different PS-InSAR and SqueeSAR datasets to develop a novel, semi-automated approach to characterize and classify 208 slow rock-slope deformations in Lombardia (Italian Central Alps) based on their displacement rate, kinematics, heterogeneity and morphometric expression. Through a peak analysis of displacement rate distributions, we characterize the segmentation of mapped landslides and highlight the occurrence of nested sectors with differential activity and displacement rates. Combining 2D decomposition of InSAR velocity vectors and machine learning classification, we develop an automatic approach to characterize the kinematics of each landslide. Then, we sequentially combine principal component and K-medoids cluster analyses to identify groups of slow rock-slope deformations with consistent styles of activity. Our methodology is readily applicable to different landslide datasets and provides an objective and cost-effective support to land planning and the prioritization of local-scale studies aimed at granting safety and infrastructure integrity.
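The peak analysis of displacement-rate distributions can be illustrated with a simple local-maximum search over a histogram; the bin count and the synthetic bimodal sample below are assumptions for illustration, not the authors' data or exact method:

```python
import numpy as np

def histogram_peaks(values, bins=10):
    """Find local maxima in a histogram of displacement rates.

    Returns the centers of interior bins that are strictly higher than
    both neighbours, a simple proxy for nested sectors with distinct
    displacement rates.
    """
    counts, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return [centers[i] for i in range(1, bins - 1)
            if counts[i] > counts[i - 1] and counts[i] > counts[i + 1]]

# Synthetic bimodal sample: a slow sector around 2.5 mm/yr and a
# faster sector around 7.5 mm/yr, plus two extreme values.
rates = np.concatenate([[0.0],
                        2.5 + np.linspace(-0.3, 0.3, 50),
                        7.5 + np.linspace(-0.3, 0.3, 30),
                        [10.0]])
peaks = histogram_peaks(rates, bins=10)
```

Two peaks indicate two populations of displacement rates, hinting at nested sectors with differential activity.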
APA, Harvard, Vancouver, ISO, and other styles
33

Arsalan, Muhammad, Adnan Haider, Se Woon Cho, Yu Hwan Kim, and Kang Ryoung Park. "Human Blastocyst Components Detection Using Multiscale Aggregation Semantic Segmentation Network for Embryonic Analysis." Biomedicines 10, no. 7 (July 15, 2022): 1717. http://dx.doi.org/10.3390/biomedicines10071717.

Full text
Abstract:
Infertility is one of the most important health concerns worldwide. It is characterized by the failure to achieve pregnancy after a period of regular unprotected sexual intercourse. In vitro fertilization (IVF) is an assisted reproduction technique that efficiently addresses infertility. IVF replaces the actual mode of reproduction through a manual procedure wherein embryos are cultivated in a controlled laboratory environment until they reach the blastocyst stage. The standard IVF procedure includes the transfer of one or two blastocysts from several blastocysts that are grown in a controlled environment. The morphometric properties of blastocysts with their compartments such as trophectoderm (TE), zona pellucida (ZP), inner cell mass (ICM), and blastocoel (BL), are analyzed through manual microscopic analysis to predict viability. Deep learning has been extensively used for medical diagnosis and analysis and can be a powerful tool to automate the morphological analysis of human blastocysts. However, the existing approaches are inaccurate and require extensive preprocessing and expensive architectures. Thus, to cope with the automatic detection of blastocyst components, this study proposed a novel multiscale aggregation semantic segmentation network (MASS-Net) that combined four different scales via depth-wise concatenation. The extensive use of depthwise separable convolutions resulted in a decrease in the number of trainable parameters. Further, the innovative multiscale design provided rich spatial information of different resolutions, thereby achieving good segmentation performance without a very deep architecture. MASS-Net utilized 2.06 million trainable parameters and accurately detects TE, ZP, ICM, and BL without using preprocessing stages. Moreover, it can provide a separate binary mask for each blastocyst component simultaneously, and these masks provide the structure of each component for embryonic analysis. Further, the proposed MASS-Net was evaluated using publicly available human blastocyst (microscopic) imaging data. The experimental results revealed that it can effectively detect TE, ZP, ICM, and BL with mean Jaccard indices of 79.08%, 84.69%, 85.88%, and 89.28%, respectively, for embryological analysis, which was higher than those of the state-of-the-art methods.
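The reported per-component scores are mean Jaccard indices (intersection over union). For binary masks the metric is straightforward to compute:

```python
import numpy as np

def jaccard(pred, target):
    """Jaccard index (IoU) of two binary masks: |A ∩ B| / |A ∪ B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:   # both masks empty: treat as perfect agreement
        return 1.0
    return np.logical_and(pred, target).sum() / union

# pred marks two pixels, target one; they share one pixel,
# so intersection = 1 and union = 2.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
iou = jaccard(pred, target)
```

The mean Jaccard index over components (e.g. TE, ZP, ICM, BL) is simply the average of the per-class scores across the test set.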
APA, Harvard, Vancouver, ISO, and other styles
34

Rebsamen, Michael, Yannick Suter, Roland Wiest, Mauricio Reyes, and Christian Rummel. "Brain Morphometry Estimation: From Hours to Seconds Using Deep Learning." Frontiers in Neurology 11 (April 8, 2020). http://dx.doi.org/10.3389/fneur.2020.00244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Gibbs, Jonathon A., Lorna Mcausland, Carlos A. Robles-Zazueta, Erik H. Murchie, and Alexandra J. Burgess. "A Deep Learning Method for Fully Automatic Stomatal Morphometry and Maximal Conductance Estimation." Frontiers in Plant Science 12 (December 2, 2021). http://dx.doi.org/10.3389/fpls.2021.780180.

Full text
Abstract:
Stomata are integral to plant performance, enabling the exchange of gases between the atmosphere and the plant. The anatomy of stomata influences conductance properties, with the maximal conductance rate, gsmax, calculated from stomatal density and size. However, current measurements of stomatal dimensions are performed manually, which is time-consuming and error-prone. Here, we show how automated morphometry from leaf impressions can predict a functional property: the anatomical gsmax. A deep learning network was derived to preserve stomatal morphometry via semantic segmentation. This forms part of an automated pipeline to measure stomatal traits for the estimation of anatomical gsmax. The proposed pipeline achieves 100% accuracy for species distinction (wheat vs. poplar) and for the detection of stomata in both datasets. The automated deep learning-based method gave estimates of gsmax within 3.8% and 1.9% of values manually calculated by an expert for the wheat and poplar datasets, respectively. Semantic segmentation provides a rapid and repeatable method for estimating anatomical gsmax from microscopic images of leaf impressions. This method is a step toward reducing the bottleneck associated with plant phenotyping and provides a rapid means of assessing gas fluxes in plants based on stomatal morphometry.
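The anatomical gsmax mentioned in this abstract is commonly derived from stomatal density and pore geometry using the Franks–Beerling diffusion equation; the sketch below assumes that formulation (not necessarily the paper's exact pipeline), with standard physical constants and hypothetical wheat-like input values:

```python
from math import pi, sqrt

def anatomical_gsmax(density, amax, depth,
                     d=2.49e-5,   # diffusivity of water vapour in air (m^2 s^-1, ~25 °C)
                     v=2.24e-2):  # molar volume of air (m^3 mol^-1)
    """Anatomical maximum stomatal conductance (mol m^-2 s^-1).

    density : stomatal density (stomata per m^2)
    amax    : maximum pore area (m^2)
    depth   : stomatal pore depth (m)
    """
    # Franks & Beerling formulation: diffusion through pores of depth
    # `depth` plus an end-correction term based on pore radius.
    end_correction = (pi / 2) * sqrt(amax / pi)
    return (d * density * amax) / (v * (depth + end_correction))

# Hypothetical wheat-like values, for illustration only
density = 60e6   # 60 stomata per mm^2, converted to m^-2
amax = 80e-12    # 80 µm^2 maximum pore area, in m^2
depth = 10e-6    # 10 µm pore depth, in m
print(f"{anatomical_gsmax(density, amax, depth):.2f} mol m^-2 s^-1")
```

With automated morphometry, `density` and `amax` come directly from the segmented stomata, which is what makes the end-to-end estimation rapid and repeatable.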
APA, Harvard, Vancouver, ISO, and other styles
36

Duan, Caohui, He Deng, Sa Xiao, Junshuai Xie, Haidong Li, Xiuchao Zhao, Dongshan Han, et al. "Accelerate gas diffusion-weighted MRI for lung morphometry with deep learning." European Radiology, July 13, 2021. http://dx.doi.org/10.1007/s00330-021-08126-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Gao, Kai, Zhipeng Fan, Jianpo Su, Ling-Li Zeng, Hui Shen, Jubo Zhu, and Dewen Hu. "Deep Transfer Learning for Cerebral Cortex Using Area-Preserving Geometry Mapping." Cerebral Cortex, November 16, 2021. http://dx.doi.org/10.1093/cercor/bhab394.

Full text
Abstract:
Limited sample size hinders the application of deep learning in brain image analysis, and transfer learning is a possible solution. However, most pretrained models are 2D based and cannot be applied directly to 3D brain images. In this study, we propose a novel framework to apply 2D pretrained models to 3D brain images by projecting surface-based cortical morphometry into planar images using computational geometry mapping. Firstly, 3D cortical meshes are reconstructed from magnetic resonance imaging (MRI) using FreeSurfer and projected into 2D planar meshes with topological preservation based on area-preserving geometry mapping. Then, 2D deep models pretrained on ImageNet are adopted and fine-tuned for cortical image classification on morphometric shape metrics. We apply the framework to sex classification on the Human Connectome Project dataset and autism spectrum disorder (ASD) classification on the Autism Brain Imaging Data Exchange dataset. Moreover, a 2-stage transfer learning strategy is suggested to boost the ASD classification performance by using the sex classification as an intermediate task. Our framework brings significant improvement in sex classification and ASD classification with transfer learning. In summary, the proposed framework builds a bridge between 3D cortical data and 2D models, making 2D pretrained models available for brain image analysis in cognitive and psychiatric neuroscience.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Mingxing, Ji Zhang, Yibo Wang, Jie Wang, Alecia M. Achimovich, Scott T. Acton, and Andreas Gahlmann. "Non-invasive single-cell morphometry in living bacterial biofilms." Nature Communications 11, no. 1 (December 2020). http://dx.doi.org/10.1038/s41467-020-19866-8.

Full text
Abstract:
Fluorescence microscopy enables spatial and temporal measurements of live cells and cellular communities. However, this potential has not yet been fully realized for investigations of individual cell behaviors and phenotypic changes in dense, three-dimensional (3D) bacterial biofilms. Accurate cell detection and cellular shape measurement in densely packed biofilms are challenging because of the limited resolution and low signal to background ratios (SBRs) in fluorescence microscopy images. In this work, we present Bacterial Cell Morphometry 3D (BCM3D), an image analysis workflow that combines deep learning with mathematical image analysis to accurately segment and classify single bacterial cells in 3D fluorescence images. In BCM3D, deep convolutional neural networks (CNNs) are trained using simulated biofilm images with experimentally realistic SBRs, cell densities, labeling methods, and cell shapes. We systematically evaluate the segmentation accuracy of BCM3D using both simulated and experimental images. Compared to state-of-the-art bacterial cell segmentation approaches, BCM3D consistently achieves higher segmentation accuracy and further enables automated morphometric cell classifications in multi-population biofilms.
APA, Harvard, Vancouver, ISO, and other styles
39

Lei, Du, Kun Qin, Wenbin Li, Walter H. L. Pinaya, Maxwell J. Tallman, L. Rodrigo Patino, Jeffrey R. Strawn, et al. "Brain morphometric features predict medication response in youth with bipolar disorder: a prospective randomized clinical trial." Psychological Medicine, April 8, 2022, 1–11. http://dx.doi.org/10.1017/s0033291722000757.

Full text
Abstract:
Background Identification of treatment-specific predictors of drug therapies for bipolar disorder (BD) is important because only about half of individuals respond to any specific medication. However, medication response in pediatric BD is variable and not well predicted by clinical characteristics. Methods A total of 121 youth with early course BD (acute manic/mixed episode) were prospectively recruited and randomized to 6 weeks of double-blind treatment with quetiapine (n = 71) or lithium (n = 50). Participants completed structural magnetic resonance imaging (MRI) at baseline before treatment and 1 week after treatment initiation, and brain morphometric features were extracted for each individual from the MRI scans. Positive antimanic treatment response at week 6 was defined as an over 50% reduction of Young Mania Rating Scale scores from baseline. A two-stage deep learning prediction model was established to distinguish responders from non-responders based on different feature sets. Results Pre-treatment morphometry and morphometric changes occurring during the first week can each independently predict treatment outcome of quetiapine and lithium with balanced accuracy over 75% (all p < 0.05). Combining brain morphometry at baseline and week 1 allows prediction with the highest balanced accuracy (quetiapine: 83.2%; lithium: 83.5%). Predictions in the quetiapine and lithium groups were driven by different morphometric patterns. Conclusions These findings demonstrate that pre-treatment morphometric measures and acute brain morphometric changes can serve as medication response predictors in pediatric BD. Brain morphometric features may provide promising biomarkers for developing biologically informed treatment outcome prediction and patient stratification tools for BD treatment development.
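The balanced accuracy reported in this entry is the mean of sensitivity and specificity, which avoids the inflation plain accuracy suffers when responder and non-responder groups are unequal in size. A minimal sketch, using hypothetical labels rather than the study's data:

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy: mean of sensitivity (recall on class 1)
    and specificity (recall on class 0)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

# Hypothetical labels: 1 = responder, 0 = non-responder
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0]
print(balanced_accuracy(y_true, y_pred))  # → 0.625
```

Note that plain accuracy on this toy example is 4/6 ≈ 0.667, higher than the balanced figure because the majority (responder) class dominates.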
APA, Harvard, Vancouver, ISO, and other styles
40

Alukaev, Danis, Semen Kiselev, Tamerlan Mustafaev, Ahatov Ainur, Bulat Ibragimov, and Tomaž Vrtovec. "A deep learning framework for vertebral morphometry and Cobb angle measurement with external validation." European Spine Journal, May 21, 2022. http://dx.doi.org/10.1007/s00586-022-07245-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Chuang, Wen-Yu, Wei-Hsiang Yu, Yen-Chen Lee, Qun-Yi Zhang, Hung Chang, Lee-Yung Shih, Chi-Ju Yeh, et al. "Deep Learning-Based Nuclear Morphometry Reveals an Independent Prognostic Factor in Mantle Cell Lymphoma." American Journal of Pathology, September 2022. http://dx.doi.org/10.1016/j.ajpath.2022.08.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Huang, Huaidong, Shiqiang Zheng, Zhongxian Yang, Yi Wu, Yan Li, Jinming Qiu, Yan Cheng, et al. "Voxel-based morphometry and a deep learning model for the diagnosis of early Alzheimer’s disease based on cerebral gray matter changes." Cerebral Cortex, March 17, 2022. http://dx.doi.org/10.1093/cercor/bhac099.

Full text
Abstract:
This study aimed to analyse cerebral grey matter changes in mild cognitive impairment (MCI) using voxel-based morphometry and to diagnose early Alzheimer's disease using deep learning methods based on convolutional neural networks (CNNs) evaluating these changes. Participants (111 MCI, 73 normal cognition) underwent 3-T structural magnetic resonance imaging. The obtained images were assessed using voxel-based morphometry, including extraction of cerebral grey matter, analyses of statistical differences, and correlation analyses between cerebral grey matter and clinical cognitive scores in MCI. The CNN-based deep learning method was used to extract features of cerebral grey matter images. Compared to subjects with normal cognition, participants with MCI had grey matter atrophy mainly in the entorhinal cortex, frontal cortex, and bilateral frontotemporal lobes (p < 0.0001). This atrophy was significantly correlated with the decline in cognitive scores (p < 0.01). The accuracy, sensitivity, and specificity of the CNN model for identifying participants with MCI were 80.9%, 88.9%, and 75%, respectively. The area under the curve of the model was 0.891. These findings demonstrate that research based on brain morphology can provide an effective way for the clinical, non-invasive, objective evaluation and identification of early Alzheimer's disease.
APA, Harvard, Vancouver, ISO, and other styles
43

Kwak, Kichang, Marc Niethammer, Kelly S. Giovanello, Martin Styner, and Eran Dayan. "Differential Role for Hippocampal Subfields in Alzheimer’s Disease Progression Revealed with Deep Learning." Cerebral Cortex, July 29, 2021. http://dx.doi.org/10.1093/cercor/bhab223.

Full text
Abstract:
Mild cognitive impairment (MCI) is often considered the precursor of Alzheimer’s disease. However, MCI is associated with substantially variable progression rates, which are not well understood. Attempts to identify the mechanisms that underlie MCI progression have often focused on the hippocampus but have mostly overlooked its intricate structure and subdivisions. Here, we utilized deep learning to delineate the contribution of hippocampal subfields to MCI progression. We propose a dense convolutional neural network architecture that differentiates stable and progressive MCI based on hippocampal morphometry with an accuracy of 75.85%. A novel implementation of occlusion analysis revealed marked differences in the contribution of hippocampal subfields to the performance of the model, with presubiculum, CA1, subiculum, and molecular layer showing the most central role. Moreover, the analysis reveals that 10.5% of the volume of the hippocampus was redundant in the differentiation between stable and progressive MCI.
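Occlusion analysis of the kind described here attributes a model's decision to input regions by masking each region in turn and measuring how much the prediction score drops. The sketch below is a generic illustration of that idea, not the paper's implementation; the toy "model" and patch size are assumptions for demonstration:

```python
import numpy as np

def occlusion_map(model, volume, patch=8, fill=0.0):
    """Slide a patch over a 3D input, zero it out, and record the drop
    in the model's scalar score for each position.
    `model` is any callable mapping a 3D array to a scalar."""
    base = model(volume)
    D, H, W = volume.shape
    heat = np.zeros((D // patch, H // patch, W // patch))
    for i in range(0, D - patch + 1, patch):
        for j in range(0, H - patch + 1, patch):
            for k in range(0, W - patch + 1, patch):
                occluded = volume.copy()
                occluded[i:i+patch, j:j+patch, k:k+patch] = fill
                heat[i // patch, j // patch, k // patch] = base - model(occluded)
    return heat  # large values mark regions the model relies on

# Toy stand-in for a CNN: the "model" only looks at one corner octant
toy = lambda v: float(v[:8, :8, :8].sum())
vol = np.ones((16, 16, 16))
hm = occlusion_map(toy, vol, patch=8)
print(hm[0, 0, 0], hm[1, 1, 1])  # corner occlusion drops the score; the far octant does not
```

Regions whose occlusion causes little or no score drop are, in the abstract's terms, "redundant" for the differentiation task.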
APA, Harvard, Vancouver, ISO, and other styles
44

Siegerist, Florian, Eleonora Hay, Juan Saydou Dikou, Marion Pollheimer, Anja Büscher, Jun Oh, Silvia Ribback, et al. "ScoMorphoFISH: A deep learning enabled toolbox for single‐cell single‐mRNA quantification and correlative (ultra‐)morphometry." Journal of Cellular and Molecular Medicine, May 20, 2022. http://dx.doi.org/10.1111/jcmm.17392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Intasuwan, Pittayarat, Patison Palee, Apichat Sinthubua, and Pasuk Mahakkanukrauh. "Comparison of sex determination using three methods applied to the greater sciatic notch of os coxae in a Thai population: Dry bone morphology, 2-dimensional photograph morphometry, and deep learning artificial neural network." Medicine, Science and the Law, February 10, 2022, 002580242210790. http://dx.doi.org/10.1177/00258024221079092.

Full text
Abstract:
The os coxa is commonly used for sex and age estimation with a high degree of accuracy. Our study aimed to compare the accuracy among three methods, including a deep learning approach intended to increase the accuracy of sex prediction. A total sample of 250 left os coxae from a Thai population was divided into a ‘training’ set of 200 samples and a ‘test’ set of 50 samples. The age of the samples ranged from 26 to 94 years. Three methods of sex determination were assessed in this experiment: a dry bone method, an image-based method, and a deep learning method. The intra- and inter-observer reliabilities were also assessed for the dry bone and image-based methods. Our results showed that the accuracies were 80.65%, 90.3%, and 91.95% for the dry bone, image-based, and deep learning methods, respectively. The greater sciatic notch shape was wide and symmetrical in females and narrow and asymmetrical in males. The intra- and inter-observer agreements were at a moderate to almost-perfect level (Kappa = 0.67−0.93, ICC = 0.74−0.94). Conclusion: The image-based and deep learning methods were efficient for sex determination. However, the deep learning technique performed best among the three methods owing to its high accuracy and rapid analysis. In this study, deep learning technology was found to be a viable option for remote consultations regarding sex determination in the Thai population.
APA, Harvard, Vancouver, ISO, and other styles
46

DeKraker, Jordan, Roy AM Haast, Mohamed D. Yousif, Bradley Karat, Jonathan C. Lau, Stefan Köhler, and Ali R. Khan. "Automated hippocampal unfolding for morphometry and subfield segmentation with HippUnfold." eLife 11 (December 15, 2022). http://dx.doi.org/10.7554/elife.77945.

Full text
Abstract:
Like neocortical structures, the archicortical hippocampus differs in its folding patterns across individuals. Here, we present an automated and robust BIDS-App, HippUnfold, for defining and indexing individual-specific hippocampal folding in MRI, analogous to popular tools used in neocortical reconstruction. Such tailoring is critical for inter-individual alignment, with topology serving as the basis for homology. This topological framework enables qualitatively new analyses of morphological and laminar structure in the hippocampus or its subfields. It is critical for refining current neuroimaging analyses at a meso- as well as micro-scale. HippUnfold uses state-of-the-art deep learning combined with previously developed topological constraints to generate uniquely folded surfaces to fit a given subject's hippocampal conformation. It is designed to work with commonly employed sub-millimetric MRI acquisitions, with possible extension to microscopic resolution. In this paper we describe the power of HippUnfold in feature extraction, and highlight its unique value compared to several extant hippocampal subfield analysis methods.
APA, Harvard, Vancouver, ISO, and other styles
47

Daeschler, Simeon Christian, Marie-Hélène Bourget, Dorsa Derakhshan, Vasudev Sharma, Stoyan Ivaylov Asenov, Tessa Gordon, Julien Cohen-Adad, and Gregory Howard Borschel. "Rapid, automated nerve histomorphometry through open-source artificial intelligence." Scientific Reports 12, no. 1 (April 8, 2022). http://dx.doi.org/10.1038/s41598-022-10066-6.

Full text
Abstract:
We aimed to develop and validate a deep learning model for automated segmentation and histomorphometry of myelinated peripheral nerve fibers from light microscopic images. A convolutional neural network integrated in the AxonDeepSeg framework was trained for automated axon/myelin segmentation using a dataset of light-microscopic cross-sectional images of osmium tetroxide-stained rat nerves including various axonal regeneration stages. In a second dataset, accuracy of automated segmentation was determined against manual axon/myelin labels. Automated morphometry results, including axon diameter, myelin sheath thickness and g-ratio were compared against manual straight-line measurements and morphometrics extracted from manual labels with AxonDeepSeg as a reference standard. The neural network achieved high pixel-wise accuracy for nerve fiber segmentations with a mean (± standard deviation) ground truth overlap of 0.93 (± 0.03) for axons and 0.99 (± 0.01) for myelin sheaths, respectively. Nerve fibers were identified with a sensitivity of 0.99 and a precision of 0.97. For each nerve fiber, the myelin thickness, axon diameter, g-ratio, solidity, eccentricity, orientation, and individual x- and y-coordinates were determined automatically. Compared to manual morphometry, automated histomorphometry showed superior agreement with the reference standard while reducing the analysis time to below 2.5% of the time needed for manual morphometry. This open-source convolutional neural network provides rapid and accurate morphometry of entire peripheral nerve cross-sections. Given its easy applicability, it could contribute to significant time savings in biomedical research while extracting unprecedented amounts of objective morphologic information from large image datasets.
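The g-ratio reported in this entry is the ratio of inner (axon) diameter to outer (axon plus myelin) diameter. From segmented cross-sectional areas, one common convention is to use equivalent-circle diameters, as in the sketch below; the example areas are hypothetical and this is an illustration of the metric, not AxonDeepSeg's internal code:

```python
from math import pi, sqrt

def equivalent_diameter(area):
    """Diameter of a circle with the given area."""
    return 2 * sqrt(area / pi)

def g_ratio(axon_area, myelin_area):
    """g-ratio: inner (axon) diameter over outer (fiber) diameter,
    with diameters taken as equivalent-circle diameters of the
    segmented cross-sectional areas."""
    axon_d = equivalent_diameter(axon_area)
    fiber_d = equivalent_diameter(axon_area + myelin_area)
    return axon_d / fiber_d

# Hypothetical segmented areas for one fiber, in µm^2
axon_area, myelin_area = 12.0, 6.0
print(round(g_ratio(axon_area, myelin_area), 3))  # → 0.816
# Myelin sheath thickness follows from the same two diameters:
thickness = (equivalent_diameter(axon_area + myelin_area)
             - equivalent_diameter(axon_area)) / 2
```

Repeating this per fiber over a whole cross-section yields the per-fiber morphometrics (diameter, g-ratio, thickness) that the abstract describes.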
APA, Harvard, Vancouver, ISO, and other styles
48

Cui, Yue, Chao Li, Bing Liu, Jing Sui, Ming Song, Jun Chen, Yunchun Chen, et al. "Consistent brain structural abnormalities and multisite individualised classification of schizophrenia using deep neural networks." British Journal of Psychiatry, February 11, 2022, 1–8. http://dx.doi.org/10.1192/bjp.2022.22.

Full text
Abstract:
Background Previous analyses of grey and white matter volumes have reported that schizophrenia is associated with structural changes. Deep learning is a data-driven approach that can capture highly compact hierarchical non-linear relationships among high-dimensional features, and therefore can facilitate the development of clinical tools for making a more accurate and earlier diagnosis of schizophrenia. Aims To identify consistent grey matter abnormalities in patients with schizophrenia, 662 people with schizophrenia and 613 healthy controls were recruited from eight centres across China, and the data from these independent sites were used to validate deep-learning classifiers. Method We used a prospective image-based meta-analysis of whole-brain voxel-based morphometry. We also automatically differentiated patients with schizophrenia from healthy controls using combined grey matter, white matter and cerebrospinal fluid volumetric features, incorporated a deep neural network approach on an individual basis, and tested the generalisability of the classification models using independent validation sites. Results We found that statistically reliable schizophrenia-related grey matter abnormalities primarily occurred in regions that included the superior temporal gyrus extending to the temporal pole, insular cortex, orbital and middle frontal cortices, middle cingulum and thalamus. Evaluated using leave-one-site-out cross-validation, the classification performance achieved across the eight independent research sites was: accuracy, 77.19–85.74%; sensitivity, 75.31–89.29%; and area under the receiver operating characteristic curve, 0.797–0.909.
Conclusions These results suggest that, by using deep-learning techniques, multidimensional neuroanatomical changes in schizophrenia are capable of robustly discriminating patients with schizophrenia from healthy controls, findings which could facilitate clinical diagnosis and treatment in schizophrenia.
APA, Harvard, Vancouver, ISO, and other styles
49

Ali, Mohammed A. S., Kaspar Hollo, Tõnis Laasfeld, Jane Torp, Maris-Johanna Tahk, Ago Rinken, Kaupo Palo, Leopold Parts, and Dmytro Fishman. "ArtSeg—Artifact segmentation and removal in brightfield cell microscopy images without manual pixel-level annotations." Scientific Reports 12, no. 1 (July 6, 2022). http://dx.doi.org/10.1038/s41598-022-14703-y.

Full text
Abstract:
Brightfield cell microscopy is a foundational tool in life sciences. The acquired images are prone to contain visual artifacts that hinder downstream analysis, and automatically removing them is therefore of great practical interest. Deep convolutional neural networks are state-of-the-art for image segmentation, but require pixel-level annotations, which are time-consuming to produce. Here, we propose ScoreCAM-U-Net, a pipeline to segment artifactual regions in brightfield images with limited user input. The model is trained using only image-level labels, so the process is faster by orders of magnitude compared to pixel-level annotation, but without substantially sacrificing the segmentation performance. We confirm that artifacts indeed exist with different shapes and sizes in three different brightfield microscopy image datasets, and distort downstream analyses such as nuclei segmentation, morphometry and fluorescence intensity quantification. We then demonstrate that our automated artifact removal ameliorates this problem. Such rapid cleaning of acquired images using the power of deep learning models is likely to become a standard step for all large scale microscopy experiments.
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Ji, Yibo Wang, Eric D. Donarski, Tanjin T. Toma, Madeline T. Miles, Scott T. Acton, and Andreas Gahlmann. "BCM3D 2.0: accurate segmentation of single bacterial cells in dense biofilms using computationally generated intermediate image representations." npj Biofilms and Microbiomes 8, no. 1 (December 18, 2022). http://dx.doi.org/10.1038/s41522-022-00362-4.

Full text
Abstract:
Accurate detection and segmentation of single cells in three-dimensional (3D) fluorescence time-lapse images is essential for observing individual cell behaviors in large bacterial communities called biofilms. Recent progress in machine-learning-based image analysis is providing this capability with ever-increasing accuracy. Leveraging the capabilities of deep convolutional neural networks (CNNs), we recently developed bacterial cell morphometry in 3D (BCM3D), an integrated image analysis pipeline that combines deep learning with conventional image analysis to detect and segment single biofilm-dwelling cells in 3D fluorescence images. While the first release of BCM3D (BCM3D 1.0) achieved state-of-the-art 3D bacterial cell segmentation accuracies, low signal-to-background ratios (SBRs) and images of very dense biofilms remained challenging. Here, we present BCM3D 2.0 to address this challenge. BCM3D 2.0 is entirely complementary to the approach utilized in BCM3D 1.0. Instead of training CNNs to perform voxel classification, we trained CNNs to translate 3D fluorescence images into intermediate 3D image representations that are, when combined appropriately, more amenable to conventional mathematical image processing than a single experimental image. Using this approach, improved segmentation results are obtained even for very low SBRs and/or high cell density biofilm images. The improved cell segmentation accuracies in turn enable improved accuracies of tracking individual cells through 3D space and time. This capability opens the door to investigating time-dependent phenomena in bacterial biofilms at the cellular level.
APA, Harvard, Vancouver, ISO, and other styles
