Journal articles on the topic 'Automated Segmentation Method'

Consult the top 50 journal articles for your research on the topic 'Automated Segmentation Method.'


1

Harkey, Matthew S., Nicholas Michel, Christopher Kuenze, Ryan Fajardo, Matt Salzler, Jeffrey B. Driban, and Ilker Hacihaliloglu. "Validating a Semi-Automated Technique for Segmenting Femoral Articular Cartilage on Ultrasound Images." CARTILAGE 13, no. 2 (April 2022): 194760352210930. http://dx.doi.org/10.1177/19476035221093069.

Abstract:
Objective To validate a semi-automated technique for segmenting ultrasound-assessed femoral cartilage without compromising segmentation accuracy relative to a traditional manual segmentation technique in participants with an anterior cruciate ligament (ACL) injury. Design We recruited 27 participants with a primary unilateral ACL injury at a pre-operative clinic visit. One investigator performed a transverse suprapatellar ultrasound scan with the participant’s ACL-injured knee in maximum flexion. Three femoral cartilage ultrasound images were recorded. A single expert reader manually segmented the femoral cartilage cross-sectional area in each image. In addition, we created a semi-automatic program to segment the cartilage using a random walker-based method. We quantified the average cartilage thickness and echo-intensity for the manual and semi-automated segmentations. Intraclass correlation coefficients (ICC2,k) and Bland-Altman plots were used to validate the semi-automated technique against the manual segmentation for assessing average cartilage thickness and echo-intensity. A Dice correlation coefficient was used to quantify the overlap between the segmentations created with the semi-automated and manual techniques. Results For average cartilage thickness, there was excellent reliability (ICC2,k = 0.99) and a small mean difference (+0.8%) between the manual and semi-automated segmentations. For average echo-intensity, there was excellent reliability (ICC2,k = 0.97) and a small mean difference (−2.5%) between the manual and semi-automated segmentations. The average Dice correlation coefficient between the manual and semi-automated segmentations was 0.90, indicating high overlap between techniques. Conclusions Our novel semi-automated segmentation technique is a valid method that requires less technical expertise and time than manual segmentation in patients after ACL injury.
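A minimal sketch of the two building blocks named above, a seed-based random walker and a Dice overlap score, using scikit-image and NumPy. It is illustrative only, not the authors' program: the seed placement, the beta value, and the array names are assumptions.

```python
import numpy as np
from skimage.segmentation import random_walker

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Overlap between two binary masks (e.g. manual vs. semi-automated)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def segment_cartilage(image: np.ndarray, fg_seeds: np.ndarray, bg_seeds: np.ndarray) -> np.ndarray:
    """Random-walker segmentation from sparse seeds.

    fg_seeds / bg_seeds are boolean arrays marking a few pixels inside and
    outside the cartilage band; the walker assigns every unlabeled pixel to
    the seed class it is most strongly connected to in the image.
    """
    markers = np.zeros(image.shape, dtype=np.uint8)
    markers[bg_seeds] = 1  # label 1 = background
    markers[fg_seeds] = 2  # label 2 = cartilage
    labels = random_walker(image.astype(float), markers, beta=130, mode="bf")
    return labels == 2
```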
2

Wang, Yang, Yihao Chen, Hao Yuan, and Cheng Wu. "An automated learning method of semantic segmentation for train autonomous driving environment understanding." International Journal of Advances in Intelligent Informatics 10, no. 1 (February 29, 2024): 148. http://dx.doi.org/10.26555/ijain.v10i1.1521.

Abstract:
One of the major reasons for the explosion of autonomous driving in recent years is the great development of computer vision. As one of the most fundamental and challenging problems in autonomous driving, environment understanding has been widely studied. It directly determines whether the entire in-vehicle system can effectively identify surrounding objects of vehicles and make correct path planning. Semantic segmentation is the most important means of environment understanding among the many image recognition algorithms used in autonomous driving. However, the success of semantic segmentation models is highly dependent on human expertise in data preparation and hyperparameter optimization, and the tedious process of training is repeated over and over for each new scene. Automated machine learning (AutoML) is a research area for this problem that aims to automate the development of end-to-end ML models. In this paper, we propose an automatic learning method for semantic segmentation based on reinforcement learning (RL), which can realize automatic selection of training data and guide automatic training of semantic segmentation. The results show that our scheme converges faster and has higher accuracy than researchers manually training semantic segmentation models, while requiring no human involvement.
3

Kemnitz, Jana, Christian F. Baumgartner, Felix Eckstein, Akshay Chaudhari, Anja Ruhdorfer, Wolfgang Wirth, Sebastian K. Eder, and Ender Konukoglu. "Clinical evaluation of fully automated thigh muscle and adipose tissue segmentation using a U-Net deep learning architecture in context of osteoarthritic knee pain." Magnetic Resonance Materials in Physics, Biology and Medicine 33, no. 4 (December 23, 2019): 483–93. http://dx.doi.org/10.1007/s10334-019-00816-5.

Abstract:
Objective Segmentation of thigh muscle and adipose tissue is important for the understanding of musculoskeletal diseases such as osteoarthritis. Therefore, the purpose of this work is (a) to evaluate whether a fully automated approach provides accurate segmentation of muscle and adipose tissue cross-sectional areas (CSA) compared with manual segmentation and (b) to evaluate the validity of this method based on a previous clinical study. Materials and methods The segmentation method is based on a U-Net architecture trained on 250 manually segmented thighs from the Osteoarthritis Initiative (OAI). The clinical evaluation is performed on a hold-out test set of bilateral thighs from 48 subjects with unilateral knee pain. Results The segmentation time of the method is < 1 s, and the method demonstrated high agreement with the manual method (Dice similarity coefficient: 0.96 ± 0.01). In the clinical study, the automated method showed, similar to manual segmentation (− 5.7 ± 7.9%, p < 0.001, effect size: 0.69), that painful knees display significantly lower quadriceps CSAs than contralateral painless knees (− 5.6 ± 7.6%, p < 0.001, effect size: 0.73). Discussion Automated segmentation of thigh muscle and adipose tissues has high agreement with manual segmentations and can replicate the effect size seen in a clinical study on osteoarthritic pain.
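The clinical comparison reported above (painful versus contralateral painless knee) reduces to a paired test and an effect size on the per-subject CSA differences. A sketch with SciPy under assumed inputs; the array names are illustrative and the effect size shown is Cohen's d on the paired differences.

```python
import numpy as np
from scipy import stats

def paired_csa_comparison(csa_painful: np.ndarray, csa_painless: np.ndarray):
    """Within-subject comparison of quadriceps cross-sectional areas.

    Returns the mean percent side difference, the paired t-test p-value,
    and Cohen's d computed on the paired differences.
    """
    diff = csa_painful - csa_painless
    pct_diff = 100.0 * diff / csa_painless
    _, p_value = stats.ttest_rel(csa_painful, csa_painless)
    cohens_d = diff.mean() / diff.std(ddof=1)
    return pct_diff.mean(), p_value, cohens_d
```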
4

Buser, Myrthe A. D., Alida F. W. van der Steeg, Marc H. W. A. Wijnen, Matthijs Fitski, Harm van Tinteren, Marry M. van den Heuvel-Eibrink, Annemieke S. Littooij, and Bas H. M. van der Velden. "Radiologic versus Segmentation Measurements to Quantify Wilms Tumor Volume on MRI in Pediatric Patients." Cancers 15, no. 7 (April 1, 2023): 2115. http://dx.doi.org/10.3390/cancers15072115.

Abstract:
Wilms tumor is a common pediatric solid tumor. To evaluate tumor response to chemotherapy and decide whether nephron-sparing surgery is possible, tumor volume measurements based on magnetic resonance imaging (MRI) are important. Currently, radiological volume measurements are based on measuring tumor dimensions in three directions. Manual segmentation-based volume measurements might be more accurate, but this process is time-consuming and user-dependent. The aim of this study was to investigate whether manual segmentation-based volume measurements are more accurate and to explore whether these segmentations can be automated using deep learning. We included the MRI images of 45 Wilms tumor patients (age 0–18 years). First, we compared radiological tumor volumes with manual segmentation-based tumor volume measurements. Next, we created an automated segmentation method by training a nnU-Net in a five-fold cross-validation. Segmentation quality was validated by comparing the automated segmentation with the manually created ground truth segmentations, using Dice scores and the 95th percentile of the Hausdorff distances (HD95). On average, manual tumor segmentations result in larger tumor volumes. For automated segmentation, the median dice was 0.90. The median HD95 was 7.2 mm. We showed that radiological volume measurements underestimated tumor volume by about 10% when compared to manual segmentation-based volume measurements. Deep learning can potentially be used to replace manual segmentation to benefit from accurate volume measurements without time and observer constraints.
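The quantities compared in this study, a tri-axial "radiological" volume, a segmentation-based volume, and the HD95 used to validate the automated masks, are each a few lines of NumPy/SciPy. This is an illustrative sketch rather than the study's code: the ellipsoid model and the symmetric 95th-percentile convention are common choices, not confirmed details of the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def ellipsoid_volume_ml(d1_mm: float, d2_mm: float, d3_mm: float) -> float:
    """Radiological volume from three orthogonal tumor diameters (ellipsoid model)."""
    return (np.pi / 6.0) * d1_mm * d2_mm * d3_mm / 1000.0  # mm^3 -> mL

def segmentation_volume_ml(mask: np.ndarray, voxel_spacing_mm) -> float:
    """Volume of a binary mask given voxel spacing (z, y, x) in mm."""
    return float(mask.sum()) * float(np.prod(voxel_spacing_mm)) / 1000.0

def hd95_mm(surface_a: np.ndarray, surface_b: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two N x 3 arrays
    of boundary coordinates in mm (brute force; fine for modest surfaces)."""
    d = cdist(surface_a, surface_b)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))
```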
5

Matin-Mann, Farnaz, Ziwen Gao, Chunjiang Wei, Felix Repp, Eralp-Niyazi Artukarslan, Samuel John, Dorian Alcacer Labrador, Thomas Lenarz, and Verena Scheper. "Development and In-Silico and Ex-Vivo Validation of a Software for a Semi-Automated Segmentation of the Round Window Niche to Design a Patient Specific Implant to Treat Inner Ear Disorders." Journal of Imaging 9, no. 2 (February 20, 2023): 51. http://dx.doi.org/10.3390/jimaging9020051.

Abstract:
The aim of this study was to develop and validate a semi-automated segmentation approach that identifies the round window niche (RWN) and round window membrane (RWM) for use in the development of patient individualized round window niche implants (RNI) to treat inner ear disorders. Twenty cone beam computed tomography (CBCT) datasets of unilateral temporal bones of patients were included in the study. Defined anatomical landmarks such as the RWM were used to develop a customized 3D Slicer™ plugin for semi-automated segmentation of the RWN. Two otolaryngologists (User 1 and User 2) segmented the datasets manually and semi-automatically using the developed software. Both methods were compared in-silico regarding the resulting RWM area and RWN volume. Finally, the developed software was validated ex-vivo in N = 3 body donor implantation tests with additively manufactured RNI. The independently segmented temporal bones of the different Users showed a strong consistency in the volume of the RWN and the area of the RWM. The volume of the semi-automated RWN segmentations were 48 ± 11% smaller on average than the manual segmentations and the area of the RWM of the semi-automated segmentations was 21 ± 17% smaller on average than the manual segmentation. All additively manufactured implants, based on the semi-automated segmentation method could be implanted successfully in a pressure-tight fit into the RWN. The implants based on the manual segmentations failed to fit into the RWN and this suggests that the larger manual segmentations were over-segmentations. This study presents a semi-automated approach for segmenting the RWN and RWM in temporal bone CBCT scans that is efficient, fast, accurate, and not dependent on trained users. In addition, the manual segmentation, often positioned as the gold-standard, actually failed to pass the implantation validation.
6

Sunoqrot, Mohammed R. S., Kirsten M. Selnæs, Elise Sandsmark, Gabriel A. Nketiah, Olmo Zavala-Romero, Radka Stoyanova, Tone F. Bathen, and Mattijs Elschot. "A Quality Control System for Automated Prostate Segmentation on T2-Weighted MRI." Diagnostics 10, no. 9 (September 18, 2020): 714. http://dx.doi.org/10.3390/diagnostics10090714.

Abstract:
Computer-aided detection and diagnosis (CAD) systems have the potential to improve robustness and efficiency compared to traditional radiological reading of magnetic resonance imaging (MRI). Fully automated segmentation of the prostate is a crucial step of CAD for prostate cancer, but visual inspection is still required to detect poorly segmented cases. The aim of this work was therefore to establish a fully automated quality control (QC) system for prostate segmentation based on T2-weighted MRI. Four different deep learning-based segmentation methods were used to segment the prostate for 585 patients. First order, shape and textural radiomics features were extracted from the segmented prostate masks. A reference quality score (QS) was calculated for each automated segmentation in comparison to a manual segmentation. A least absolute shrinkage and selection operator (LASSO) was trained and optimized on a randomly assigned training dataset (N = 1756, 439 cases from each segmentation method) to build a generalizable linear regression model based on the radiomics features that best estimated the reference QS. Subsequently, the model was used to estimate the QSs for an independent testing dataset (N = 584, 146 cases from each segmentation method). The mean ± standard deviation absolute error between the estimated and reference QSs was 5.47 ± 6.33 on a scale from 0 to 100. In addition, we found a strong correlation between the estimated and reference QSs (rho = 0.70). In conclusion, we developed an automated QC system that may be helpful for evaluating the quality of automated prostate segmentations.
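The quality-control model described above is, at its core, a penalized linear regression from radiomics features to a reference quality score. A minimal scikit-learn sketch, assuming the feature matrices have already been extracted; the fold count and scaling step are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_quality_model(features_train: np.ndarray, qs_train: np.ndarray):
    """LASSO model estimating a 0-100 quality score from first-order,
    shape, and texture radiomics features of an automated prostate mask."""
    model = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0))
    model.fit(features_train, qs_train)
    return model

def mean_absolute_error(model, features_test: np.ndarray, qs_test: np.ndarray) -> float:
    """Average absolute difference between estimated and reference quality scores."""
    return float(np.mean(np.abs(model.predict(features_test) - qs_test)))
```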
7

Clark, A. E., B. Biffi, R. Sivera, A. Dall'Asta, L. Fessey, T. L. Wong, G. Paramasivam, D. Dunaway, S. Schievano, and C. C. Lees. "Developing and testing an algorithm for automatic segmentation of the fetal face from three-dimensional ultrasound images." Royal Society Open Science 7, no. 11 (November 2020): 201342. http://dx.doi.org/10.1098/rsos.201342.

Abstract:
Fetal craniofacial abnormalities are challenging to detect and diagnose on prenatal ultrasound (US). Image segmentation and computer analysis of three-dimensional US volumes of the fetal face may provide an objective measure to quantify fetal facial features and identify abnormalities. We have developed and tested an atlas-based partially automated facial segmentation algorithm; however, the volumes require additional manual segmentation (MS), which is time and labour intensive and may preclude this method from clinical adoption. These manually refined segmentations can then be used as a reference (atlas) by the partially automated segmentation algorithm to improve algorithmic performance with the aim of eliminating the need for manual refinement and developing a fully automated system. This study assesses the inter- and intra-operator variability of MS and tests an optimized version of our automatic segmentation (AS) algorithm. The manual refinements of 15 fetal faces performed by three operators and repeated by one operator were assessed by Dice score, average symmetrical surface distance and volume difference. The performance of the partially automatic algorithm with different atlas sizes was evaluated by Dice score and computational time. Assessment of the manual refinements showed low inter- and intra-operator variability, demonstrating their suitability for optimizing the AS algorithm. The algorithm showed improved performance following an increase in the atlas size, in turn reducing the need for manual refinement.
8

Nguyen, Philon, Thanh An Nguyen, and Yong Zeng. "Segmentation of design protocol using EEG." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 33, no. 1 (April 3, 2018): 11–23. http://dx.doi.org/10.1017/s0890060417000622.

Abstract:
Design protocol data analysis methods form a well-known set of techniques used by design researchers to further understand the conceptual design process. Verbal protocols are a popular technique used to analyze design activities. However, verbal protocols are known to have some limitations. A recurring problem in design protocol analysis is to segment and code protocol data into logical and semantic units. This is usually a manual step, and little work has been done on fully automated segmentation techniques. Physiological signals such as electroencephalograms (EEG) can provide assistance in solving this problem. Such problems are typical inverse problems that occur in this line of research: a thought process needs to be reconstructed from its output, an EEG signal. We propose an EEG-based method for design protocol coding and segmentation. We provide experimental validation of our methods and compare manual segmentation by domain experts to algorithmic segmentation using EEG. The best performing automated segmentation method (when manual segmentation is the baseline) is found to have an average deviation from manual segmentations of 2 s. Furthermore, EEG-based segmentation can identify cognitive structures that simple observation of design protocols cannot. EEG-based segmentation does not replace complex domain expert segmentation but rather complements it. Techniques such as verbal protocols are known to fail in some circumstances. EEG-based segmentation has the added feature that it is fully automated and can be readily integrated in engineering systems and subsystems. It is effectively a window into the mind.
9

Nishiyama, Daisuke, Hiroshi Iwasaki, Takaya Taniguchi, Daisuke Fukui, Manabu Yamanaka, Teiji Harada, and Hiroshi Yamada. "Deep generative models for automated muscle segmentation in computed tomography scanning." PLOS ONE 16, no. 9 (September 10, 2021): e0257371. http://dx.doi.org/10.1371/journal.pone.0257371.

Abstract:
Accurate gluteus medius (GMd) volume evaluation may aid in the analysis of muscular atrophy states and help gain an improved understanding of patient recovery via rehabilitation. However, the segmentation of muscle regions in GMd images for cubic muscle volume assessment is time-consuming and labor-intensive. This study automated GMd-region segmentation from the computed tomography (CT) images of patients diagnosed with hip osteoarthritis using deep learning and evaluated the segmentation accuracy. To this end, 5250 augmented pairs of training data were obtained from five participants, and a conditional generative adversarial network was used to identify the relationships between the image pairs. Using the preserved test datasets, the results of automatic segmentation with the trained deep learning model were compared to those of manual segmentation in terms of the dice similarity coefficient (DSC), volume similarity (VS), and shape similarity (MS). As observed, the average DSC values for automatic and manual segmentations were 0.748 and 0.812, respectively, with a significant difference (p < 0.0001); the average VS values were 0.247 and 0.203, respectively, with no significant difference (p = 0.069); and the average MS values were 1.394 and 1.156, respectively, with no significant difference (p = 0.308). The GMd volumes obtained by automatic and manual segmentation were 246.2 cm3 and 282.9 cm3, respectively. The noninferiority of the DSC obtained by automatic segmentation was verified against that obtained by manual segmentation. Accordingly, the proposed GAN-based automatic GMd-segmentation technique is confirmed to be noninferior to manual segmentation. Therefore, the findings of this research confirm that the proposed method not only reduces time and effort but also facilitates accurate assessment of the cubic muscle volume.
10

G, Mohandass, Hari Krishnan G, and Hemalatha R J. "An approach to automated retinal layer segmentation in SDOCT images." International Journal of Engineering & Technology 7, no. 2.25 (May 3, 2018): 56. http://dx.doi.org/10.14419/ijet.v7i2.25.12371.

Abstract:
The optical coherence tomography (OCT) imaging technique is a precise and well-known approach to the diagnosis of retinal layers. The pathological changes in the retina challenge the accuracy of computational segmentation approaches in the evaluation and identification of defects in the boundary layer. The layer segmentations and boundary detections are distorted by noise in the computation. In this work, we propose a fully automated segmentation algorithm using a denoising technique called the Boisterous Obscure Ratio (BOR) for human and mammal retina. First, the BOR is derived using noise detection, i.e., from the Robust Outlyingness Ratio (ROR). It is then applied to edge and layer detection using a gradient-based deformable contour model. Second, the image is vectorised. In this method, a cluster and column intensity grid is applied to identify and determine the unsegmented layers. Using the layer intensity and a region growth seed point algorithm, segmentation of the prominent layers is achieved. The automatic BOR method is an image segmentation process that determines the eight layers in retinal spectral domain optical coherence tomography images. The highlight of the BOR method is that the results produced are accurate, highly substantial, and effective, although time consuming.
11

Yang, Xin, Chaoyue Liu, Hung Le Minh, Zhiwei Wang, Aichi Chien, and Kwang-Ting (Tim) Cheng. "An automated method for accurate vessel segmentation." Physics in Medicine and Biology 62, no. 9 (April 6, 2017): 3757–78. http://dx.doi.org/10.1088/1361-6560/aa6418.

12

Jaware, Tushar H., K. B. Khanchandani, and Anita Zurani. "An Accurate Automated Local Similarity Factor-Based Neural Tree Approach toward Tissue Segmentation of Newborn Brain MRI." American Journal of Perinatology 36, no. 11 (December 15, 2018): 1157–70. http://dx.doi.org/10.1055/s-0038-1675375.

Abstract:
Background Segmentation of neonatal brain MR images is a primary step in assessing brain development. Advanced segmentation techniques used for adult brain MRI are not suitable for neonates because of extensive differences in tissue properties and head structure. Existing segmentation methods for neonates rely on brain atlases or require manual annotation, which results in improper, atlas-dependent segmentation. Objective The primary objective of this work is to develop a fully automatic, atlas-free, and robust system to segment and classify brain tissues of newborn infants from magnetic resonance images. Study Design In this study, we propose a fully automatic, atlas-free, pipeline-based neural tree approach for segmentation of newborn brain MRI that exploits local resemblance factors such as connectivity, structure, and relative tissue location. Manual interaction and the use of an atlas are not required in the proposed method, which avoids atlas-associated bias and results in improved segmentation. The proposed technique segments and classifies brain tissues at both the global and tissue levels. Results We examined our results through visual assessment by neonatologists and quantitative comparisons, which showed strong agreement with expert manual segmentations. The proposed technique achieved a good overall accuracy of 91.82% for the segmentation of brain tissues compared with other methods. Conclusion The pipeline-based neural tree approach with the local similarity factor segments and classifies brain tissues. The proposed automated system has a higher Dice similarity coefficient as well as higher computational speed.
13

Halawa, Abdelrahman, Shehab Gamalel-Din, and Abdurrahman Nasr. "EXPLOITING BERT FOR MALFORMED SEGMENTATION DETECTION TO IMPROVE SCIENTIFIC WRITINGS." Applied Computer Science 19, no. 2 (June 30, 2023): 126–41. http://dx.doi.org/10.35784/acs-2023-20.

Abstract:
Writing well-structured scientific documents, such as articles and theses, is vital for comprehending the document's argumentation and understanding its messages. Furthermore, it has an impact on the efficiency and time required for studying the document. Proper document segmentation also yields better results when employing automated Natural Language Processing (NLP) algorithms, including summarization and other information retrieval and analysis functions. Unfortunately, inexperienced writers, such as young researchers and graduate students, often struggle to produce well-structured professional documents. Their writing frequently exhibits improper segmentation or lacks semantically coherent segments, a phenomenon referred to as "mal-segmentation." Examples of mal-segmentation include improper paragraph or section divisions and unsmooth transitions between sentences and paragraphs. This research addresses the issue of mal-segmentation in scientific writing by introducing an automated method for detecting mal-segmentations, utilizing Sentence Bidirectional Encoder Representations from Transformers (sBERT) as the encoding mechanism. The experimental results show promising performance for the detection of mal-segmentation using the sBERT technique.
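Sentence-BERT encodes each sentence into a fixed-size vector, which makes simple coherence checks between adjacent sentences straightforward. The sketch below, using the sentence-transformers package, flags weak transitions by cosine similarity; the model name, threshold, and the flagging heuristic itself are illustrative assumptions rather than the paper's classifier.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def weak_transitions(sentences, threshold=0.25):
    """Return (i, i+1, similarity) for adjacent sentence pairs whose sBERT
    cosine similarity falls below the threshold, a rough proxy for the
    'unsmooth transition' type of mal-segmentation described above."""
    embeddings = model.encode(sentences, convert_to_tensor=True)
    flagged = []
    for i in range(len(sentences) - 1):
        sim = util.cos_sim(embeddings[i], embeddings[i + 1]).item()
        if sim < threshold:
            flagged.append((i, i + 1, sim))
    return flagged
```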
14

Bowes, Michael Antony, Gwenael Alain Guillard, Graham Richard Vincent, Alan Donald Brett, Christopher Brian Hartley Wolstenholme, and Philip Gerard Conaghan. "Precision, Reliability, and Responsiveness of a Novel Automated Quantification Tool for Cartilage Thickness: Data from the Osteoarthritis Initiative." Journal of Rheumatology 47, no. 2 (April 15, 2019): 282–89. http://dx.doi.org/10.3899/jrheum.180541.

Abstract:
Objective. Accurate automated segmentation of cartilage should provide rapid reliable outcomes for both epidemiological studies and clinical trials. We aimed to assess the precision and responsiveness of cartilage thickness measured with careful manual segmentation or a novel automated technique. Methods. Agreement of automated segmentation was assessed against 2 manual segmentation datasets: 379 magnetic resonance images manually segmented in-house (training set), and 582 from the Osteoarthritis Initiative with data available at 0, 1, and 2 years (biomarkers set). Agreement of mean thickness was assessed using Bland-Altman plots, and change with pairwise Student t test in the central medial femur (cMF) and tibia regions (cMT). Repeatability was assessed on a set of 19 knees imaged twice on the same day. Responsiveness was assessed using standardized response means (SRM). Results. Agreement of manual versus automated methods was excellent with no meaningful systematic bias (training set: cMF bias 0.1 mm, 95% CI ± 0.35; biomarkers set: bias 0.1 mm ± 0.4). The smallest detectable difference for cMF was 0.13 mm (coefficient of variation 3.1%), and for cMT 0.16 mm (2.65%). Reported change using manual segmentations in the cMF region at 1 year was −0.031 mm (95% CI −0.022, −0.039), p < 10⁻⁴, SRM −0.31 (−0.23, −0.38); and at 2 years was −0.071 (−0.058, −0.085), p < 10⁻⁴, SRM −0.43 (−0.36, −0.49). Reported change using automated segmentations in the cMF at 1 year was −0.059 (−0.047, −0.071), p < 10⁻⁴, SRM −0.41 (−0.34, −0.48); and at 2 years was −0.14 (−0.123, −0.157), p < 10⁻⁴, SRM −0.67 (−0.6, −0.72). Conclusion. A novel cartilage segmentation method provides highly accurate and repeatable measures with cartilage thickness measurements comparable to those of careful manual segmentation, but with improved responsiveness.
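The responsiveness and precision statistics used above, the standardized response mean and the smallest detectable difference from same-day repeat scans, are short NumPy computations. A sketch under assumed inputs; the 1.96·√2·SEM convention for the SDD is one common choice, not necessarily the paper's exact formula.

```python
import numpy as np

def standardized_response_mean(change: np.ndarray) -> float:
    """SRM = mean longitudinal change / standard deviation of that change."""
    return change.mean() / change.std(ddof=1)

def smallest_detectable_difference(test: np.ndarray, retest: np.ndarray) -> float:
    """SDD from same-day repeat measurements, using 1.96 * sqrt(2) * SEM
    with SEM taken as the SD of the test-retest differences / sqrt(2)."""
    diff = test - retest
    sem = diff.std(ddof=1) / np.sqrt(2)
    return 1.96 * np.sqrt(2) * sem
```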
15

Pociask, Elżbieta, Krzysztof Piotr Malinowski, Magdalena Ślęzak, Joanna Jaworek-Korjakowska, Wojciech Wojakowski, and Tomasz Roleder. "Fully Automated Lumen Segmentation Method for Intracoronary Optical Coherence Tomography." Journal of Healthcare Engineering 2018 (December 26, 2018): 1–13. http://dx.doi.org/10.1155/2018/1414076.

Abstract:
Background. Optical coherence tomography (OCT) is an innovative imaging technique that generates high-resolution intracoronary images. In the last few years, the need for more precise analysis regarding coronary artery disease to achieve optimal treatment has made intravascular imaging an area of primary importance in interventional cardiology. One of the main challenges in OCT image analysis is the accurate detection of the lumen, which is significant for further prognosis. Method. In this research, we present a new approach to the segmentation of the lumen in OCT images. The proposed work is focused on designing an efficient automatic algorithm containing the following steps: preprocessing (artifact removal: speckle noise, circular rings, and guide wire), conversion between polar and Cartesian coordinates, and the segmentation algorithm. Results. The implemented method was tested on 667 OCT frames. The lumen border was extracted with a high correlation compared to the ground truth: ICC 0.97 (0.97–0.98). Conclusions. The proposed algorithm allows for fully automated lumen segmentation on optical coherence tomography images. This tool may be applied to automated quantitative lumen analysis.
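A typical way to implement the polar/Cartesian conversion step mentioned above is OpenCV's warpPolar, which unwraps each frame around the catheter centre so the lumen border becomes a roughly horizontal curve before border tracing. A sketch under assumptions: the catheter is taken to sit at the image centre, and the output size and radius are illustrative.

```python
import cv2
import numpy as np

def to_polar(frame: np.ndarray) -> np.ndarray:
    """Unwrap a Cartesian OCT frame into (angle, radius) coordinates."""
    h, w = frame.shape[:2]
    center = (w / 2.0, h / 2.0)          # assumed catheter position
    max_radius = min(h, w) / 2.0
    return cv2.warpPolar(frame, (w, h), center, max_radius, cv2.WARP_POLAR_LINEAR)

def to_cartesian(polar_frame: np.ndarray) -> np.ndarray:
    """Map a polar-domain result (e.g. a traced lumen border mask) back."""
    h, w = polar_frame.shape[:2]
    center = (w / 2.0, h / 2.0)
    max_radius = min(h, w) / 2.0
    return cv2.warpPolar(polar_frame, (w, h), center, max_radius,
                         cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)
```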
16

MASSOPTIER, LAURENT, AVISHKAR MISRA, ARCOT SOWMYA, and SERGIO CASCIARO. "COMBINING GRAPH-CUT TECHNIQUE AND ANATOMICAL KNOWLEDGE FOR AUTOMATIC SEGMENTATION OF LUNGS AFFECTED BY DIFFUSE PARENCHYMAL DISEASE IN HRCT IMAGES." International Journal of Image and Graphics 11, no. 04 (October 2011): 509–29. http://dx.doi.org/10.1142/s0219467811004202.

Abstract:
Accurate and automated lung segmentation in high-resolution computed tomography (HRCT) is highly challenged by the presence of pathologies affecting lung parenchyma appearance and borders. The algorithm presented employs an anatomical model-driven approach and systematic incremental knowledge acquisition to produce coarse lung delineation, used as initialization for the graph-cut algorithm. The proposed method is evaluated on a 49-case HRCT dataset including various lung disease patterns. The accuracy of the method is assessed using the Dice similarity coefficient (DSC) and shape differentiation metrics (d_mean, d_rms), by comparing the outputs of automatic lung segmentations and manual ones. The proposed automatic method demonstrates high segmentation accuracy (DSC = 96.64%, d_mean = 1.75 mm, d_rms = 3.27 mm) with low variation that depends on the lung disease pattern. It also presents good improvement over the initial lung segmentation (ΔDSC = 4.74%, Δd_mean = −3.67 mm, Δd_rms = −6.25 mm), including impressive amelioration (maximum values of ΔDSC = 58.22% and Δd_mean = −78.66 mm) when the anatomy-driven algorithm reaches its limit. Segmentation evaluation shows that the method can accurately segment lungs even in the presence of disease patterns, with some limitations in the apices and bases of the lungs. Therefore, the developed automatic segmentation method is a good candidate for the first stage of a computer-aided diagnosis system for diffuse lung diseases.
17

Bouzid-Daho, Abdellatif, Naima Sofi, Schahrazad Soltane, and Patrick Siarry. "Automated detection in microscopic images using segmentation." Brazilian Journal of Technology 7, no. 2 (April 30, 2024): e69317. http://dx.doi.org/10.38152/bjtv7n2-003.

Abstract:
In this paper, we present a segmentation clustering-based approach for automated object detection, dealing with the segmentation and classification of blood cells for the purpose of detecting leukemia (abnormal blood cells). After image acquisition and a preprocessing step, we applied the k-means method. To show the interest of the proposed approach, we present the different cancerous regions identified, with their characteristics, as an aid to biomedical diagnosis. The proposed method was tested on an image dataset and achieves 98% segmentation accuracy. The proposed system was implemented in MATLAB, and the experimental results demonstrate that our approach offers encouraging performance and good-quality automatic leukemia detection.
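The clustering step can be illustrated with scikit-learn: pixels are grouped by colour, and the cluster corresponding to the darkly stained cells is then inspected as candidate abnormal regions. This is a generic k-means colour-segmentation sketch, not the authors' MATLAB pipeline; the cluster count and names are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_cell_segmentation(rgb_image: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Cluster pixels of a blood-smear image by colour and return a label map;
    downstream steps can pick the cluster with the darkest mean colour as
    candidate leukocyte/blast regions."""
    h, w, c = rgb_image.shape
    pixels = rgb_image.reshape(-1, c).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(h, w)
```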
18

Sun, Yusen, Xingji Jin, Timo Pukkala, and Fengri Li. "A Comparison of Four Methods for Automatic Delineation of Tree Stands from Grids of LiDAR Metrics." Remote Sensing 14, no. 24 (December 7, 2022): 6192. http://dx.doi.org/10.3390/rs14246192.

Abstract:
Increased use of laser scanning in forest inventories is leading to the adoption and development of automated stand delineation methods. The most common categories of these methods are region merging and region growing. However, recent literature proposes alternative methods that are based on the ideas of cellular automata, self-organizing maps, and combinatorial optimization. The studies where these methods have been described suggest that the new methods are potential options for the automated segmentation of a forest into homogeneous stands. However, no studies are available that compare the new methods to each other and to the traditional region-merging and region-growing algorithms. This study provided a detailed comparison of four methods using LiDAR metrics calculated for grids of 5 m by 5 m raster cells as the data. The tested segmentation methods were region growing (RG), cellular automaton (CA), self-organizing map (SOM), and simulated annealing (SA), which is a heuristic algorithm developed for combinatorial optimization. The case study area was located in the Heilongjiang province of northeast China. The LiDAR data were collected from an unmanned aerial vehicle for three 1500-ha test areas. The proportion of variation in the LiDAR metrics that was explained by the segmentation was mostly the best for the SA method. The RG method produced more heterogeneous segments than the other methods. The CA method resulted in the smallest number of segments and the largest average segment area. The proportion of small segments (smaller than 0.3 ha) was the highest in the RG method while the SA method always produced the fewest small stands. The shapes of the segments were the best (most circular) for the CA and SA methods, but the shape metrics were good for all methods. The results of the study suggest that CA, SOM, and SA may all outperform RG in automated stand delineation.
19

Xiong, Hui, Laith R. Sultan, Theodore W. Cary, Susan M. Schultz, Ghizlane Bouzghar, and Chandra M. Sehgal. "The diagnostic performance of leak-plugging automated segmentation versus manual tracing of breast lesions on ultrasound images." Ultrasound 25, no. 2 (January 25, 2017): 98–106. http://dx.doi.org/10.1177/1742271x17690425.

Abstract:
Purpose To assess the diagnostic performance of a leak-plugging segmentation method that we have developed for delineating breast masses on ultrasound images. Materials and methods Fifty-two biopsy-proven breast lesion images were analyzed by three observers using the leak-plugging and manual segmentation methods. From each segmentation method, grayscale and morphological features were extracted and classified as malignant or benign by logistic regression analysis. The performance of leak-plugging and manual segmentations was compared by: size of the lesion, overlap area (Oa) between the margins, and area under the ROC curves (Az). Results The lesion size from leak-plugging segmentation correlated closely with that from manual tracing (R² of 0.91). Oa was higher for leak plugging, 0.92 ± 0.01 and 0.86 ± 0.06 for benign and malignant masses, respectively, compared to 0.80 ± 0.04 and 0.73 ± 0.02 for manual tracings. Overall Oa between leak-plugging and manual segmentations was 0.79 ± 0.14 for benign and 0.73 ± 0.14 for malignant lesions. Az for leak plugging was consistently higher (0.910 ± 0.003) compared to 0.888 ± 0.012 for manual tracings. The coefficient of variation of Az between three observers was 0.29% for leak plugging compared to 1.3% for manual tracings. Conclusion The diagnostic performance, size measurements, and observer variability for automated leak-plugging segmentations were either comparable to or better than those of manual tracings.
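The classification step described above, logistic regression on grayscale and morphological features with Az as the endpoint, can be sketched with scikit-learn. The cross-validation scheme and preprocessing are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def lesion_classification_az(features: np.ndarray, is_malignant: np.ndarray) -> float:
    """Az (area under the ROC curve) for benign/malignant classification from
    features extracted inside a lesion segmentation."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    prob = cross_val_predict(clf, features, is_malignant, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(is_malignant, prob)
```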
20

Golkar, Ehsan, Hossein Rabbani, and Ashrani Aizzuddin Abd. Rahni. "Inter-subject Registration-based Segmentation of Thoracic-Abdominal Organs in 4 Dimensional Magnetic Resonance Imaging." Jurnal Kejuruteraan 33, no. 4 (November 30, 2021): 1045–51. http://dx.doi.org/10.17576/jkukm-2021-33(4)-26.

Abstract:
4 Dimensional Magnetic Resonance Imaging (4D MRI) is currently gaining attention as an imaging modality which is able to capture inter-cycle variability of respiratory motion. Such information is beneficial for example in radiotherapy planning and delivery. In the latter case, there may be a need for organ segmentation, however 4D MRI are of low contrast, which complicates automated organ segmentation. This paper proposes a multi-subject thoracic-abdominal organ segmentation propagation scheme for 4D MRI. The proposed scheme is registration based, hence different combinations of deformation and similarity measures are used. For deformation we used either just an affine transformation or additionally free form deformation on top of an affine transform. For similarity measure, either the sum of squared intensity differences or normalised mutual information is used. Segmentations from multiple subjects are registered to a target MRI and the average segmentation is found. The result of the method is compared with the ground truth which is generated from a semi-automated segmentation method. The results are quantified using the Jaccard index and Hausdorff distance. The results show that using free form deformation with a sum of squared intensity differences similarity measure produces an acceptable segmentation of the organs with an overall Jaccard index of over 0.5. Hence, the proposed scheme can be used as a basis for automated organ segmentation in 4D MRI.
21

Tran, Carol, Orit Glenn, Christopher Hess, and Andreas Rauschecker. "4252 Automated Fetal Brain Volumetry on Clinical Fetal MRI Using Convolutional Neural Network." Journal of Clinical and Translational Science 4, s1 (June 2020): 45–46. http://dx.doi.org/10.1017/cts.2020.169.

Abstract:
OBJECTIVES/GOALS: We seek to develop an automated deep learning-based method for segmentation and volumetric quantification of the fetal brain on T2-weighted fetal MRIs. We will evaluate the performance of the algorithm by comparing it to gold standard manual segmentations. The method will be used to create a normative sample of brain volumes across gestational ages. METHODS/STUDY POPULATION: We will adapt a U-Net convolutional neural network architecture for fetal brain MRIs using 3D volumes. After re-sampling 2D fetal brain acquisitions to 3mm3 3D volumes using linear interpolation, the network will be trained to perform automated brain segmentation on 40 randomly-sampled, normal fetal brain MRI scans of singleton pregnancies. Training will be performed in 3 acquisition planes (axial, coronal, sagittal). Performance will be evaluated on 10 test MRIs (in 3 acquisition planes, 30 total test samples) using Dice scores, compared to radiologists’ manual segmentations. The algorithm’s performance on measuring total brain volume will also be evaluated. RESULTS/ANTICIPATED RESULTS: Based on the success of prior U-net architectures for volumetric segmentation tasks in medical imaging (e.g. Duong et al., 2019), we anticipate that the convolutional neural network will accurately provide segmentations and associated volumetry of fetal brains in fractions of a second. We anticipate median Dice scores greater than 0.8 across our test sample. Once validated, the method will retrospectively generate a normative database of over 1500 fetal brain volumes across gestational ages (18 weeks to 30 weeks) collected at our institution. DISCUSSION/SIGNIFICANCE OF IMPACT: Quantitative estimates of brain volume, and deviations from normative data, would be a major advancement in objective clinical assessments of fetal MRI. Such data can currently only be obtained through laborious manual segmentations; automated deep learning methods have the potential to reduce the time and cost of this process.
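The pre-processing step described above, re-sampling stacked 2D acquisitions onto a coarse isotropic grid with linear interpolation, can be sketched with SciPy. The spacing handling and target voxel size are assumptions for illustration, not the project's actual code.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(volume: np.ndarray, spacing_mm, new_spacing_mm: float = 3.0) -> np.ndarray:
    """Linearly interpolate a (z, y, x) stack onto an isotropic grid.

    spacing_mm gives the original voxel spacing per axis; order=1 gives
    trilinear interpolation."""
    factors = [s / new_spacing_mm for s in spacing_mm]
    return zoom(volume, zoom=factors, order=1)
```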
22

Jin, Felix Q., Anna E. Knight, Adela R. Cardones, Kathryn R. Nightingale, and Mark L. Palmeri. "Semi-automated weak annotation for deep neural network skin thickness measurement." Ultrasonic Imaging 43, no. 4 (May 11, 2021): 167–74. http://dx.doi.org/10.1177/01617346211014138.

Abstract:
Correctly calculating skin stiffness with ultrasound shear wave elastography techniques requires an accurate measurement of skin thickness. We developed and compared two algorithms, a thresholding method and a deep learning method, to measure skin thickness on ultrasound images. Here, we also present a framework for weakly annotating an unlabeled dataset in a time-effective manner to train the deep neural network. Segmentation labels for training were proposed using the thresholding method and validated with visual inspection by a human expert reader. We reduced decision ambiguity by only inspecting segmentations at the center A-line. This weak annotation approach facilitated validation of over 1000 segmentation labels in 2 hours. A lightweight deep neural network that segments entire 2D images was designed and trained on this weakly-labeled dataset. Averaged over six folds of cross-validation, segmentation accuracy was 57% for the thresholding method and 78% for the neural network. In particular, the network was better at finding the distal skin margin, which is the primary challenge for skin segmentation. Both algorithms have been made publicly available to aid future applications in skin characterization and elastography.
23

Jiang, Huiyan, Shaojie Li, and Siqi Li. "Registration-Based Organ Positioning and Joint Segmentation Method for Liver and Tumor Segmentation." BioMed Research International 2018 (September 24, 2018): 1–11. http://dx.doi.org/10.1155/2018/8536854.

Abstract:
The automated segmentation of liver and tumor from CT images is of great importance in medical diagnoses and clinical treatment. However, accurate and automatic segmentation of liver and tumor is generally complicated due to the complex anatomical structures and low contrast. This paper proposes a registration-based organ positioning (ROP) and joint segmentation method for liver and tumor segmentation from CT images. First, a ROP method is developed to obtain liver’s bounding box accurately and efficiently. Second, a joint segmentation method based on fuzzy c-means (FCM) and extreme learning machine (ELM) is designed to perform coarse liver segmentation. Third, the coarse segmentation is regarded as the initial contour of active contour model (ACM) to refine liver boundary by considering the topological information. Finally, tumor segmentation is performed using another ELM. Experiments on two datasets demonstrate the performance advantages of our proposed method compared with other related works.
24

Choi, Woorim, Chul-Ho Kim, Hyein Yoo, Hee Rim Yun, Da-Wit Kim, and Ji Wan Kim. "Development and validation of a reliable method for automated measurements of psoas muscle volume in CT scans using deep learning-based segmentation: a cross-sectional study." BMJ Open 14, no. 5 (May 2024): e079417. http://dx.doi.org/10.1136/bmjopen-2023-079417.

Abstract:
Objectives We aimed to develop an automated method for measuring the volume of the psoas muscle using CT to aid sarcopenia research efficiently. Methods We used a data set comprising the CT scans of 520 participants who underwent health check-ups at a health promotion centre. We developed a psoas muscle segmentation model using deep learning in a three-step process based on the nnU-Net method. The automated segmentation method was evaluated for accuracy, reliability, and time required for the measurement. Results The Dice similarity coefficient was used to compare the manual segmentation with automated segmentation; an average Dice score of 0.927 ± 0.019 was obtained, with no critical outliers. Our automated segmentation system had an average measurement time of 2 min 20 s ± 20 s, which was 48 times shorter than that of the manual measurement method (111 min 6 s ± 25 min 25 s). Conclusion We have successfully developed an automated segmentation method to measure the psoas muscle volume that ensures consistent and unbiased estimates across a wide range of CT images.
25

Lee, Seyoung, Kai Zhang, Jeeyeon Lee, Peter Haseok Kim, Amogh Hiremath, Salie Lee, Monica Yadav, et al. "Abstract 2595: Accelerated and precise tumor segmentation in NSCLC: A comparative analysis of automated ClickSeg and manual annotation for radiomics." Cancer Research 84, no. 6_Supplement (March 22, 2024): 2595. http://dx.doi.org/10.1158/1538-7445.am2024-2595.

Abstract:
Abstract Background: Radiomics models utilizing artificial intelligence are being explored as a potential biomarker in the field of oncology. Radiomics analysis requires segmentations of radiographic imaging. However, manual segmentation is a labor-intensive process that is time consuming, and acts as a major rate-limiting step. Thus the development of automated segmentation tools presents an opportunity for innovation in regards to efficiency and precision. Our study aims to explore the feasibility of autosegmentation in comparison with manual segmentation. Methods: A cohort of 105 stage III-IV non-small cell lung cancer patients receiving immunotherapy was examined, with a total of 168 lesions. We conducted a comprehensive comparative analysis of manual versus automated segmentation in time efficiency and segmentation quality, utilizing CT scans from the patients. Manual segmentation was performed by three physicians annotating in consensus using LIFEx software v7.3.0 (IMIV/CEA, Orsay, France). For a subset of randomly selected patients, annotation time was recorded as a reference point for the automated segmentation. In addition, automated segmentation was performed on all 168 lesions using ClickSeg from PictureHealth, a click-based interactive segmentation algorithm optimized for NSCLC lesions. Segmentation quality was evaluated using the Dice similarity coefficient to measure the concordance between ClickSeg outputs and consensus expert annotations. Results: Average manual annotation time of the tumor region was 15min 34sec, and ranged from 1min 2sec to 53min 12sec.. Average prediction time for the automated segmentation model was 1.94 seconds, and ranged from 0.71 seconds to 4.39 seconds. The median Dice coefficient, reflecting segmentation accuracy, was 0.72, highlighting the robust performance of the automated method. Conclusion: Our study demonstrates the efficient and accurate nature of automated segmentation supporting its potential integration into routine radiomic analysis procedures, streamlining the assessment of treatment response and ultimately improving patient care. Citation Format: Seyoung Lee, Kai Zhang, Jeeyeon Lee, Peter Haseok Kim, Amogh Hiremath, Salie Lee, Monica Yadav, Maria J. Chuchuca, Taegyu Um, Myungwoo Nam, Liam Il-Young Chung, Hye Sung Kim, Jisang Yu, Trie Arni Djunadi, Leeseul Kim, Youjin Oh, Sungmi Yoon, Zunairah Shah, Yuchan Kim, Ilene Hong, Grace Kang, Jessica Jang, Amy Cho, Soowon Lee, Cecilia Nam, Timothy Hong, Yuri S. Velichko, Anant Madabhushi, Nathaniel Braman, Young Kwang Chae. Accelerated and precise tumor segmentation in NSCLC: A comparative analysis of automated ClickSeg and manual annotation for radiomics [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2024; Part 1 (Regular Abstracts); 2024 Apr 5-10; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2024;84(6_Suppl):Abstract nr 2595.
26

Zemborain, Zane Zenon, Matias Soifer, Nadim S. Azar, Sofia Murillo, Hazem M. Mousa, Victor L. Perez, and Sina Farsiu. "Open-Source Automated Segmentation of Neuronal Structures in Corneal Confocal Microscopy Images of the Subbasal Nerve Plexus With Accuracy on Par With Human Segmentation." Cornea 42, no. 10 (July 7, 2023): 1309–19. http://dx.doi.org/10.1097/ico.0000000000003319.

Abstract:
Purpose: The aim of this study was to perform automated segmentation of corneal nerves and other structures in corneal confocal microscopy (CCM) images of the subbasal nerve plexus (SNP) in eyes with ocular surface diseases (OSDs). Methods: A deep learning–based 2-stage algorithm was designed to perform segmentation of SNP features. In the first stage, to address applanation artifacts, a generative adversarial network–enabled deep network was constructed to identify 3 neighboring corneal layers on each CCM image: epithelium, SNP, and stroma. This network was trained/validated on 470 images of each layer from 73 individuals. The segmented SNP regions were further classified in the second stage by another deep network as follows: background, nerve, neuroma, and immune cells. Twenty-one-fold cross-validation was used to assess the performance of the overall algorithm on a separate data set of 207 manually segmented SNP images from 43 patients with OSD. Results: For the background, nerve, neuroma, and immune cell classes, the Dice similarity coefficients of the proposed automatic method were 0.992, 0.814, 0.748, and 0.736, respectively. The performance metrics for automatic segmentations were statistically better or equal as compared to human segmentation. In addition, the resulting clinical metrics had good to excellent intraclass correlation coefficients between automatic and human segmentations. Conclusions: The proposed automatic method can reliably segment potential CCM biomarkers of OSD onset and progression with accuracy on par with human gradings in real clinical data, which frequently exhibited image acquisition artifacts. To facilitate future studies on OSD, we made our data set and algorithms freely available online as an open-source software package.
27

Moëll, Mattias K., and Lloyd A. Donaldson. "COMPARISON OF SEGMENTATION METHODS FOR DIGITAL IMAGE ANALYSIS OF CONFOCAL MICROSCOPE IMAGES TO MEASURE TRACHEID CELL DIMENSIONS." IAWA Journal 22, no. 3 (2001): 267–88. http://dx.doi.org/10.1163/22941932-90000284.

Abstract:
Image analysis is a common tool for measuring tracheid cell dimensions. When analyzing a digital image of a transverse cross section of wood, one of the initial procedures is that of segmentation. This involves classifying a picture element (pixel) as either cell wall or lumen. The accuracy of tracheid measurements is dependent on how well the result of the segmentation procedure corresponds to the true distributions of cell wall or lumen pixels. In this paper a comparison of segmentation methods is given. The effect of segmentation method on measurements is investigated and the performance of each method is discussed. We demonstrate that automated segmentation methods remove observer bias and are thus capable of more reproducible results. The contrast for confocal microscope images is of such quality that one of the fastest and simplest automatic segmentation methods may be used.
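Global automatic thresholding is exactly the kind of "fastest and simplest" automatic segmentation the conclusion refers to. The sketch below uses Otsu's threshold from scikit-image as one representative example; it is not a reproduction of the specific methods compared in the paper.

```python
import numpy as np
from skimage.filters import threshold_otsu

def segment_wall_lumen(gray_section: np.ndarray) -> np.ndarray:
    """Classify each pixel of a transverse confocal section as cell wall or
    lumen with a single, automatically chosen global threshold."""
    t = threshold_otsu(gray_section)
    return gray_section > t  # True = cell wall (brighter), False = lumen
```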
28

Yu, Zechen, Zhongping Chen, Yang Yu, Haichen Zhu, Dan Tong, and Yang Chen. "An automated ASPECTS method with atlas-based segmentation." Computer Methods and Programs in Biomedicine 210 (October 2021): 106376. http://dx.doi.org/10.1016/j.cmpb.2021.106376.

29

Weiwei, Xing, Wang Weiqiang, Bao Peng, Sun Liya, and Tong Leiming. "A novel method for automated human behavior segmentation." Computer Animation and Virtual Worlds 27, no. 5 (April 12, 2016): 501–14. http://dx.doi.org/10.1002/cav.1690.

30

Wan, Guo Chun, Meng Meng Li, He Xu, Wen Hao Kang, Jin Wen Rui, and Mei Song Tong. "XFinger-Net: Pixel-Wise Segmentation Method for Partially Defective Fingerprint Based on Attention Gates and U-Net." Sensors 20, no. 16 (August 10, 2020): 4473. http://dx.doi.org/10.3390/s20164473.

Abstract:
Partially defective fingerprint image (PDFI) with poor performance poses challenges to the automated fingerprint identification system (AFIS). To improve the quality and the performance rate of PDFI, it is essential to use accurate segmentation. Currently, most fingerprint image segmentations use methods with ridge orientation, ridge frequency, coherence, variance, local gradient, etc. This paper proposes a method of XFinger-Net for segmenting PDFIs. Based on U-Net, XFinger-Net inherits its characteristics. The attention gate with fewer parameters is used to replace the cascaded network, which can suppress uncorrelated regions of PDFIs. Moreover, the XFinger-Net implements a pixel-level segmentation and takes non-blocking fingerprint images as an input to preserve the global characteristics of PDFIs. The XFinger-Net can achieve a very good segmentation effect as demonstrated in the self-made fingerprint segmentation test.
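The attention gate that XFinger-Net attaches to the U-Net skip connections follows the standard additive formulation: a gating signal from a coarser decoder level re-weights encoder features so that uncorrelated regions are suppressed before concatenation. Below is a minimal PyTorch sketch of such a gate, assuming the two inputs already share a spatial size; it shows the generic mechanism, not the authors' exact layer configuration.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: sigmoid(psi(ReLU(Wg*g + Wx*x))) scales x."""

    def __init__(self, g_channels: int, x_channels: int, inter_channels: int):
        super().__init__()
        self.w_g = nn.Sequential(nn.Conv2d(g_channels, inter_channels, 1),
                                 nn.BatchNorm2d(inter_channels))
        self.w_x = nn.Sequential(nn.Conv2d(x_channels, inter_channels, 1),
                                 nn.BatchNorm2d(inter_channels))
        self.psi = nn.Sequential(nn.Conv2d(inter_channels, 1, 1),
                                 nn.BatchNorm2d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # g: gating signal, x: encoder skip feature (same H x W assumed here;
        # in a full U-Net the gating signal is usually upsampled first).
        attn = self.psi(self.relu(self.w_g(g) + self.w_x(x)))
        return x * attn
```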
31

Wang, Yuliang, Tongda Lu, Xiaolai Li, Shuai Ren, and Shusheng Bi. "Robust nanobubble and nanodroplet segmentation in atomic force microscope images using the spherical Hough transform." Beilstein Journal of Nanotechnology 8 (December 1, 2017): 2572–82. http://dx.doi.org/10.3762/bjnano.8.257.

Abstract:
Interfacial nanobubbles (NBs) and nanodroplets (NDs) have been attracting increasing attention due to their potential for numerous applications. As a result, the automated segmentation and morphological characterization of NBs and NDs in atomic force microscope (AFM) images is highly awaited. The current segmentation methods suffer from the uneven background in AFM images due to thermal drift and hysteresis of AFM scanners. In this study, a two-step approach was proposed to segment NBs and NDs in AFM images in an automated manner. The spherical Hough transform (SHT) and a boundary optimization operation were combined to achieve robust segmentation. The SHT was first used to preliminarily detect NBs and NDs. After that, the so-called contour expansion operation was applied to achieve optimized boundaries. The principle and the detailed procedure of the proposed method were presented, followed by the demonstration of the automated segmentation and morphological characterization. The result shows that the proposed method gives an improved segmentation result compared with the thresholding and circle Hough transform method. Moreover, the proposed method shows strong robustness of segmentation in AFM images with an uneven background.
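The detection stage can be approximated in 2D with scikit-image: edge pixels of the AFM height map vote in a circular Hough accumulator, and the strongest peaks give candidate nanobubble/nanodroplet footprints that a boundary-refinement step (such as the contour expansion described above) can then optimize. This is a circle-Hough stand-in for the spherical Hough transform; the radius range, peak count, and normalization are illustrative assumptions.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_cap_candidates(height_map: np.ndarray,
                          radii=np.arange(10, 60, 2),
                          n_peaks: int = 20):
    """Return (x, y, radius) tuples for circular footprints detected in an
    AFM height image via a circle Hough transform."""
    img = height_map - height_map.min()
    img = img / img.max() if img.max() > 0 else img  # normalize to [0, 1]
    edges = canny(img, sigma=2.0)
    accumulator = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(accumulator, radii, total_num_peaks=n_peaks)
    return list(zip(cx, cy, r))
```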
32

Mukondiwa, Daisy Thembelihle, YongTao Shi, and Chao Gao. "A Prostate Boundary Localization and Edge Denoising Algorithm." East African Journal of Information Technology 7, no. 1 (April 30, 2024): 108–20. http://dx.doi.org/10.37284/eajit.7.1.1900.

Abstract:
This research presents a two-step method for prostate segmentation in TRUS images, using a prostate boundary localization and prostate edge denoising approach. The proposed method's contribution is the use of the optimized Hodge's method as the boundary operator and the use of the bidirectional exponential moving average to perform edge denoising. The results showed that the proposed method is effective in completing the prostate segmentation task: (1) the prostate region is effectively initialized and localized; (2) the recovery of noise points is accomplished, and the segmentation result is consistent with the general shape of the prostate. The experimental results showed that this method can improve the overall segmentation accuracy. The process uses a combination of traditional and unsupervised methods, eliminating the need to rely on large data sets compared to current deep learning methods. The proposed method achieved excellent segmentation accuracy, with a Dice similarity coefficient (DICE) of 0.9679, an average Intersection over Union (IoU) value of 0.9377, and an average False Positive Rate (FPR) of 0.0399. The results obtained from this study have significant implications for clinical practice. Accurate prostate segmentation is crucial for various applications, including radiation therapy planning, image-guided interventions, and computer-aided diagnosis. The proposed method has the potential to improve these applications by providing more precise and reliable prostate segmentations. However, it is important to acknowledge some limitations of this study. First, the proposed method was evaluated on a limited dataset, which may not fully represent the diversity of prostate images encountered in clinical practice. Further validation on larger datasets is necessary to assess its generalizability. Additionally, the proposed method relied on manual annotations for training, which can introduce inter-observer variability. Incorporating automated or semi-automated annotation techniques could enhance the robustness of the method.
APA, Harvard, Vancouver, ISO, and other styles
33

Iglesias-Rey, Sara, Felipe Antunes-Santos, Cathleen Hagemann, David Gómez-Cabrero, Humberto Bustince, Rickie Patani, Andrea Serio, Bernard De Baets, and Carlos Lopez-Molina. "Unsupervised Cell Segmentation and Labelling in Neural Tissue Images." Applied Sciences 11, no. 9 (April 21, 2021): 3733. http://dx.doi.org/10.3390/app11093733.

Full text
Abstract:
Neurodegenerative diseases are a group of largely incurable disorders characterised by the progressive loss of neurons and for which the molecular mechanisms are often poorly understood. To bridge this gap, researchers employ a range of techniques. A very prominent and useful technique adopted across many different fields is imaging and the analysis of histopathological and fluorescently labelled tissue samples. Although image acquisition has been efficiently automated recently, automated analysis still presents a bottleneck. While various methods have been developed to automate this task, they tend to make use of single-purpose machine learning models that require extensive training, imposing a significant workload on the experts and introducing variability in the analysis. Moreover, these methods are impractical to audit and adapt, as their internal parameters are difficult to interpret and change. Here, we present a novel unsupervised automated schema for object segmentation of images, exemplified on a dataset of tissue images. Our schema does not require training data, can be fully audited and is based on a series of understandable biological decisions. In order to evaluate and validate our schema, we compared it with a state-of-the-art automated segmentation method for post-mortem tissues of ALS patients.
APA, Harvard, Vancouver, ISO, and other styles
34

Arafati, Arghavan, Daisuke Morisawa, Michael R. Avendi, M. Reza Amini, Ramin A. Assadi, Hamid Jafarkhani, and Arash Kheradvar. "Generalizable fully automated multi-label segmentation of four-chamber view echocardiograms based on deep convolutional adversarial networks." Journal of The Royal Society Interface 17, no. 169 (August 2020): 20200267. http://dx.doi.org/10.1098/rsif.2020.0267.

Full text
Abstract:
A major issue in translation of artificial intelligence platforms for automatic segmentation of echocardiograms to clinics is their generalizability. The present study introduces and verifies a novel generalizable and efficient fully automatic multi-label segmentation method for four-chamber view echocardiograms based on deep fully convolutional networks (FCNs) and adversarial training. For the first time, we used generative adversarial networks for pixel classification training, a novel method in machine learning not currently used for cardiac imaging, to overcome the generalization problem. The method's performance was validated against manual segmentations as the ground truth. Furthermore, to verify our method's generalizability in comparison with other existing techniques, we compared our method's performance with a state-of-the-art method on our dataset in addition to an independent dataset of 450 patients from the CAMUS (cardiac acquisitions for multi-structure ultrasound segmentation) challenge. On our test dataset, automatic segmentation of all four chambers achieved a Dice metric of 92.1%, 86.3%, 89.6% and 91.4% for LV, RV, LA and RA, respectively. The correlations between automatic and manual segmentation of LV volumes were 0.94 and 0.93 for end-diastolic volume and end-systolic volume, respectively. Excellent agreement with the chambers' reference contours and significant improvement over previous FCN-based methods suggest that generative adversarial networks for pixel classification training can effectively design generalizable fully automatic FCN-based networks for four-chamber segmentation of echocardiograms, even with a limited amount of training data.
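
For readers unfamiliar with adversarial training for pixel classification, the sketch below (PyTorch) shows the general pattern: a placeholder segmenter is trained with a supervised cross-entropy loss plus an adversarial term from a discriminator that scores (image, label-map) pairs. The tiny networks, loss weight, and class count are assumptions for illustration only, not the paper's FCN or training schedule.

```python
# Sketch (PyTorch): adversarial training of a segmentation network. The
# discriminator scores (image, label-map) pairs; the segmenter is trained to
# fool it on top of a supervised loss. Architectures are tiny placeholders.
import torch
import torch.nn as nn

N_CLASSES = 5  # e.g., background + four chambers (illustrative)

segmenter = nn.Sequential(                        # stand-in for an FCN/U-Net
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, N_CLASSES, 1))
discriminator = nn.Sequential(                    # judges image + soft label map
    nn.Conv2d(1 + N_CLASSES, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

opt_s = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()

def one_hot(labels):
    return nn.functional.one_hot(labels, N_CLASSES).permute(0, 3, 1, 2).float()

def train_step(images, labels, adv_weight=0.1):
    # Discriminator: real (image, ground-truth) pairs -> 1, predicted pairs -> 0.
    with torch.no_grad():
        pred = torch.softmax(segmenter(images), dim=1)
    d_real = discriminator(torch.cat([images, one_hot(labels)], dim=1))
    d_fake = discriminator(torch.cat([images, pred], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Segmenter: supervised cross-entropy + adversarial term to fool the discriminator.
    logits = segmenter(images)
    d_on_pred = discriminator(torch.cat([images, torch.softmax(logits, dim=1)], dim=1))
    loss_s = ce(logits, labels) + adv_weight * bce(d_on_pred, torch.ones_like(d_on_pred))
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_d.item(), loss_s.item()

if __name__ == "__main__":
    imgs = torch.randn(2, 1, 64, 64)
    labs = torch.randint(0, N_CLASSES, (2, 64, 64))
    print(train_step(imgs, labs))
```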
APA, Harvard, Vancouver, ISO, and other styles
35

Kazerooni, Anahita Fathi, Nastaran Khalili, Debanjan Haldar, Karthik Viswanathan, Ariana Familiar, Sina Bagheri, Hannah Anderson, et al. "IMG-05. A MULTI-INSTITUTIONAL AND MULTI-HISTOLOGY PEDIATRIC-SPECIFIC BRAIN TUMOR SUBREGION SEGMENTATION TOOL: FACILITATING RAPNO-BASED ASSESSMENT OF TREATMENT RESPONSE." Neuro-Oncology 25, Supplement_1 (June 1, 2023): i47. http://dx.doi.org/10.1093/neuonc/noad073.182.

Full text
Abstract:
Current response assessment in pediatric brain tumors (PBTs), as recommended by the Response Assessment in Pediatric Neuro-Oncology (RAPNO) working group, relies on 2D measurements of changes in tumor size. However, there is growing evidence of underestimation of tumor size in PBTs using 2D compared to volumetric (3D) measurement approaches. Accordingly, automated methods that reduce manual burden and intra- and inter-rater variability in segmenting tumor subregions and volumetric evaluations are warranted to facilitate tumor response assessment of PBTs. We have developed a fully automatic deep learning (DL) model using the nnUNet architecture on a large cohort of multi-institutional and multi-histology PBTs. The model was trained on widely available standard multiparametric MRI sequences (T1-pre, T1-post, T2, T2-FLAIR) for segmentation of the whole tumor and RAPNO-recommended subregions, including enhancing tumor (ET), non-enhancing tumor (NET), cystic components (CC), and peritumoral edema (ED). As a prerequisite step for accurate tumor segmentation, we also generated another DL model based on DeepMedic for brain extraction from mpMRIs. The models were trained on an institutional cohort of 151 subjects and independently tested on 64 subjects from the internal institution and 29 patients from external institutions. The trained models showed excellent performance with median Dice scores of 0.98±0.02/0.97±0.02 for brain tissue segmentation, 0.92±0.08/0.90±0.17 for whole tumor segmentation, 0.76±0.31/0.87±0.29 for the ET subregion, and 0.82±0.15/0.80±0.28 for segmentation of non-enhancing components (combination of NET, CC, and ED) in the internal/external test sets, respectively. The automated segmentation demonstrated strong agreement with expert segmentations in volumetric measurement of tumor components, with Pearson's correlation coefficients of 0.97, 0.97, 0.99, and 0.79 (p < 0.0001) for ET, NET, CC, and ED regions, respectively. Our proposed multi-institutional and multi-histology automated segmentation method has the potential to aid clinical neuro-oncology practice by providing reliable and reproducible volumetric measurements for treatment response assessment.
APA, Harvard, Vancouver, ISO, and other styles
36

Agnihotri, Aditya. "An Efficient and Clinical-Oriented 3D Liver Segmentation Method." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 2 (September 10, 2019): 1015–21. http://dx.doi.org/10.17762/turcomat.v10i2.13584.

Full text
Abstract:
Automatic liver segmentation is challenging due to the wide variation in liver morphology across individuals and the variance in pixel intensity within the image. Furthermore, the boundaries of the liver are unclear since it shares intensity distributions with neighbouring organs and tissues. In this work, we propose a fast and accurate approach for segmenting the liver in contrast-enhanced computed tomography (CT) images. We apply level-set speed images in a two-step seeded region growing (SRG) method to generate a roughly defined initial liver boundary. Based on gradient information and connected-component connectivity, this separates a CT image into a collection of distinct objects. With such an optimal estimate of the initial liver boundary, our technique reduces computation time by limiting threshold propagation, which converges to the best segmentation result. The limitations of liver computer-aided detection/diagnosis systems and potential strategies to improve them are also covered. We conclude that, despite some room for improvement, automatic liver segmentation approaches are now on a par with human segmentation. However, both automated and semi-automatic liver tumour segmentation approaches perform less well than anticipated. It is also clear that the majority of computer-aided detection/diagnosis techniques call for manual liver and liver tumour segmentation, which restricts the clinical use of these systems.
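
To make the region-growing step concrete, here is a minimal sketch of basic 2D seeded region growing on intensity similarity; the tolerance, 4-connectivity, and toy image are illustrative, and the paper's two-step SRG with level-set speed images and threshold propagation is not reproduced.

```python
# Sketch: basic 2D seeded region growing. Neighbouring pixels are added while
# their intensity stays within 'tol' of the running region mean.
from collections import deque
import numpy as np

def region_grow(image, seed, tol=20.0):
    """Grow a binary region from 'seed' = (row, col) using 4-connectivity."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    region_sum, region_n = float(image[seed]), 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(float(image[rr, cc]) - region_sum / region_n) <= tol:
                    mask[rr, cc] = True
                    region_sum += float(image[rr, cc])
                    region_n += 1
                    queue.append((rr, cc))
    return mask

if __name__ == "__main__":
    img = np.full((64, 64), 40.0)
    img[20:50, 10:40] = 120.0                                  # bright "organ" region
    print(int(region_grow(img, seed=(30, 20), tol=15).sum()))  # 900 pixels
```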
APA, Harvard, Vancouver, ISO, and other styles
37

Qiu, Bingjiang, Jiapan Guo, Joep Kraeima, Haye Hendrik Glas, Weichuan Zhang, Ronald J. H. Borra, Max Johannes Hendrikus Witjes, and Peter M. A. van Ooijen. "Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography." Journal of Personalized Medicine 11, no. 6 (May 31, 2021): 492. http://dx.doi.org/10.3390/jpm11060492.

Full text
Abstract:
Purpose: Classic encoder–decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance, condyles and coronoids of the mandible, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that has the ability to accurately segment detailed anatomical structures. Methods: Different from the classic EDCNNs that need to slice or crop the whole CT scan into 2D slices or 3D patches during the segmentation process, our proposed approach can perform mandible segmentation on complete 3D CT scans. The proposed method, namely, RCNNSeg, adopts the structure of the recurrent neural networks to form a directed acyclic graph in order to enable recurrent connections between adjacent nodes to retain their connectivity. Each node then functions as a classic EDCNN to segment a single slice in the CT scan. Our proposed approach can perform 3D mandible segmentation on sequential data of any varied lengths and does not require a large computation cost. The proposed RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the PDDCA public dataset. The final accuracy of the proposed RCNNSeg was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. Results: The proposed RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performances when compared to the state-of-the-art approaches on the PDDCA dataset. The proposed RCNNSeg generated the most accurate segmentations with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. Conclusions: The proposed RCNNSeg method generated more accurate automated segmentations than those of the other classic EDCNN segmentation techniques in terms of quantitative and qualitative evaluation. The proposed RCNNSeg has potential for automatic mandible segmentation by learning spatially structured information.
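
The recurrent slice-to-slice idea can be illustrated as follows: a shared 2D network segments a volume slice by slice while also receiving the previous slice's soft prediction, so adjacent slices stay consistent. The placeholder network, two-class setting, and zero initial state below are assumptions for illustration, not the RCNNSeg architecture.

```python
# Sketch (PyTorch): slice-by-slice segmentation of a 3D volume with a shared
# 2D network that is fed the previous slice's soft prediction (the recurrent
# connection between adjacent slices).
import torch
import torch.nn as nn

class SliceSegmenter(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Input channels: current slice (1) + previous soft prediction (n_classes).
        self.net = nn.Sequential(
            nn.Conv2d(1 + n_classes, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1))
        self.n_classes = n_classes

    def forward(self, volume):                            # volume: (depth, H, W)
        depth, h, w = volume.shape
        prev = torch.zeros(1, self.n_classes, h, w)       # "no prediction yet" state
        outputs = []
        for z in range(depth):
            slice_z = volume[z].unsqueeze(0).unsqueeze(0)         # (1, 1, H, W)
            logits = self.net(torch.cat([slice_z, prev], dim=1))  # (1, C, H, W)
            prev = torch.softmax(logits, dim=1)                   # passed to the next slice
            outputs.append(logits)
        return torch.cat(outputs, dim=0)                  # (depth, C, H, W)

if __name__ == "__main__":
    print(SliceSegmenter()(torch.randn(12, 64, 64)).shape)  # torch.Size([12, 2, 64, 64])
```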
APA, Harvard, Vancouver, ISO, and other styles
38

Kumar, S. Pramod, and Mrityunjaya V. Latte. "Fully Automated Segmentation of Lung Parenchyma Using Break and Repair Strategy." Journal of Intelligent Systems 28, no. 2 (April 24, 2019): 275–89. http://dx.doi.org/10.1515/jisys-2017-0020.

Full text
Abstract:
The traditional segmentation methods available for the pulmonary parenchyma are not accurate because most of the methods exclude nodules or tumors adhering to the lung pleural wall as fat. In this paper, several techniques are used exhaustively in different phases, including two-dimensional (2D) optimal threshold selection and 2D reconstruction for lung parenchyma segmentation. Then, lung parenchyma boundaries are repaired using an improved chain code and Bresenham pixel interconnection. The proposed method of segmentation and repair is fully automated. Here, 21 thoracic computed tomography slices having juxtapleural nodules and 115 lung parenchyma scans are used to verify the robustness and accuracy of the proposed method. Results are compared with the most cited active contour methods. Empirical results show that the proposed fully automated method for segmenting lung parenchyma is more accurate. The proposed method is 100% sensitive to the inclusion of nodules/tumors adhering to the lung pleural wall, the juxtapleural nodule segmentation accuracy is >98%, and the lung parenchyma segmentation accuracy is >96%.
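
The pixel-interconnection step can be illustrated with Bresenham's line algorithm, which rasterizes the straight segment between two boundary endpoints of a broken contour; the chain-code gap detection and the optimal threshold selection are not reproduced, and the toy mask is illustrative.

```python
# Sketch: Bresenham's line algorithm, usable to bridge a gap between two
# endpoints of a binary lung-boundary contour.
def bresenham(p0, p1):
    """Integer pixels on the line from p0 = (r0, c0) to p1 = (r1, c1), inclusive."""
    r0, c0 = p0
    r1, c1 = p1
    dr, dc = abs(r1 - r0), abs(c1 - c0)
    sr = 1 if r0 < r1 else -1
    sc = 1 if c0 < c1 else -1
    err = dr - dc
    pixels = []
    while True:
        pixels.append((r0, c0))
        if (r0, c0) == (r1, c1):
            return pixels
        e2 = 2 * err
        if e2 > -dc:
            err -= dc
            r0 += sr
        if e2 < dr:
            err += dr
            c0 += sc

if __name__ == "__main__":
    import numpy as np
    mask = np.zeros((32, 32), dtype=bool)
    for r, c in bresenham((5, 3), (20, 28)):   # repair a gap between two boundary points
        mask[r, c] = True
    print(int(mask.sum()))                     # pixels on the bridged segment
```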
APA, Harvard, Vancouver, ISO, and other styles
39

Arsenescu, Tudor, Radu Chifor, Tiberiu Marita, Andrei Santoma, Andrei Lebovici, Daniel Duma, Vitalie Vacaras, and Alexandru Florin Badea. "3D Ultrasound Reconstructions of the Carotid Artery and Thyroid Gland Using Artificial-Intelligence-Based Automatic Segmentation—Qualitative and Quantitative Evaluation of the Segmentation Results via Comparison with CT Angiography." Sensors 23, no. 5 (March 3, 2023): 2806. http://dx.doi.org/10.3390/s23052806.

Full text
Abstract:
The aim of this study was to evaluate the feasibility of a noninvasive and low-operator-dependent imaging method for carotid-artery-stenosis diagnosis. A previously developed prototype for 3D ultrasound scans based on a standard ultrasound machine and a pose reading sensor was used for this study. Working in a 3D space and processing data using automatic segmentation lowers operator dependency. Additionally, ultrasound imaging is a noninvasive diagnosis method. Artificial intelligence (AI)-based automatic segmentation of the acquired data was performed for the reconstruction and visualization of the scanned area: the carotid artery wall, the carotid artery circulated lumen, soft plaque, and calcified plaque. A qualitative evaluation was conducted via comparing the US reconstruction results with the CT angiographies of healthy and carotid-artery-disease patients. The overall scores for the automated segmentation using the MultiResUNet model for all segmented classes in our study were 0.80 for the IoU and 0.94 for the Dice. The present study demonstrated the potential of the MultiResUNet-based model for 2D-ultrasound-image automated segmentation for atherosclerosis diagnosis purposes. Using 3D ultrasound reconstructions may help operators achieve better spatial orientation and evaluation of segmentation results.
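
The reported overlap scores can be computed per class from binary masks, as in the short sketch below (synthetic masks, purely illustrative).

```python
# Sketch: IoU and Dice between a predicted and a reference binary mask.
import numpy as np

def iou_and_dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union if union else 1.0
    denom = pred.sum() + truth.sum()
    dice = 2 * inter / denom if denom else 1.0
    return float(iou), float(dice)

if __name__ == "__main__":
    a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
    b = np.zeros((64, 64), dtype=bool); b[15:45, 15:45] = True
    print(iou_and_dice(a, b))   # overlap of two offset squares
```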
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Junjie, Qian Su, Yunbin Niu, Zongyu Zhang, and Jinghao Liu. "A Handheld LiDAR-Based Semantic Automatic Segmentation Method for Complex Railroad Line Model Reconstruction." Remote Sensing 15, no. 18 (September 13, 2023): 4504. http://dx.doi.org/10.3390/rs15184504.

Full text
Abstract:
To ensure efficient railroad operation and maintenance management, the accurate reconstruction of railroad BIM models is a crucial step. This paper proposes a workflow for automated segmentation and reconstruction of railroad structures using point cloud data, without relying on intensity or trajectory information. The workflow consists of four main components: point cloud adaptive denoising, scene segmentation, structure segmentation combined with deep learning, and model reconstruction. The proposed workflow was validated using two datasets with significant differences in railroad line point cloud data. The results demonstrated significant improvements in both efficiency and accuracy compared to existing methods. The techniques enable direct automated processing from raw data to segmentation results, providing data support for parameterized modeling and greatly reducing manual processing time. The proposed algorithms achieved an intersection over union (IoU) of over 0.9 for various structures in a 450-m-long railroad line. Furthermore, for single-track railroads, the automated segmentation time was within 1 min per kilometer, with an average mean intersection over union (MIoU) and accuracy of 0.9518 and 1.0000, respectively.
APA, Harvard, Vancouver, ISO, and other styles
41

P, Mathumetha, Sivakumar Rajagopal, Shailly Vaidya, and Basim Alhadidi. "Automated Detection of Pneumothorax Using Frontal Chest X-rays." ECS Transactions 107, no. 1 (April 24, 2022): 861–72. http://dx.doi.org/10.1149/10701.0861ecst.

Full text
Abstract:
Pneumothorax detection can help doctors diagnose and treat disease more accurately. In this paper, an automated method employing image segmentation techniques is used for detection. Preprocessing is handled by image processing techniques implemented in MATLAB 2020b, and a support vector machine is applied to classify normal and abnormal chest X-rays. Features are extracted from the lung image with texture-based segmentation techniques, and the rib boundaries are identified with Sobel edge detection. The gray-level co-occurrence matrix segmentation method increases the accuracy rate, achieving an AUC of 0.91.
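
A Python analogue of the described texture pipeline is sketched below: gray-level co-occurrence matrix (GLCM) features per patch are fed to a support vector machine. The feature set, patch size, and synthetic data are illustrative assumptions; the original work was implemented in MATLAB.

```python
# Sketch: GLCM texture features + SVM classification on synthetic patches.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch_u8):
    """Contrast/homogeneity/energy/correlation from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(patch_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth = [np.clip(rng.normal(120, 5, (64, 64)), 0, 255).astype(np.uint8) for _ in range(20)]
    rough = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
    X = np.array([glcm_features(p) for p in smooth + rough])   # texture features
    y = np.array([0] * 20 + [1] * 20)                          # 0 = smooth, 1 = speckled
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.score(X, y))                                     # training accuracy on toy data
```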
APA, Harvard, Vancouver, ISO, and other styles
42

Mihelic, Samuel A., William A. Sikora, Ahmed M. Hassan, Michael R. Williamson, Theresa A. Jones, and Andrew K. Dunn. "Segmentation-Less, Automated, Vascular Vectorization." PLOS Computational Biology 17, no. 10 (October 8, 2021): e1009451. http://dx.doi.org/10.1371/journal.pcbi.1009451.

Full text
Abstract:
Recent advances in two-photon fluorescence microscopy (2PM) have allowed large scale imaging and analysis of blood vessel networks in living mice. However, extracting network graphs and vector representations for the dense capillary bed remains a bottleneck in many applications. Vascular vectorization is algorithmically difficult because blood vessels have many shapes and sizes, the samples are often unevenly illuminated, and large image volumes are required to achieve good statistical power. State-of-the-art, three-dimensional, vascular vectorization approaches often require a segmented (binary) image, relying on manual or supervised-machine annotation. Therefore, voxel-by-voxel image segmentation is biased by the human annotator or trainer. Furthermore, segmented images oftentimes require remedial morphological filtering before skeletonization or vectorization. To address these limitations, we present a vectorization method to extract vascular objects directly from unsegmented images without the need for machine learning or training. The Segmentation-Less, Automated, Vascular Vectorization (SLAVV) source code in MATLAB is openly available on GitHub. This novel method uses simple models of vascular anatomy, efficient linear filtering, and vector extraction algorithms to remove the image segmentation requirement, replacing it with manual or automated vector classification. Semi-automated SLAVV is demonstrated on three in vivo 2PM image volumes of microvascular networks (capillaries, arterioles and venules) in the mouse cortex. Vectorization performance is proven robust to the choice of plasma- or endothelial-labeled contrast, and processing costs are shown to scale with input image volume. Fully-automated SLAVV performance is evaluated on simulated 2PM images of varying quality all based on the large (1.4×0.9×0.6 mm3 and 1.6×108 voxel) input image. Vascular statistics of interest (e.g. volume fraction, surface area density) calculated from automatically vectorized images show greater robustness to image quality than those calculated from intensity-thresholded images.
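
As a generic stand-in for the linear vessel-enhancement filtering (the SLAVV implementation itself is MATLAB code on GitHub), the sketch below computes a Hessian-based tubularity "energy" image with scikit-image and extracts local-maximum seed points; the filter scales, threshold, and synthetic vessel are illustrative and do not reproduce the SLAVV filters.

```python
# Sketch: Hessian-based (Sato) vessel enhancement followed by local-maximum
# seed extraction, as a stand-in illustration of segmentation-free vessel
# "energy" images. Not the SLAVV filters.
import numpy as np
from skimage.filters import sato
from skimage.feature import peak_local_max

def vessel_seeds(image, sigmas=(1, 2, 3), min_distance=5):
    """Return a tubularity energy image and candidate centerline seed coordinates."""
    energy = sato(image.astype(float), sigmas=sigmas, black_ridges=False)
    seeds = peak_local_max(energy, min_distance=min_distance, threshold_rel=0.2)
    return energy, seeds

if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[60:64, :] = 1.0                        # synthetic horizontal "vessel"
    energy, seeds = vessel_seeds(img)
    print(energy.shape, len(seeds))            # seeds lie along the vessel centerline
```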
APA, Harvard, Vancouver, ISO, and other styles
43

Dury, Richard, Rob Dineen, Anbarasu Lourdusamy, and Richard Grundy. "Semi-automated medulloblastoma segmentation and influence of molecular subgroup on segmentation quality." Neuro-Oncology 21, Supplement_4 (October 2019): iv14. http://dx.doi.org/10.1093/neuonc/noz167.060.

Full text
Abstract:
Medulloblastoma is the most common malignant brain tumour in children. Segmenting the tumour itself from the surrounding tissue on MRI scans has been shown to be useful for neuro-surgical planning, by allowing a better understanding of the tumour margin with 3D visualisation. However, manual segmentation of medulloblastoma is time-consuming and prone to bias and inter-observer discrepancies. Here we propose a semi-automatic patient-based segmentation pipeline with little sensitivity to tumour location and minimal user input. Using SPM12 "Segment" as a base, an additional tissue component describing the medulloblastoma is included in the algorithm. The user is required to define the centre of mass and a single surface point of the tumour, creating an approximate enclosing sphere. The calculated volume is confined to the cerebellum to minimise misclassification of other intracranial structures. This process typically takes 5 minutes from start to finish. This method was applied to 97 T2-weighted scans of paediatric medulloblastoma (7 WNT, 6 SHH, 17 Gr3, 26 Gr4, 41 unknown subtype); the resulting segmented volumes were compared to manual segmentations. An average Dice coefficient of 0.85±0.07 was found, with the Group 4 subtype demonstrating a significantly higher similarity with manual segmentation than other subgroups (0.88±0.04). When visually assessing the 10 cases with the lowest Dice coefficients, it was found that the misclassification of oedema was the most common source of error. As this method is independent of image contrast, segmentation could be improved by applying it to images that are less sensitive to oedema, such as T1.
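
The two user inputs described above (tumour centre of mass and one surface point) define an approximate enclosing sphere; a minimal sketch of building that sphere as a voxel mask follows, with illustrative shapes and voxel spacing. The SPM12 tissue-component machinery and cerebellum confinement are not reproduced.

```python
# Sketch: binary mask of the sphere centred at 'center' and passing through
# 'surface_point', in voxel coordinates with optional anisotropic spacing.
import numpy as np

def sphere_mask(shape, center, surface_point, spacing=(1.0, 1.0, 1.0)):
    center = np.asarray(center, dtype=float)
    spacing = np.asarray(spacing, dtype=float)
    radius = np.linalg.norm((np.asarray(surface_point, dtype=float) - center) * spacing)
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)
    dist = np.linalg.norm((grid - center) * spacing, axis=-1)
    return dist <= radius

if __name__ == "__main__":
    mask = sphere_mask((64, 64, 64), center=(32, 32, 32), surface_point=(32, 32, 44))
    print(int(mask.sum()))     # roughly (4/3)*pi*12**3 voxels
```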
APA, Harvard, Vancouver, ISO, and other styles
44

Ye, Yaojiang, Zixin Luo, Zhengxuan Qiu, Kangyang Cao, Bingsheng Huang, Lei Deng, Weijing Zhang, et al. "Radiomics Prediction of Muscle Invasion in Bladder Cancer Using Semi-Automatic Lesion Segmentation of MRI Compared with Manual Segmentation." Bioengineering 10, no. 12 (November 25, 2023): 1355. http://dx.doi.org/10.3390/bioengineering10121355.

Full text
Abstract:
Conventional radiomics analysis requires the manual segmentation of lesions, which is time-consuming and subjective. This study aimed to assess the feasibility of predicting muscle invasion in bladder cancer (BCa) with radiomics using a semi-automatic lesion segmentation method on T2-weighted images. Cases of non-muscle-invasive BCa (NMIBC) and muscle-invasive BCa (MIBC) were pathologically identified in a training cohort and in internal and external validation cohorts. For bladder tumor segmentation, a deep learning-based semi-automatic model was constructed, while manual segmentation was performed by a radiologist. The semi-automatic and manual segmentation results were used in separate radiomics analyses to distinguish NMIBC from MIBC, and an equivalence test was used to compare the models' performance. The mean Dice similarity coefficients of the semi-automatic segmentation method were 0.836 and 0.801 in the internal and external validation cohorts, respectively. The areas under the receiver operating characteristic curve (AUC) were 1.00 (0.991) and 0.892 (0.894) for the semi-automatic (manual) model on the internal and external validation cohorts, respectively (both p < 0.05). The average total processing time for semi-automatic segmentation was significantly shorter than that for manual segmentation (35 s vs. 92 s, p < 0.001). The BCa radiomics model based on the semi-automatic segmentation method had a diagnostic performance similar to that of manual segmentation, while being less time-consuming and requiring fewer manual interventions.
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Yen Sheng, Chung Hua Chen, Yuh Ming Chang, and Chun Chih Chang. "Comparing Level Set Method and Canny Algorithm for Edge Detection to Tongue Diagnosis in Traditional Chinese Medicine." Applied Mechanics and Materials 236-237 (November 2012): 783–86. http://dx.doi.org/10.4028/www.scientific.net/amm.236-237.783.

Full text
Abstract:
Tongue diagnosis is an important diagnostic method in Traditional Chinese Medicine (TCM). The human tongue is one of the important organs that carry information about health status. In order to achieve an automatic tongue diagnostic system, an effective segmentation method for detecting the edge of the tongue is very important. We mainly compare the Level Set Method and the Canny algorithm for edge segmentation. Segmentation using the Canny algorithm may produce many false edges after cutting and is thus not suitable for this purpose, whereas the Level Set Method produces a better edge contour. Therefore, it may be useful in a clinical automated tongue diagnosis system. Experimental results of both techniques are presented.
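
The comparison can be reproduced in outline with scikit-image: a Canny edge map versus a morphological Chan-Vese segmentation (used here as a readily available level-set variant) on the same grayscale image. The parameter values and the synthetic tongue-like ellipse are illustrative assumptions.

```python
# Sketch: Canny edges vs. a morphological Chan-Vese (level-set style)
# segmentation of the same image.
import numpy as np
from skimage.feature import canny
from skimage.segmentation import morphological_chan_vese

def compare_edges(gray):
    edges = canny(gray, sigma=2.0)           # edge map; may contain fragmented/false edges
    region = morphological_chan_vese(gray, 50, init_level_set="checkerboard", smoothing=2)
    return edges, region                     # the level set yields a closed region

if __name__ == "__main__":
    yy, xx = np.mgrid[:128, :128]
    ellipse = ((xx - 64) ** 2 / 40 ** 2 + (yy - 64) ** 2 / 25 ** 2 < 1).astype(float)
    gray = ellipse + np.random.default_rng(0).normal(0, 0.1, ellipse.shape)
    edges, region = compare_edges(gray)
    print(int(edges.sum()), int(region.sum()))
```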
APA, Harvard, Vancouver, ISO, and other styles
46

Meiburger, Kristen M., Massimo Salvi, Giulia Rotunno, Wolfgang Drexler, and Mengyang Liu. "Automatic Segmentation and Classification Methods Using Optical Coherence Tomography Angiography (OCTA): A Review and Handbook." Applied Sciences 11, no. 20 (October 18, 2021): 9734. http://dx.doi.org/10.3390/app11209734.

Full text
Abstract:
Optical coherence tomography angiography (OCTA) is a promising technology for the non-invasive imaging of vasculature. Many studies in literature present automated algorithms to quantify OCTA images, but there is a lack of a review on the most common methods and their comparison considering multiple clinical applications (e.g., ophthalmology and dermatology). Here, we aim to provide readers with a useful review and handbook for automatic segmentation and classification methods using OCTA images, presenting a comparison of techniques found in the literature based on the adopted segmentation or classification method and on the clinical application. Another goal of this study is to provide insight into the direction of research in automated OCTA image analysis, especially in the current era of deep learning.
APA, Harvard, Vancouver, ISO, and other styles
47

Ye, Guochang, and Mehmet Kaya. "Automated Cell Foreground–Background Segmentation with Phase-Contrast Microscopy Images: An Alternative to Machine Learning Segmentation Methods with Small-Scale Data." Bioengineering 9, no. 2 (February 18, 2022): 81. http://dx.doi.org/10.3390/bioengineering9020081.

Full text
Abstract:
Cell segmentation is a critical step for image-based experimental analysis. Existing cell segmentation methods are not entirely automated, nor do they perform well under basic laboratory microscopy. This study proposes an efficient and automated cell segmentation method involving morphological operations to automatically achieve cell segmentation for phase-contrast microscopes. Manual/visual counting of cell segmentation serves as the control group (156 images as ground truth) to evaluate the proposed method's performance. The proposed technology's adaptive performance is assessed under varying conditions, including artificial blurriness, illumination, and image size. Compared to the Trainable Weka Segmentation method, the Empirical Gradient Threshold method, and the ilastik segmentation software, the proposed method achieved better segmentation accuracy (Dice coefficient: 90.07%, IoU: 82.16%, and an average relative error of 6.51% in measuring cell area). The proposed method also has good reliability, even under unfavorable imaging conditions in which manual labeling or human intervention is inefficient. Additionally, similar degrees of segmentation accuracy were confirmed when the ground truth data and the data generated by the proposed method were applied individually to train modified U-Net models (16,848 images). These results demonstrated good accuracy and high practicality of the proposed cell segmentation method with phase-contrast microscopy image data.
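
A minimal sketch of a threshold-plus-morphology foreground/background pipeline in scikit-image follows; the Otsu threshold, structuring-element sizes, and synthetic image are illustrative choices rather than the authors' exact sequence of operations.

```python
# Sketch: global threshold + morphological clean-up for cell foreground masks.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, binary_closing, remove_small_objects, disk

def segment_foreground(gray, min_size=50):
    mask = gray > threshold_otsu(gray)       # global intensity threshold
    mask = binary_opening(mask, disk(2))     # remove speckle noise
    mask = binary_closing(mask, disk(3))     # close small gaps inside cells
    return remove_small_objects(mask, min_size=min_size)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(0.2, 0.05, (128, 128))
    img[40:80, 40:90] += 0.5                 # synthetic bright cell cluster
    print(int(segment_foreground(img).sum()))
```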
APA, Harvard, Vancouver, ISO, and other styles
48

Abdullah, Bassem A., Akmal A. Younis, and Nigel M. John. "Multi-Sectional Views Textural Based SVM for MS Lesion Segmentation in Multi-Channels MRIs." Open Biomedical Engineering Journal 6, no. 1 (May 9, 2012): 56–72. http://dx.doi.org/10.2174/1874120701206010056.

Full text
Abstract:
In this paper, a new technique is proposed for the automatic segmentation of multiple sclerosis (MS) lesions from brain magnetic resonance imaging (MRI) data. The technique uses a trained support vector machine (SVM) to discriminate between blocks in regions of MS lesions and blocks in non-MS-lesion regions, mainly based on textural features with the aid of other features. The classification is done on each of the axial, sagittal and coronal sectional brain views independently, and the resulting segmentations are aggregated to provide a more accurate output segmentation. The main contribution of the proposed technique described in this paper is the use of textural features to detect MS lesions in a fully automated approach that does not rely on manually delineating the MS lesions. In addition, the technique introduces the concept of multi-sectional view segmentation to produce verified segmentation. The proposed textural-based SVM technique was evaluated using three simulated datasets and more than fifty real MRI datasets. The results were compared with state-of-the-art methods. The obtained results indicate that the proposed method would be viable for use in clinical practice for the detection of MS lesions in MRI.
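
The multi-sectional-view idea can be illustrated by a simple majority vote over the axial, sagittal, and coronal segmentations of the same volume; the two-of-three rule below is an illustrative aggregation choice, not necessarily the authors' exact rule.

```python
# Sketch: majority-vote aggregation of three view-wise binary segmentations.
import numpy as np

def aggregate_views(axial, sagittal, coronal):
    """A voxel is labelled lesion if at least two of the three views agree."""
    votes = axial.astype(np.uint8) + sagittal.astype(np.uint8) + coronal.astype(np.uint8)
    return votes >= 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    masks = [rng.random((32, 32, 32)) > 0.5 for _ in range(3)]
    print(float(aggregate_views(*masks).mean()))   # fraction of voxels kept by the vote
```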
APA, Harvard, Vancouver, ISO, and other styles
49

Paing, May Phu, Supan Tungjitkusolmun, Toan Huy Bui, Sarinporn Visitsattapongse, and Chuchart Pintavirooj. "Automated Segmentation of Infarct Lesions in T1-Weighted MRI Scans Using Variational Mode Decomposition and Deep Learning." Sensors 21, no. 6 (March 10, 2021): 1952. http://dx.doi.org/10.3390/s21061952.

Full text
Abstract:
Automated segmentation methods are critical for early detection, prompt action, and immediate treatment to reduce the disability and death risks of brain infarction. This paper aims to develop a fully automated method to segment infarct lesions from T1-weighted brain scans. As a key novelty, the proposed method combines variational mode decomposition and deep learning-based segmentation to take advantage of both methods and provide better results. There are three main technical contributions in this paper. First, variational mode decomposition is applied as a pre-processing step to discriminate the infarct lesions from unwanted non-infarct tissues. Second, an overlapping-patch strategy is proposed to reduce the workload of the deep learning-based segmentation task. Finally, a three-dimensional U-Net model is developed to perform patch-wise segmentation of infarct lesions. A total of 239 brain scans from a public dataset are utilized to develop and evaluate the proposed method. Empirical results reveal that the proposed automated segmentation provides promising performance, with an average Dice similarity coefficient (DSC) of 0.6684, intersection over union (IoU) of 0.5022, and average symmetric surface distance (ASSD) of 0.3932.
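
The overlapping-patch strategy can be sketched as a generator of 3D patches with a fixed stride; the patch size, stride, and toy volume are illustrative assumptions.

```python
# Sketch: overlapping 3D patch extraction (50% overlap with the defaults below).
import numpy as np

def extract_patches(volume, patch=(32, 32, 32), stride=(16, 16, 16)):
    """Yield (origin, patch) pairs covering the volume."""
    for z in range(0, volume.shape[0] - patch[0] + 1, stride[0]):
        for y in range(0, volume.shape[1] - patch[1] + 1, stride[1]):
            for x in range(0, volume.shape[2] - patch[2] + 1, stride[2]):
                yield (z, y, x), volume[z:z + patch[0], y:y + patch[1], x:x + patch[2]]

if __name__ == "__main__":
    vol = np.zeros((64, 64, 64), dtype=np.float32)
    patches = list(extract_patches(vol))
    print(len(patches), patches[0][1].shape)   # 27 patches of shape (32, 32, 32)
```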
APA, Harvard, Vancouver, ISO, and other styles
50

Guo, Fan, Xin Zhao, Beiji Zou, and Yixiong Liang. "Automatic Retinal Image Registration Using Blood Vessel Segmentation and SIFT Feature." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 11 (April 11, 2017): 1757006. http://dx.doi.org/10.1142/s0218001417570063.

Full text
Abstract:
Automatic retinal image registration is still a great challenge in computer-aided diagnosis and screening systems. In this paper, a new retinal image registration method is proposed based on the combination of blood vessel segmentation and scale-invariant feature transform (SIFT) features. The algorithm includes two stages: retinal image segmentation and registration. In the segmentation stage, the blood vessels are segmented by using a guided filter to enhance the vessel structure and a bottom-hat transformation to extract the blood vessels. In the registration stage, the SIFT algorithm is adopted to detect features in the vessel segmentation image, complemented by a random sample consensus (RANSAC) algorithm to eliminate incorrect matches. We evaluate our method from both the segmentation and registration aspects. For segmentation evaluation, we test our method on the DRIVE database, which provides manually labeled images from two specialists. The experimental results show that our method achieves 0.9562 in accuracy (Acc), which represents competitive performance compared to other existing segmentation methods. For registration evaluation, we test our method on the STARE database, and the experimental results demonstrate the superior performance of the proposed method, which makes the algorithm a suitable tool for automated retinal image analysis.
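
The registration stage can be outlined with OpenCV: SIFT keypoints matched with a Lowe ratio test, followed by a RANSAC-estimated homography to reject incorrect matches. The vessel-segmentation stage is omitted here, and the library choice, ratio, and RANSAC threshold are assumptions for illustration rather than the authors' settings.

```python
# Sketch (OpenCV): SIFT + ratio test + RANSAC homography between two images.
import cv2
import numpy as np

def register_pair(fixed_u8, moving_u8, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(fixed_u8, None)
    kp2, des2 = sift.detectAndCompute(moving_u8, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]  # Lowe ratio test
    fixed_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    moving_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Homography mapping the moving image onto the fixed one; RANSAC drops outliers.
    H, inlier_mask = cv2.findHomography(moving_pts, fixed_pts, cv2.RANSAC, 5.0)
    n_inliers = int(inlier_mask.sum()) if inlier_mask is not None else 0
    return H, n_inliers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fixed = cv2.GaussianBlur((rng.random((256, 256)) * 255).astype(np.uint8), (0, 0), 3)
    moving = np.roll(fixed, (8, -5), axis=(0, 1))      # translated copy as a toy example
    H, n_inliers = register_pair(fixed, moving)
    print(n_inliers, H is not None)
```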
APA, Harvard, Vancouver, ISO, and other styles