Journal articles on the topic 'Segmentation accuracy'


Consult the top 50 journal articles for your research on the topic 'Segmentation accuracy.'


Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Jin Xi, Hong Zhi Yu, Ning Ma, and Zhao Yao Li. "The Phoneme Automatic Segmentation Algorithms Study of Tibetan Lhasa Words Continuous Speech Stream." Advanced Materials Research 765-767 (September 2013): 2051–54. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.2051.

Full text
Abstract:
In this paper, we adopt two methods for phoneme segmentation of continuous speech when building a Tibetan corpus: the traditional manual segmentation method and an automatic segmentation method based on a monophone HMM model. Experiments are performed to analyze the segmentation accuracy of both methods. The results show that the HMM-based automatic segmentation method shortens the cycle of building a Tibetan corpus, saving considerable time and manpower when segmenting and labeling a large corpus, and greatly improves the accuracy and consistency of the speech corpus annotation.
APA, Harvard, Vancouver, ISO, and other styles
2

Akcay, Ozgun, Emin Avsar, Melis Inalpulat, Levent Genc, and Ahmet Cam. "Assessment of Segmentation Parameters for Object-Based Land Cover Classification Using Color-Infrared Imagery." ISPRS International Journal of Geo-Information 7, no. 11 (October 31, 2018): 424. http://dx.doi.org/10.3390/ijgi7110424.

Full text
Abstract:
Using object-based image analysis (OBIA) techniques for land use-land cover (LULC) classification has become an area of interest due to the availability of high-resolution data and segmentation methods. Multi-resolution segmentation in particular, reported as the most widely used algorithm, can produce non-identical segmentations depending on the chosen parameters. The total effect of segmentation parameters on the classification accuracy of high-resolution imagery is still an open question, though some studies have attempted to define the optimum segmentation parameters. However, recent studies have not properly considered the parameters and their consequences on LULC accuracy. The main objective of this study is to assess OBIA segmentation and classification accuracy according to the segmentation parameters, using different overlap ratios during image object sampling for a predetermined scale. With this aim, we analyzed and compared (a) high-resolution color-infrared aerial images of a newly developed urban area including different land use types; (b) combinations of multi-resolution segmentation with different shape, color, compactness, bands, and band weights; and (c) accuracies of classifications based on the varied segmentations. The results showed an explicit correlation between segmentation accuracies and classification accuracies. The effect of changes in segmentation parameters using different sample selection methods for five main LULC types was studied. Specifically, moderate shape and compactness values provided more consistency than lower and higher values; band weighting also had a substantial effect, depending on the chosen bands. Differences in the variable importance of the classifications and changes in the LULC maps were also explained.
APA, Harvard, Vancouver, ISO, and other styles
3

Van den Broeck, Joyce, Evie Vereecke, Roel Wirix-Speetjens, and Jos Vander Sloten. "Segmentation accuracy of long bones." Medical Engineering & Physics 36, no. 7 (July 2014): 949–53. http://dx.doi.org/10.1016/j.medengphy.2014.03.016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Schmidt-Richberg, A., J. Fiehler, T. Illies, D. Möller, H. Handels, D. Säring, and N. D. Forkert. "Automatic Correction of Gaps in Cerebrovascular Segmentations Extracted from 3D Time-of-Flight MRA Datasets." Methods of Information in Medicine 51, no. 05 (2012): 415–22. http://dx.doi.org/10.3414/me11-02-0037.

Full text
Abstract:
Objectives: Exact cerebrovascular segmentations are required for several applications in today’s clinical routine. A major drawback of typical automatic segmentation methods is the occurrence of gaps within the segmentation. These gaps are typically located at small vessel structures exhibiting low intensities. Manual correction is very time-consuming and not suitable in clinical practice. This work presents a post-processing method for the automatic detection and closing of gaps in cerebrovascular segmentations. Methods: In this approach, the 3D centerline is calculated from an available vessel segmentation, which enables the detection of corresponding vessel endpoints. These endpoints are then used to detect possible connections to other 3D centerline voxels with a graph-based approach. After a consistency check, plausible detected paths are expanded to the vessel boundaries using a level set approach and combined with the initial segmentation. Results: For evaluation purposes, 100 gaps were artificially inserted at non-branching vessels and bifurcations in manual cerebrovascular segmentations derived from ten Time-of-Flight magnetic resonance angiography datasets. The results show that the presented method is capable of detecting 82% of the non-branching vessel gaps and 84% of the bifurcation gaps. The level set segmentation expands the detected connections with 0.42 mm accuracy compared to the initial segmentations. A further evaluation based on 10 real automatic segmentations from the same datasets shows that the proposed method detects 35 additional connections per dataset on average, of which 92.7% were rated as correct by a medical expert. Conclusion: The presented approach can considerably improve the accuracy of cerebrovascular segmentations and of subsequent analysis outcomes.
APA, Harvard, Vancouver, ISO, and other styles
5

Rossi, Farli. "APPLICATION OF A SEMI-AUTOMATED TECHNIQUE IN LUNG LESION SEGMENTATION." Jurnal Teknoinfo 15, no. 1 (January 15, 2021): 56. http://dx.doi.org/10.33365/jti.v15i1.945.

Full text
Abstract:
Segmentation is one of the most important steps in automated medical diagnosis applications and affects the accuracy of the overall system. In this study, we apply a semi-automated technique that combines an active contour and low-level processing techniques to lung lesion segmentation, extracting lung lesions from thoracic Positron Emission Tomography (PET)/Computed Tomography (CT) images. The lesions were first segmented in the PET images, which had previously been converted to Standardised Uptake Values (SUVs). The segmented PET images then served as the initial contour for subsequent active contour segmentation of the corresponding CT images. To measure accuracy, the Jaccard Index (JI) was used. The JI was calculated by comparing the segmented lesion to alternative segmentations obtained from the QIN lung CT segmentation challenge, which is possible by registering the whole-body PET/CT images to the corresponding thoracic CT images. The results showed that the semi-automated technique (combining an active contour with low-level processing) has moderate accuracy for lung lesion segmentation, with an average JI value of 0.76±0.12.
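
For readers comparing such overlap scores, the Jaccard Index used above is straightforward to compute from two binary masks. The following is a minimal NumPy sketch, assuming both masks are already aligned on the same voxel grid; the function name is illustrative and not taken from the paper.

```python
import numpy as np

def jaccard_index(seg_mask, ref_mask):
    """Jaccard Index (intersection over union) of two aligned binary lesion masks."""
    seg = np.asarray(seg_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    union = np.logical_or(seg, ref).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(seg, ref).sum() / union
```

A JI of 0.76, as reported above, means roughly three quarters of the combined lesion volume is shared by both segmentations.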
APA, Harvard, Vancouver, ISO, and other styles
6

Ferrante, Matteo, Lisa Rinaldi, Francesca Botta, Xiaobin Hu, Andreas Dolp, Marta Minotti, Francesca De Piano, et al. "Application of nnU-Net for Automatic Segmentation of Lung Lesions on CT Images and Its Implication for Radiomic Models." Journal of Clinical Medicine 11, no. 24 (December 9, 2022): 7334. http://dx.doi.org/10.3390/jcm11247334.

Full text
Abstract:
Radiomics investigates the predictive role of quantitative parameters calculated from radiological images. In oncology, tumour segmentation constitutes a crucial step of the radiomic workflow. Manual segmentation is time-consuming and prone to inter-observer variability. In this study, a state-of-the-art deep-learning network for automatic segmentation (nnU-Net) was applied to computed tomography images of lung tumour patients, and its impact on the performance of survival radiomic models was assessed. In total, 899 patients were included, from two proprietary and one public datasets. Different network architectures (2D, 3D) were trained and tested on different combinations of the datasets. Automatic segmentations were compared to reference manual segmentations performed by physicians using the DICE similarity coefficient. Subsequently, the accuracy of radiomic models for survival classification based on either manual or automatic segmentations were compared, considering both hand-crafted and deep-learning features. The best agreement between automatic and manual contours (DICE = 0.78 ± 0.12) was achieved averaging 2D and 3D predictions and applying customised post-processing. The accuracy of the survival classifier (ranging between 0.65 and 0.78) was not statistically different when using manual versus automatic contours, both with hand-crafted and deep features. These results support the promising role nnU-Net can play in automatic segmentation, accelerating the radiomic workflow without impairing the models’ accuracy. Further investigations on different clinical endpoints and populations are encouraged to confirm and generalise these findings.
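
The DICE similarity coefficient reported in this abstract is closely related to the Jaccard Index and can be sketched the same way; the snippet below assumes binary NumPy masks of the automatic and manual contours, and the function name is illustrative rather than taken from the paper.

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """DICE similarity between an automatic and a manual tumour mask (binary, same grid)."""
    a = np.asarray(auto_mask, dtype=bool)
    m = np.asarray(manual_mask, dtype=bool)
    denom = a.sum() + m.sum()
    if denom == 0:
        return 1.0  # both masks empty
    return 2.0 * np.logical_and(a, m).sum() / denom
```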
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Zi, Mingli Chen, Mahdieh Kazemimoghadam, Lin Ma, Strahinja Stojadinovic, Robert Timmerman, Tu Dan, Zabi Wardak, Weiguo Lu, and Xuejun Gu. "Deep-learning and radiomics ensemble classifier for false positive reduction in brain metastases segmentation." Physics in Medicine & Biology 67, no. 2 (January 19, 2022): 025004. http://dx.doi.org/10.1088/1361-6560/ac4667.

Full text
Abstract:
Abstract Stereotactic radiosurgery (SRS) is now the standard of care for brain metastases (BMs) patients. The SRS treatment planning process requires precise target delineation, which in clinical workflow for patients with multiple (>4) BMs (mBMs) could become a pronounced time bottleneck. Our group has developed an automated BMs segmentation platform to assist in this process. The accuracy of the auto-segmentation, however, is influenced by the presence of false-positive segmentations, mainly caused by the injected contrast during MRI acquisition. To address this problem and further improve the segmentation performance, a deep-learning and radiomics ensemble classifier was developed to reduce the false-positive rate in segmentations. The proposed model consists of a Siamese network and a radiomic-based support vector machine (SVM) classifier. The 2D-based Siamese network contains a pair of parallel feature extractors with shared weights followed by a single classifier. This architecture is designed to identify the inter-class difference. On the other hand, the SVM model takes the radiomic features extracted from 3D segmentation volumes as the input for twofold classification, either a false-positive segmentation or a true BM. Lastly, the outputs from both models create an ensemble to generate the final label. The performance of the proposed model in the segmented mBMs testing dataset reached the accuracy (ACC), sensitivity (SEN), specificity (SPE) and area under the curve of 0.91, 0.96, 0.90 and 0.93, respectively. After integrating the proposed model into the original segmentation platform, the average segmentation false negative rate (FNR) and the false positive over the union (FPoU) were 0.13 and 0.09, respectively, which preserved the initial FNR (0.07) and significantly improved the FPoU (0.55). The proposed method effectively reduced the false-positive rate in the BMs raw segmentations indicating that the integration of the proposed ensemble classifier into the BMs segmentation platform provides a beneficial tool for mBMs SRS management.
APA, Harvard, Vancouver, ISO, and other styles
8

Vania, Malinda, Dawit Mureja, and Deukhee Lee. "Automatic spine segmentation from CT images using Convolutional Neural Network via redundant generation of class labels." Journal of Computational Design and Engineering 6, no. 2 (February 13, 2019): 224–32. http://dx.doi.org/10.1016/j.jcde.2018.05.002.

Full text
Abstract:
Abstract There has been a significant increase from 2010 to 2016 in the number of people suffering from spine problems. The automatic image segmentation of the spine obtained from a computed tomography (CT) image is important for diagnosing spine conditions and for performing surgery with computer-assisted surgery systems. The spine has a complex anatomy that consists of 33 vertebrae, 23 intervertebral disks, the spinal cord, and connecting ribs. As a result, the spinal surgeon is faced with the challenge of needing a robust algorithm to segment and create a model of the spine. In this study, we developed a fully automatic segmentation method to segment the spine from CT images, and we compared our segmentation results with reference segmentations obtained by well-known methods. We use a hybrid method. This method combines the convolutional neural network (CNN) and fully convolutional network (FCN), and utilizes class redundancy as a soft constraint to greatly improve the segmentation results. The proposed method was found to significantly enhance the accuracy of the segmentation results and the system processing time. Our comparison was based on 12 measurements: the Dice coefficient (94%), Jaccard index (93%), volumetric similarity (96%), sensitivity (97%), specificity (99%), precision (over segmentation 8.3 and under segmentation 2.6), accuracy (99%), Matthews correlation coefficient (0.93), mean surface distance (0.16 mm), Hausdorff distance (7.4 mm), and global consistency error (0.02). We experimented with CT images from 32 patients, and the experimental results demonstrated the efficiency of the proposed method. Highlights A method to enhance the accuracy of spine segmentation from CT data was proposed. The proposed method uses Convolutional Neural Network via redundant generation of class labels. Experiments show the segmentation accuracy has been enhanced.
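
Among the surface-based measurements listed above, the Hausdorff distance is the least intuitive to reproduce. A rough SciPy sketch is shown below; it assumes binary 3D masks with known voxel spacing, and the helper name is an illustrative assumption, not code from the paper.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance_mm(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (mm) between the surfaces of two binary 3D masks."""
    def surface_points(mask):
        mask = np.asarray(mask, dtype=bool)
        surface = mask & ~binary_erosion(mask)           # voxels on the object boundary
        return np.argwhere(surface) * np.asarray(spacing)
    pts_a, pts_b = surface_points(mask_a), surface_points(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```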
APA, Harvard, Vancouver, ISO, and other styles
9

Wei, Yun Tao, and Yi Bing Zhou. "Segmentations of Liver and Hepatic Tumors from 3D Computed Tomography Abdominal Images." Advanced Materials Research 898 (February 2014): 684–87. http://dx.doi.org/10.4028/www.scientific.net/amr.898.684.

Full text
Abstract:
The segmentation of the liver using computed tomography (CT) data has gained a lot of importance in the medical image processing field. In this paper, we present a survey of liver segmentation methods and techniques using CT images. An adaptive initialization method was developed to produce fully automatic processing frameworks based on graph-cut and gradient flow active contour algorithms. This method was applied to abdominal CT images for segmentation of liver tissue and hepatic tumors. Twenty-five anonymized datasets were randomly collected from several radiology centres with no specific requirements on acquisition parameter settings or patient clinical status as inclusion criteria. The resulting automatic segmentations of liver tissue and tumors were compared to their reference standard delineations manually performed by a specialist. Segmentation accuracy was assessed using the following evaluation framework: Dice similarity coefficient, false negative ratio, false positive ratio and processing time. The implemented initialization method allows fully automatic segmentation, leading to superior overall performance of the graph-cut algorithm in terms of accuracy and processing time. The initialization method presented here proved suitable and reliable for two different segmentation techniques and could be further extended.
APA, Harvard, Vancouver, ISO, and other styles
10

Lacerda, M. G., E. H. Shiguemori, A. J. Damião, C. S. Anjos, and M. Habermann. "IMPACT OF SEGMENTATION PARAMETERS ON THE CLASSIFICATION OF VHR IMAGES ACQUIRED BY RPAS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W12-2020 (November 4, 2020): 43–48. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w12-2020-43-2020.

Full text
Abstract:
Abstract. RPAs (Remotely Piloted Aircrafts) have been used in many Remote Sensing applications, featuring high-quality imaging sensors. In some situations, the images are interpreted in an automated fashion using object-oriented classification. In this case, the first step is segmentation. However, the setting of segmentation parameters such as scale, shape, and compactness may yield too many different segmentations, thus it is necessary to understand the influence of those parameters on the final output. This paper compares 24 segmentation parameter sets by taking into account classification scores. The results indicate that the segmentation parameters exert influence on both classification accuracy and processing time.
APA, Harvard, Vancouver, ISO, and other styles
11

Lin, Xin, and RongChun Sun. "Refinement Bilateral Segmentation Network for Semantic Segmentation in Traffic Scenes." Journal of Physics: Conference Series 2400, no. 1 (December 1, 2022): 012014. http://dx.doi.org/10.1088/1742-6596/2400/1/012014.

Full text
Abstract:
Abstract Semantic segmentation technology in traffic scenes can help vehicles make accurate analyses and positioning of the road ahead. In traffic scenarios, the trade-off between real-time performance and the accuracy of semantic segmentation is particularly important. This paper proposes a lightweight deep convolutional network, which can be applied to traffic scenes to complete accurate semantic segmentation tasks, considering both real-time performance and accuracy. The Cross Channel Attention Fusion Mechanism proposed in this paper can better integrate the context information and improve accuracy. The Depth-wise Separable Pyramid Module proposed based on the feature pyramid idea can improve the segmentation accuracy and effectively trade off the real-time performance.
APA, Harvard, Vancouver, ISO, and other styles
12

Jiang, Qunyan, Juying Dai, Ting Rui, Faming Shao, Ruizhe Hu, Yinan Du, and Heng Zhang. "Detail Guided Multilateral Segmentation Network for Real-Time Semantic Segmentation." Applied Sciences 12, no. 21 (October 31, 2022): 11040. http://dx.doi.org/10.3390/app122111040.

Full text
Abstract:
With the development of unmanned vehicles and other technologies, the technical demand for scene semantic segmentation is more and more intense. Semantic segmentation requires not only rich high-level semantic information, but also rich detail information to ensure the accuracy of the segmentation task. Using a multipath structure to process underlying and semantic information can improve efficiency while ensuring segmentation accuracy. In order to improve the segmentation accuracy and efficiency of some small and thin objects, a detail guided multilateral segmentation network is proposed. Firstly, in order to improve the segmentation accuracy and model efficiency, a trilateral parallel network structure is designed, including the context fusion path (CF-path), the detail information guidance path (DIG-path), and the semantic information supplement path (SIS-path). Secondly, in order to effectively fuse semantic information and detail information, a feature fusion module based on an attention mechanism is designed. Finally, experimental results on CamVid and Cityscapes datasets show that the proposed algorithm can effectively balance segmentation accuracy and inference speed.
APA, Harvard, Vancouver, ISO, and other styles
13

Li, Yuan, Fu Cang Jia, Xiao Dong Zhang, Cheng Huang, and Huo Ling Luo. "Local Patch Similarity Ranked Voxelwise STAPLE on Magnetic Resonance Image Hippocampus Segmentation." Applied Mechanics and Materials 333-335 (July 2013): 1065–70. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.1065.

Full text
Abstract:
The segmentation and labeling of sub-cortical structures of interest are important tasks for the assessment of morphometric features in quantitative magnetic resonance (MR) image analysis. Recently, multi-atlas segmentation methods with a statistical fusion strategy have demonstrated high accuracy in hippocampus segmentation. However, most of these methods rarely consider a spatially variant model and retain all atlas segmentations. In this study, we propose a novel local patch-based ranking strategy for voxelwise atlas selection that extends the original Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm. The local ranking strategy is based on the metric of normalized cross correlation (NCC). Unlike its predecessors, this method estimates the fusion of each voxel patch-by-patch and makes use of gray-level image features as a prior. Validation results on 33 pairs of hippocampus MR images show good performance in hippocampus segmentation.
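
The normalized cross correlation used here to rank atlas patches can be written compactly. The sketch below assumes two grey-level patches of identical shape and is only an illustration of the metric, not the authors' implementation.

```python
import numpy as np

def normalized_cross_correlation(patch_a, patch_b, eps=1e-12):
    """NCC between a target-image patch and an atlas patch of the same shape."""
    a = np.asarray(patch_a, dtype=float).ravel()
    b = np.asarray(patch_b, dtype=float).ravel()
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.mean(a * b))   # ~1.0 for identical appearance, ~0 for uncorrelated patches
```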
APA, Harvard, Vancouver, ISO, and other styles
14

Beasley, Ryan A. "Semiautonomous Medical Image Segmentation Using Seeded Cellular Automaton Plus Edge Detector." ISRN Signal Processing 2012 (May 17, 2012): 1–9. http://dx.doi.org/10.5402/2012/914232.

Full text
Abstract:
Segmentations of medical images are required in a number of medical applications such as quantitative analyses and patient-specific orthotics, yet accurate segmentation without significant user attention remains a challenge. This work presents a novel segmentation algorithm combining the region-growing Seeded Cellular Automata with a boundary term based on an edge-detected image. Both single processor and parallel processor implementations are developed and the algorithm is shown to be suitable for quick segmentations (2.2 s for voxel brain MRI) and interactive supervision (2–220 Hz). Furthermore, a method is described for generating appropriate edge-detected images without requiring additional user attention. Experiments demonstrate higher segmentation accuracy for the proposed algorithm compared with both Graphcut and Seeded Cellular Automata, particularly when provided minimal user attention.
APA, Harvard, Vancouver, ISO, and other styles
15

Shen, Lingli. "Implementation of CT Image Segmentation Based on an Image Segmentation Algorithm." Applied Bionics and Biomechanics 2022 (October 12, 2022): 1–11. http://dx.doi.org/10.1155/2022/2047537.

Full text
Abstract:
With the increasingly important role of image segmentation in computed tomography (CT), the requirements for image segmentation technology in related fields are constantly rising. When hardware resources fully meet the needs of fast, high-precision image segmentation systems, the main way to improve segmentation results is to improve the underlying algorithms. Therefore, this study proposes a combination of the genetic algorithm (GA) and Otsu's method (OTSU) to form an improved image segmentation algorithm, the immune genetic algorithm (IGA). The algorithm improves the segmentation accuracy and efficiency of the original algorithms, yielding more accurate CT image segmentation results. The experimental results show that the operating efficiency of the OTSU segmentation algorithm is up to 75%, that of the GA algorithm is up to 78%, and that of the IGA algorithm is up to 92%; in terms of operating efficiency, the IGA algorithm therefore has the advantage. In terms of segmentation accuracy, the highest accuracy of the OTSU segmentation algorithm is 45%, that of the GA algorithm is 80%, and that of the IGA algorithm is 97%. The IGA algorithm is thus more powerful in terms of both operating efficiency and accuracy. Therefore, applying the IGA algorithm to CT image segmentation helps doctors better judge lesions and improves the diagnosis rate.
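
Otsu's method, the starting point of the IGA algorithm described above, simply picks the grey level that maximises the between-class variance of the image histogram. A minimal NumPy sketch of that principle follows; the bin count and function name are illustrative choices, not details from the paper.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the grey level that maximises the between-class variance."""
    hist, edges = np.histogram(np.asarray(image).ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    hist = hist.astype(float)

    w0 = np.cumsum(hist)                    # pixels at or below each candidate threshold
    w1 = np.cumsum(hist[::-1])[::-1]        # pixels above it
    m0 = np.cumsum(hist * centers) / np.maximum(w0, 1e-12)
    m1 = (np.cumsum((hist * centers)[::-1]) / np.maximum(w1[::-1], 1e-12))[::-1]

    # between-class variance for a split between bins t and t+1
    sigma_b = w0[:-1] * w1[1:] * (m0[:-1] - m1[1:]) ** 2
    return centers[:-1][np.argmax(sigma_b)]
```

Evolutionary variants such as GA- or IGA-based OTSU typically replace the exhaustive scan over candidate thresholds with a guided search of the same objective.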
APA, Harvard, Vancouver, ISO, and other styles
16

Vaitiekūnas, Mantas, Darius Jegelevičius, Andrius Sakalauskas, and Simonas Grybauskas. "Automatic Method for Bone Segmentation in Cone Beam Computed Tomography Data Set." Applied Sciences 10, no. 1 (December 27, 2019): 236. http://dx.doi.org/10.3390/app10010236.

Full text
Abstract:
Due to technical aspects of Cone Beam Computed Tomography (CBCT), automatic methods for bone segmentation are not widely used in the clinical practice of endodontics, orthodontics, and oral and maxillofacial surgery. The aim of this study was to evaluate the accuracy of an automatic method for bone segmentation in CBCT data sets. A sliding three-dimensional (3D) window, a histogram filter and Otsu’s method were used to implement the automatic segmentation. The results of automatic segmentation were compared with the results of segmentation performed by an experienced oral and maxillofacial surgeon. Twenty patients and their forty CBCT data sets were used in this study (20 preoperative and 20 postoperative). Intraclass Correlation Coefficients (ICC) were calculated to establish the reliability of the surgeon’s segmentations. The ICC was 0.958 with a 95% confidence interval of [0.896 ... 0.983] in preoperative data sets and 0.931 with a 95% confidence interval of [0.836 ... 0.972] in postoperative data sets. Three basic metrics were used to evaluate the accuracy of the automatic method—Dice Similarity Coefficient (DSC), Root Mean Square (RMS) error and Average Distance Error (ADE) of surface mismatch—and an additional metric was used to evaluate the computation time of the segmentation. The mean preoperative DSC was 0.921 and the postoperative DSC 0.911; the mean preoperative RMS was 0.559 mm and the postoperative RMS 0.647 mm; the ADE was 0.043 mm for preoperative and 0.057 mm for postoperative cases; and the mean computation time to perform the segmentation was 46 s. The automatic method showed clinically acceptable accuracy and can thus be used as a new tool for automatic bone segmentation in CBCT data. It can be applied in oral and maxillofacial surgery for creating a 3D Virtual Surgical Plan (VSP) or for postoperative follow-up.
APA, Harvard, Vancouver, ISO, and other styles
17

Sudjai, Narumol, Palanan Siriwanarangsun, Nittaya Lektrakul, Pairash Saiviroonporn, Sorranart Maungsomboon, Rapin Phimolsarnti, Apichat Asavamongkolkul, and Chandhanarat Chandhanayingyong. "Robustness of Radiomic Features: Two-Dimensional versus Three-Dimensional MRI-Based Feature Reproducibility in Lipomatous Soft-Tissue Tumors." Diagnostics 13, no. 2 (January 10, 2023): 258. http://dx.doi.org/10.3390/diagnostics13020258.

Full text
Abstract:
This retrospective study aimed to compare the intra- and inter-observer manual-segmentation variability in feature reproducibility between two-dimensional (2D) and three-dimensional (3D) magnetic-resonance imaging (MRI)-based radiomic features. The study included patients with lipomatous soft-tissue tumors that were diagnosed with histopathology and underwent MRI scans. Tumor segmentation based on the 2D and 3D MRI images was performed by two observers to assess the intra- and inter-observer variability. In both the 2D and the 3D segmentations, the radiomic features were extracted from the normalized images. Regarding the stability of the features, the intraclass correlation coefficient (ICC) was used to evaluate the intra- and inter-observer segmentation variability. Features with ICC > 0.75 were considered reproducible. The degree of feature robustness was classified as low, moderate, or high. Additionally, we compared the efficacy of 2D and 3D contour-focused segmentation in terms of the stable feature rate and the sensitivity, specificity, and diagnostic accuracy of machine learning on the reproducible features. In total, 93 and 107 features were extracted from the 2D and 3D images, respectively. Only 35 features from the 2D images and 63 features from the 3D images were reproducible. The stable feature rate for the 3D segmentation was significantly higher than for the 2D segmentation (58.9% vs. 37.6%, p = 0.002). The majority of the features for the 3D segmentation had moderate-to-high robustness, while 40.9% of the features for the 2D segmentation had low robustness. The diagnostic accuracy of the machine-learning model for the 2D segmentation was close to that for the 3D segmentation (88% vs. 90%). In both the 2D and the 3D segmentation, the specificity values were equal to 100%. However, the sensitivity for the 2D segmentation was lower than for the 3D segmentation (75% vs. 83%). For the 2D + 3D radiomic features, the model achieved a diagnostic accuracy of 87% (sensitivity, 100%, and specificity, 80%). Both 2D and 3D MRI-based radiomic features of lipomatous soft-tissue tumors are reproducible. With a higher stable feature rate, 3D contour-focused segmentation should be selected for the feature-extraction process.
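
The ICC threshold of 0.75 used above comes from a two-way random-effects model. As a rough, self-contained sketch (following the common Shrout-Fleiss formulation, not code from the study), the single-measure and average-measure ICCs can be computed from a subjects-by-observers matrix as follows:

```python
import numpy as np

def icc_two_way_random(Y):
    """ICC(2,1) and ICC(2,k) for an (n subjects x k observers) matrix of feature values."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ss_total = ((Y - grand) ** 2).sum()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between observers
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))

    icc_single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc_average = (msr - mse) / (msr + (msc - mse) / n)
    return icc_single, icc_average
```

Features whose ICC exceeds 0.75 across repeated segmentations would then be kept as reproducible, mirroring the criterion used in the study.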
APA, Harvard, Vancouver, ISO, and other styles
18

Wang, Haoyu, Zhanfeng Shen, Zihan Zhang, Zeyu Xu, Shuo Li, Shuhui Jiao, and Yating Lei. "Improvement of Region-Merging Image Segmentation Accuracy Using Multiple Merging Criteria." Remote Sensing 13, no. 14 (July 15, 2021): 2782. http://dx.doi.org/10.3390/rs13142782.

Full text
Abstract:
Image segmentation plays a significant role in remote sensing image processing. Among numerous segmentation algorithms, the region-merging segmentation algorithm is widely used due to its well-organized structure and outstanding results. Many merging criteria (MC) were designed to improve the accuracy of region-merging segmentation, but each MC has its own shortcomings, which can cause segmentation errors. Segmentation accuracy can be improved by referring to the segmentation results. To achieve this, an approach for detecting and correcting region-merging image segmentation errors is proposed, and then an iterative optimization model is established. The main contributions of this paper are as follows: (1) The conflict types of matching segment pairs are divided into scale-expression conflict (SEC) and region-ownership conflict (ROC), and ROC is more suitable for optimization. (2) An equal-scale local evaluation method was designed to quantify the optimization potential of ROC. (3) A regional anchoring strategy is proposed to preserve the results of the previous iteration optimization. Three QuickBird satellite images of different land-cover types were used for validating the proposed approach. Both unsupervised and supervised evaluation results prove that the proposed approach can effectively improve segmentation accuracy. All explicit and implicit optimization modes are concluded, which further illustrate the stability of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
19

Mauricaite, Radvile, Ella Mi, Jiarong Chen, Andrew Ho, Lillie Pakzad-Shahabi, and Matthew Williams. "Fully automated deep learning system for detecting sarcopenia on brain MRI in glioblastoma." Neuro-Oncology 23, Supplement_4 (October 1, 2021): iv13. http://dx.doi.org/10.1093/neuonc/noab195.031.

Full text
Abstract:
Abstract Aims Glioblastoma multiforme (GBM) is an aggressive brain malignancy. Performance status is an important prognostic factor but is subjectively evaluated, resulting in inaccuracy. Objective markers of frailty/physical condition, such as measures of skeletal muscle mass can be evaluated on cross-sectional imaging and is associated with cancer survival. In GBM, temporalis muscle has been identified as a skeletal muscle mass surrogate and a prognostic factor. However, current manual muscle quantification is time consuming, limiting clinical adoption. We previously developed a deep learning system for automated temporalis muscle quantification, with high accuracy (Dice coefficient 0.912), and showed muscle cross-sectional area is independently significantly associated with survival in GBM (HR 0.380). However, it required manual selection of the temporalis muscle-containing MRI slice. Thus, in this work we aimed to develop a fully automatic deep-learning system, using the eyeball as an anatomic landmark for automatic slice selection, to quantify temporalis and validate on independent datasets. Method 3D brain MRI scans were obtained from four datasets: our in-house glioblastoma patient dataset, TCGA-GBM, IVY-GAP and REMBRANDT. Manual eyeball and temporalis segmentations were performed on 2D MRI images by two experienced readers. Two neural networks (2D U-Nets) were trained, one to automatically segment the eyeball and the other to segment the temporalis muscle on 2D MRI images using Dice loss function. The cross sectional area of eyeball segmentations were quantified and thresholded, to select the superior orbital MRI slice from each scan. This slice underwent temporalis segmentation, whose cross sectional area was then quantified. Accuracy of automatically predicted eyeball and temporalis segmentations were compared to manual ground truth segmentations on metrics of Dice coefficient, precision, recall and Hausdorff distance. Accuracy of MRI slice selection (by the eyeball segmentation model) for temporalis segmentation was determined by comparing automatically selected slices to slices selected manually by a trained neuro-oncologist. Results 398 images from 185 patients and 366 images from 145 patients were used for the eyeball and temporalis segmentation models, respectively. 61 independent TCGA-GBM scans formed a validation cohort to assess the performance of the full pipeline. The model achieved high accuracy in eyeball segmentation, with test set Dice coefficient of 0.9029 ± 0.0894, precision of 0.8842 ± 0.0992, recall of 0.9297 ± 0.6020 and Hausdorff distance of 2.8847 ± 0.6020. High segmentation accuracy was also achieved by the temporalis segmentation model, with Dice coefficient of 0.8968 ± 0.0375, precision of 0.8877 ± 0.0679, recall of 0.9118 ± 0.0505 and Hausdorff distance of 1.8232 ± 0.3263 in the test set. 96.1% of automatically selected slices for temporalis segmentation were within 2 slices of the manually selected slice. Conclusion Temporalis muscle cross-sectional area can be rapidly and accurately assessed from 3D MRI brain scans using a deep learning-based system in a fully automated pipeline. Combined with our and others’ previous results that demonstrate the prognostic significance of temporalis cross-sectional area and muscle width, our findings suggest a role for deep learning in muscle mass and sarcopenia screening in GBM, with the potential to add significant value to routine imaging. 
Possible clinical applications include risk profiling, treatment stratification and informing interventions for muscle preservation. Further work will be to validate the prognostic value of temporalis muscle cross sectional area measurements generated by our fully automatic deep learning system in the multiple in-house and external datasets.
APA, Harvard, Vancouver, ISO, and other styles
20

Goodman, Allen C., and Thomas G. Thibodeau. "Housing market segmentation and hedonic prediction accuracy." Journal of Housing Economics 12, no. 3 (September 2003): 181–201. http://dx.doi.org/10.1016/s1051-1377(03)00031-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Pan, Yanyan, Huiping Zhang, Jinsuo Yang, Jing Guo, Zhiguo Yang, Jianbing Wang, and Ge Song. "Identification and Diagnosis of Cerebral Stroke through Deep Convolutional Neural Network-Based Multimodal MRI Images." Contrast Media & Molecular Imaging 2021 (July 20, 2021): 1–8. http://dx.doi.org/10.1155/2021/7598613.

Full text
Abstract:
This study aimed to explore the application value of multimodal magnetic resonance imaging (MRI) images based on a deep convolutional neural network (Conv.Net) in the diagnosis of stroke. Specifically, four automatic segmentation algorithms were proposed to segment multimodal MRI images of stroke patients. The segmentation results were evaluated in terms of DICE, accuracy, sensitivity, and a segmentation distance coefficient. It was found that although the two-dimensional (2D) fully convolutional neural network-based segmentation algorithm can locate and segment the lesion, its accuracy was low; the three-dimensional (3D) one exhibited higher accuracy, with various objective indicators improved, and segmentation accuracies of 0.93 and 0.79 on the training and test sets, respectively, meeting the needs of automatic diagnosis. The asymmetric 3D residual U-Net network had good convergence and high segmentation accuracy, and the 3D deep residual network proposed on its basis had good segmentation coefficients, which can not only ensure segmentation accuracy but also avoid network degradation problems. In conclusion, the Conv.Net model can accurately segment the foci of patients with ischemic stroke and is suggested for clinical use.
APA, Harvard, Vancouver, ISO, and other styles
22

Li, Zhi-Jie. "A Method of Improving Accuracy in Expression Recognition." European Journal of Electrical Engineering and Computer Science 6, no. 3 (June 13, 2022): 27–30. http://dx.doi.org/10.24018/ejece.2022.6.3.440.

Full text
Abstract:
In order to improve the accuracy of a special kind of facial expression recognition problem, a method for precise face detection and segmentation combined with the particle swarm optimization is proposed. The method uses three key technologies: skin color segmentation, particle swarm search and curve approximation. Firstly, the face contour is roughly obtained through skin color segmentation. Secondly, the accurate face position is detected by particle swarm optimization. Thirdly, the face contour is reduced and regulated further via the curve approximation. The experimental results show that this method can eliminate the interference factor, and then improve the accuracy of expression recognition.
APA, Harvard, Vancouver, ISO, and other styles
23

Harkey, Matthew S., Nicholas Michel, Christopher Kuenze, Ryan Fajardo, Matt Salzler, Jeffrey B. Driban, and Ilker Hacihaliloglu. "Validating a Semi-Automated Technique for Segmenting Femoral Articular Cartilage on Ultrasound Images." CARTILAGE 13, no. 2 (April 2022): 194760352210930. http://dx.doi.org/10.1177/19476035221093069.

Full text
Abstract:
Objective To validate a semi-automated technique to segment ultrasound-assessed femoral cartilage without compromising segmentation accuracy to a traditional manual segmentation technique in participants with an anterior cruciate ligament injury (ACL). Design We recruited 27 participants with a primary unilateral ACL injury at a pre-operative clinic visit. One investigator performed a transverse suprapatellar ultrasound scan with the participant’s ACL injured knee in maximum flexion. Three femoral cartilage ultrasound images were recorded. A single expert reader manually segmented the femoral cartilage cross-sectional area in each image. In addition, we created a semi-automatic program to segment the cartilage using a random walker-based method. We quantified the average cartilage thickness and echo-intensity for the manual and semi-automated segmentations. Intraclass correlation coefficients (ICC(2,k)) and Bland-Altman plots were used to validate the semi-automated technique to the manual segmentation for assessing average cartilage thickness and echo-intensity. A dice correlation coefficient was used to quantify the overlap between the segmentations created with the semi-automated and manual techniques. Results For average cartilage thickness, there was excellent reliability (ICC(2,k) = 0.99) and a small mean difference (+0.8%) between the manual and semi-automated segmentations. For average echo-intensity, there was excellent reliability (ICC(2,k) = 0.97) and a small mean difference (−2.5%) between the manual and semi-automated segmentations. The average dice correlation coefficient between the manual segmentation and semi-automated segmentation was 0.90, indicating high overlap between techniques. Conclusions Our novel semi-automated segmentation technique is a valid method that requires less technical expertise and time than manual segmentation in patients after ACL injury.
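
The random walker algorithm used for the semi-automated step is available in scikit-image; the sketch below shows one plausible way to wrap it, assuming a normalised 2D ultrasound frame and user seed strokes. The label values, beta parameter, and function name are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from skimage.segmentation import random_walker

def segment_cartilage(image, seeds, beta=130):
    """Random-walker segmentation of femoral cartilage from seed strokes.

    image : 2D float array scaled to [0, 1]
    seeds : int array, 0 = unlabelled, 1 = cartilage stroke, 2 = background stroke
    """
    labels = random_walker(image, seeds, beta=beta, mode='bf')
    cartilage = labels == 1                                   # binary cartilage mask
    mean_echo_intensity = float(image[cartilage].mean()) if cartilage.any() else 0.0
    return cartilage, mean_echo_intensity
```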
APA, Harvard, Vancouver, ISO, and other styles
24

Arora, Jyoti, and Meena Tushir. "Intuitionistic Level Set Segmentation for Medical Image Segmentation." Recent Advances in Computer Science and Communications 13, no. 5 (November 5, 2020): 1039–46. http://dx.doi.org/10.2174/2213275912666190218150045.

Full text
Abstract:
Introduction: Image segmentation is one of the basic practices that involve dividing an image into mutually exclusive partitions. Learning how to partition an image into different segments is considered as one of the most critical and crucial step in the area of medical image analysis. Objective: The primary objective of the work is to design an integrated approach for automating the process of level set segmentation for medical image segmentation. This method will help to overcome the problem of manual initialization of parameters. Methods: In the proposed method, input image is simplified by the process of intuitionistic fuzzification of an image. Further segmentation is done by intuitionistic based clustering technique incorporated with local spatial information (S-IFCM). The controlling parameters of level set method are automated by S-IFCM, for defining anatomical boundaries. Results: Experimental results were carried out on MRI and CT-scan images of brain and liver. The results are compared with existing Fuzzy Level set segmentation; Spatial Fuzzy Level set segmentation using MSE, PSNR and Segmentation Accuracy. Qualitatively results achieved after proposed segmentation technique shows more clear definition of boundaries. The attain PSNR and MSE value of propose algorithm proves the robustness of algorithm. Segmentation accuracy is calculated for the segmentation results of the T-1 weighted axial slice of MRI image with 0.909 value. Conclusion: The proposed method shows good accuracy for the segmentation of medical images. This method is a good substitute for the segmentation of different clinical images with different modalities and proves to give better result than fuzzy technique.
APA, Harvard, Vancouver, ISO, and other styles
25

Warfield, Simon K., Kelly H. Zou, and William M. Wells. "Validation of image segmentation by estimating rater bias and variance." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366, no. 1874 (April 11, 2008): 2361–75. http://dx.doi.org/10.1098/rsta.2008.0040.

Full text
Abstract:
The accuracy and precision of segmentations of medical images has been difficult to quantify in the absence of a ‘ground truth’ or reference standard segmentation for clinical data. Although physical or digital phantoms can help by providing a reference standard, they do not allow the reproduction of the full range of imaging and anatomical characteristics observed in clinical data. An alternative assessment approach is to compare with segmentations generated by domain experts. Segmentations may be generated by raters who are trained experts or by automated image analysis algorithms. Typically, these segmentations differ due to intra-rater and inter-rater variability. The most appropriate way to compare such segmentations has been unclear. We present here a new algorithm to enable the estimation of performance characteristics, and a true labelling, from observations of segmentations of imaging data where segmentation labels may be ordered or continuous measures. This approach may be used with, among others, surface, distance transform or level-set representations of segmentations, and can be used to assess whether or not a rater consistently overestimates or underestimates the position of a boundary.
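
The estimation algorithm described above generalises STAPLE-style label fusion to ordered or continuous labels. As a much simpler point of comparison (not the authors' method), per-voxel majority voting with a crude per-rater agreement score can be sketched as follows:

```python
import numpy as np

def majority_vote_fusion(rater_masks):
    """Fuse binary segmentations from several raters (R x H x W array) by majority vote."""
    seg = np.asarray(rater_masks, dtype=bool)
    fused = seg.mean(axis=0) >= 0.5                          # consensus estimate of the labelling
    agreement = [float((s == fused).mean()) for s in seg]    # naive per-rater performance proxy
    return fused, agreement
```

The paper's approach instead estimates each rater's bias and variance jointly with the reference standard, which matters when a rater consistently over- or underestimates the position of a boundary.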
APA, Harvard, Vancouver, ISO, and other styles
26

Kim, Justin J., Hyejin Nam, Neelambar R. Kaipatur, Paul W. Major, Carlos Flores-Mir, Manuel O. Lagravere, and Daniel L. Romanyk. "Reliability and accuracy of segmentation of mandibular condyles from different three-dimensional imaging modalities: a systematic review." Dentomaxillofacial Radiology 49, no. 5 (July 2020): 20190150. http://dx.doi.org/10.1259/dmfr.20190150.

Full text
Abstract:
Objective: To critically synthesize the literature surrounding segmentation of the mandibular condyle using three-dimensional imaging modalities. Specifically, analyzing the reliability and accuracy of methods used for three-dimensional condyle segmentation. Methods: Three electronic databases were searched for studies reporting the reliability and accuracy of various methods used to segment mandibular condyles from three-dimensional imaging modalities. Two authors independently reviewed articles for eligibility and data extraction. Results: Nine studies fulfilled the inclusion criteria. Eight studies assessed the condylar segmentation from CBCT images and limited studies were available on non-CBCT three-dimensional imaging modalities. Threshold-based volume segmentation, manual segmentation, and semi-automatic segmentation techniques were presented. Threshold-based volume segmentation reported higher accuracy when completed by an experienced technician compared to clinicians. Adequate reliability and accuracy were observed in manual segmentation. Although adequate reliability was reported in semi-automatic segmentation, data on its accuracy were lacking. Conclusion: A definitive conclusion with regards to which current technique is most reliable and accurate to efficiently segment the mandibular condyle cannot be made with the currently available evidence. This is especially true in terms of non-CBCT imaging modalities with very limited literature available.
APA, Harvard, Vancouver, ISO, and other styles
27

Zhao, Jianhui, Erqian Dong, Mingui Sun, Wenyan Jia, Dengyi Zhang, and Zhiyong Yuan. "Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error." Scientific World Journal 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/572393.

Full text
Abstract:
A novel wildfire segmentation algorithm is proposed with the help of sample training based 2D histogram θ-division and minimum error. Based on minimum error principle and 2D color histogram, the θ-division methods were presented recently, but application of prior knowledge on them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. Then we define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the suitable channel is selected. To further improve the accuracy, the combination approach is presented with both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments prove its efficiency for wildfire segmentation.
APA, Harvard, Vancouver, ISO, and other styles
28

Bruździński, Tomasz, Adam Krzyżak, Thomas Fevens, and Łukasz Jeleń. "Web–Based Framework For Breast Cancer Classification." Journal of Artificial Intelligence and Soft Computing Research 4, no. 2 (April 1, 2014): 149–62. http://dx.doi.org/10.1515/jaiscr-2015-0005.

Full text
Abstract:
Abstract The aim of this work is to create a web-based system that will assist its users in the cancer diagnosis process by means of automatic classification of cytological images obtained during fine needle aspiration biopsy. This paper contains a description of the study on the quality of the various algorithms used for the segmentation and classification of breast cancer malignancy. The object of the study is to classify the degree of malignancy of breast cancer cases from fine needle aspiration biopsy images into one of the two classes of malignancy, high or intermediate. For that purpose we have compared 3 segmentation methods: k-means, fuzzy c-means and watershed, and based on these segmentations we have constructed a 25–element feature vector. The feature vector was introduced as an input to 8 classifiers and their accuracy was checked. The results show that the highest classification accuracy of 89.02 % was recorded for the multilayer perceptron. Fuzzy c–means proved to be the most accurate segmentation algorithm, but at the same time it is the most computationally intensive among the three studied segmentation methods.
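
Of the three segmentation methods compared, k-means clustering of pixel colours is the simplest to reproduce. A minimal scikit-learn sketch is given below, assuming an RGB cytological image as a NumPy array; the cluster count and function name are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, n_clusters=3, random_state=0):
    """Cluster the pixel colours of an (H, W, 3) image into n_clusters regions."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(pixels)
    return km.labels_.reshape(h, w)   # per-pixel region labels, e.g. nuclei / cytoplasm / background
```

Fuzzy c-means differs mainly in assigning each pixel a soft membership to every cluster rather than a single label, which is consistent with it being both the most accurate and the most computationally intensive option in the study.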
APA, Harvard, Vancouver, ISO, and other styles
29

Yang, Zongxin, Xu Yang, Long Wu, Jiemin Hu, Bo Zou, Yong Zhang, and Jianlong Zhang. "Pre-Inpainting Convolutional Skip Triple Attention Segmentation Network for AGV Lane Detection in Overexposure Environment." Applied Sciences 12, no. 20 (October 21, 2022): 10675. http://dx.doi.org/10.3390/app122010675.

Full text
Abstract:
Visual navigation is an important guidance method for industrial automated guided vehicles (AGVs). In the actual guidance, the overexposure environment may be encountered by the AGV lane image, which seriously reduces the accuracy of lane detection. Although the image segmentation method based on deep learning is widely used in lane detection, it cannot solve the problem of overexposure of lane images. At the same time, the requirements of segmentation accuracy and inference speed cannot be met simultaneously by existing segmentation networks. Aiming at the problem of incomplete lane segmentation in an overexposure environment, a lane detection method combining image inpainting and image segmentation is proposed. In this method, the overexposed lane image is repaired and reconstructed by the MAE network, and then the image is input into the image segmentation network for lane segmentation. In addition, a convolutional skip triple attention (CSTA) image segmentation network is proposed. CSTA improves the inference speed of the model under the premise of ensuring high segmentation accuracy. Finally, the lane segmentation performance of the proposed method is evaluated in three image segmentation evaluation metrics (IoU, F1-score, and PA) and inference time. Experimental results show that the proposed CSTA network has higher segmentation accuracy and faster inference speed.
APA, Harvard, Vancouver, ISO, and other styles
30

Platero, Carlos, and M. Carmen Tobar. "A Multiatlas Segmentation Using Graph Cuts with Applications to Liver Segmentation in CT Scans." Computational and Mathematical Methods in Medicine 2014 (2014): 1–16. http://dx.doi.org/10.1155/2014/182909.

Full text
Abstract:
An atlas-based segmentation approach is presented that combines low-level operations, an affine probabilistic atlas, and a multiatlas-based segmentation. The proposed combination provides highly accurate segmentation due to registrations and atlas selections based on the regions of interest (ROIs) and coarse segmentations. Our approach shares the following common elements between the probabilistic atlas and multiatlas segmentation: (a) the spatial normalisation and (b) the segmentation method, which is based on minimising a discrete energy function using graph cuts. The method is evaluated for the segmentation of the liver in computed tomography (CT) images. Low-level operations define a ROI around the liver from an abdominal CT. We generate a probabilistic atlas using an affine registration based on geometry moments from manually labelled data. Next, a coarse segmentation of the liver is obtained from the probabilistic atlas with low computational effort. Then, a multiatlas segmentation approach improves the accuracy of the segmentation. Both the atlas selections and the nonrigid registrations of the multiatlas approach use a binary mask defined by coarse segmentation. We experimentally demonstrate that this approach performs better than atlas selections and nonrigid registrations in the entire ROI. The segmentation results are comparable to those obtained by human experts and to other recently published results.
APA, Harvard, Vancouver, ISO, and other styles
31

Alhassan, Afnan M. "Enhanced Fuzzy Elephant Herding Optimization-Based OTSU Segmentation and Deep Learning for Alzheimer’s Disease Diagnosis." Mathematics 10, no. 8 (April 11, 2022): 1259. http://dx.doi.org/10.3390/math10081259.

Full text
Abstract:
Several neurological illnesses and diseased sites have been studied, along with the anatomical framework of the brain, using structural MRI (sMRI). It is critical to diagnose Alzheimer’s disease (AD) patients in a timely manner to implement preventative treatments. The segmentation of brain anatomy and categorization of AD have received increased attention since they can deliver good findings spanning a vast range of information. The first research gap considered in this work is the real-time efficiency of OTSU segmentation, which is not high, despite its simplicity and good accuracy. A second issue is that feature extraction could be automated by implementing deep learning techniques. To improve picture segmentation’s real-timeliness, enhanced fuzzy elephant herding optimization (EFEHO) was used for OTSU segmentation, and named EFEHO-OTSU. The main contribution of this work is twofold. One is utilizing EFEHO in the recommended technique to seek the optimal segmentation threshold for the OTSU method. Second, dual attention multi-instance deep learning network (DA-MIDL) is recommended for the timely diagnosis of AD and its prodromal phase, mild cognitive impairment (MCI). Tests show that this technique converges faster and takes less time than the classic OTSU approach without reducing segmentation performance. This study develops a valuable tool for quick picture segmentation with good real-time efficiency. Compared to numerous conventional techniques, the suggested study attains improved categorization performance regarding accuracy and transferability.
APA, Harvard, Vancouver, ISO, and other styles
32

Duong, Felix, Michael Gadermayr, Dorit Merhof, Christiane Kuhl, Philipp Bruners, Sven H. Loosen, Christoph Roderburg, Daniel Truhn, and Maximilian F. Schulze-Hagen. "Automated major psoas muscle volumetry in computed tomography using machine learning algorithms." International Journal of Computer Assisted Radiology and Surgery 17, no. 2 (December 20, 2021): 355–61. http://dx.doi.org/10.1007/s11548-021-02539-2.

Full text
Abstract:
Abstract Purpose The psoas major muscle (PMM) volume serves as an opportunistic imaging marker in cross-sectional imaging datasets for various clinical applications. Since manual segmentation is time consuming, two different automated segmentation methods, a generative adversarial network architecture (GAN) and a multi-atlas segmentation (MAS), as well as a combined approach of both, were investigated in terms of the accuracy of automated volumetry in given CT datasets. Materials and methods The bilateral PMM was manually segmented by a radiologist in 34 abdominal CT scans, resulting in 68 single 3D muscle segmentations as training data. Three different methods were tested for their ability to generate automated image segmentations: a GAN-based and a MAS-based approach and a combined approach of both methods (COM). The bilateral PMM volume (PMMV) was calculated in cm3 by each algorithm for every CT. Results were compared to the corresponding ground truth using the Dice similarity coefficient (DSC), Spearman’s correlation coefficient and the Wilcoxon signed-rank test. Results Mean PMMV was 239 ± 7.0 cm3 for the ground truth and 308 ± 9.6 cm3, 306 ± 9.5 cm3 and 243 ± 7.3 cm3 for the CNN, MAS and COM, respectively. Compared to the ground truth, the CNN and MAS overestimated the PMMV significantly (+28.9% and +28.0%, p < 0.001), while the results of the COM were quite accurate (+0.7%, p = 0.33). Spearman’s correlation coefficients were 0.38, 0.62 and 0.73, and the DSCs were 0.75 [95%CI: 0.56–0.88], 0.73 [95%CI: 0.54–0.85] and 0.82 [95%CI: 0.65–0.90] for the CNN, MAS and COM, respectively. Conclusion The combined approach was able to efficiently exploit the advantages of both methods (GAN and MAS), resulting in a significantly higher accuracy in PMMV predictions compared to the isolated implementations of both methods. Even with the relatively small set of training data, the segmentation accuracy of this hybrid approach was relatively close to that of the radiologist.
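
The agreement statistics reported above (Spearman's correlation and the Wilcoxon signed-rank test on paired volumes) are available in SciPy. A brief sketch follows, assuming one automated and one manual volume per patient; variable and function names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

def compare_volumes(auto_volumes_cm3, manual_volumes_cm3):
    """Paired comparison of automated vs. manual psoas muscle volumes."""
    auto = np.asarray(auto_volumes_cm3, dtype=float)
    manual = np.asarray(manual_volumes_cm3, dtype=float)
    rho, _ = spearmanr(auto, manual)          # rank correlation between methods
    _, p_value = wilcoxon(auto, manual)       # paired test for systematic bias
    mean_bias_pct = 100.0 * (auto.mean() - manual.mean()) / manual.mean()
    return rho, p_value, mean_bias_pct
```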
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Zhenwei, Mengli Jia, Xiaoli Yang, and Mengying Xu. "Blood Vessel Segmentation of Retinal Image Based on Dense-U-Net Network." Micromachines 12, no. 12 (November 29, 2021): 1478. http://dx.doi.org/10.3390/mi12121478.

Full text
Abstract:
The accurate segmentation of retinal blood vessels in fundus images is of great practical significance for helping doctors diagnose fundus diseases. To address the serious segmentation errors and low accuracy of traditional retinal segmentation, a scheme combining U-Net and Dense-Net was proposed. Firstly, the vascular feature information was enhanced by fusing contrast-limited histogram equalization, median filtering, data normalization and multi-scale morphological transformation, and artifacts were corrected by adaptive gamma correction. Secondly, randomly extracted image patches were used as training data to augment the data and improve the generalization ability. Thirdly, stochastic gradient descent was used to optimize the Dice loss function and improve the segmentation accuracy. Finally, the Dense-U-net model was used for segmentation. The specificity, accuracy, sensitivity and AUC of this algorithm are 0.9896, 0.9698, 0.7931, 0.8946 and 0.9738, respectively. The proposed method improves the segmentation accuracy of retinal vessels, and of small vessels in particular.
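The preprocessing chain described above maps onto standard OpenCV primitives; the sketch below is a rough stand-in, assuming an 8-bit grayscale channel as input, with the clip limit, kernel size and gamma chosen for illustration rather than taken from the paper.

```python
import cv2
import numpy as np

def preprocess_fundus(gray: np.ndarray, gamma: float = 1.2) -> np.ndarray:
    """Contrast enhancement, denoising, normalization and gamma correction."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)                       # contrast-limited equalization
    filtered = cv2.medianBlur(enhanced, 3)             # suppress impulse noise
    norm = cv2.normalize(filtered, None, 0, 255, cv2.NORM_MINMAX)
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    return cv2.LUT(norm, lut)                          # gamma correction
```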
APA, Harvard, Vancouver, ISO, and other styles
34

Cruz-Aceves, I., J. G. Avina-Cervantes, J. M. Lopez-Hernandez, M. G. Garcia-Hernandez, M. Torres-Cisneros, H. J. Estrada-Garcia, and A. Hernandez-Aguirre. "Automatic Image Segmentation Using Active Contours with Univariate Marginal Distribution." Mathematical Problems in Engineering 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/419018.

Full text
Abstract:
This paper presents a novel automatic image segmentation method based on the theory of active contour models and estimation of distribution algorithms. The proposed method uses the univariate marginal distribution model to infer statistical dependencies between the control points on different active contours. These contours have been generated through an alignment process of reference shape priors, in order to increase the exploration and exploitation capabilities relative to different interactive segmentation techniques. The proposed method is applied to the segmentation of the hollow core in microscopic images of photonic crystal fibers, and it is also used to segment the human heart and ventricular areas from computed tomography and magnetic resonance image datasets, respectively. Moreover, to evaluate the performance of the medical image segmentations against regions outlined by experts, a set of similarity measures has been adopted. The experimental results suggest that the proposed image segmentation method outperforms the traditional active contour model and the interactive Tseng method in terms of segmentation accuracy and stability.
APA, Harvard, Vancouver, ISO, and other styles
35

Massoptier, Laurent, Avishkar Misra, Arcot Sowmya, and Sergio Casciaro. "Combining Graph-Cut Technique and Anatomical Knowledge for Automatic Segmentation of Lungs Affected by Diffuse Parenchymal Disease in HRCT Images." International Journal of Image and Graphics 11, no. 04 (October 2011): 509–29. http://dx.doi.org/10.1142/s0219467811004202.

Full text
Abstract:
Accurate and automated lung segmentation in high-resolution computed tomography (HRCT) is highly challenged by the presence of pathologies affecting lung parenchyma appearance and borders. The algorithm presented employs an anatomical model-driven approach and systematic incremental knowledge acquisition to produce a coarse lung delineation, used as initialization for the graph-cut algorithm. The proposed method is evaluated on a dataset of 49 HRCT cases including various lung disease patterns. The accuracy of the method is assessed using the Dice similarity coefficient (DSC) and shape differentiation metrics (d_mean, d_rms), by comparing the outputs of automatic lung segmentations with manual ones. The proposed automatic method demonstrates high segmentation accuracy (DSC = 96.64%, d_mean = 1.75 mm, d_rms = 3.27 mm) with low variation that depends on the lung disease pattern. It also presents good improvement over the initial lung segmentation (ΔDSC = 4.74%, Δd_mean = -3.67 mm, Δd_rms = -6.25 mm), including substantial gains (maximum values of ΔDSC = 58.22% and Δd_mean = -78.66 mm) when the anatomy-driven algorithm reaches its limit. Segmentation evaluation shows that the method can accurately segment lungs even in the presence of disease patterns, with some limitations in the apices and bases of the lungs. Therefore, the developed automatic segmentation method is a good candidate for the first stage of a computer-aided diagnosis system for diffuse lung diseases.
APA, Harvard, Vancouver, ISO, and other styles
36

Ming, Dongping, Wen Zhou, Lu Xu, Min Wang, and Yanni Ma. "Coupling Relationship Among Scale Parameter, Segmentation Accuracy, and Classification Accuracy In GeOBIA." Photogrammetric Engineering & Remote Sensing 84, no. 11 (November 1, 2018): 681–93. http://dx.doi.org/10.14358/pers.84.11.681.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Jiang, Yun, Falin Wang, Jing Gao, and Simin Cao. "Multi-Path Recurrent U-Net Segmentation of Retinal Fundus Image." Applied Sciences 10, no. 11 (May 29, 2020): 3777. http://dx.doi.org/10.3390/app10113777.

Full text
Abstract:
Diabetes can induce diseases including diabetic retinopathy, cataracts and glaucoma, and the blindness caused by these diseases is irreversible. Early analysis of retinal fundus images, including optic disc and optic cup detection and retinal blood vessel segmentation, can effectively identify these diseases. Existing methods lack sufficient discrimination power for the fundus image and are easily affected by pathological regions. This paper proposes a novel multi-path recurrent U-Net architecture for the segmentation of retinal fundus images. The effectiveness of the proposed network structure was demonstrated on two segmentation tasks: optic disc and optic cup segmentation, and retinal vessel segmentation. Our method achieved state-of-the-art results on the Drishti-GS1 dataset. For optic disc segmentation, the accuracy and Dice values reached 0.9967 and 0.9817, respectively; for optic cup segmentation, the accuracy and Dice values reached 0.9950 and 0.8921, respectively. Our proposed method was also verified on the DRIVE retinal blood vessel segmentation dataset and achieved a good accuracy rate.
APA, Harvard, Vancouver, ISO, and other styles
38

Jia, Zheshu, and Deyun Chen. "A Segmentation Algorithm of Image Semantic Sequence Data Based on Graph Convolution Network." Security and Communication Networks 2021 (April 22, 2021): 1–11. http://dx.doi.org/10.1155/2021/5596820.

Full text
Abstract:
Image semantic data contain multilevel feature information. In practice, existing segmentation algorithms have limitations that leave the final segmentation accuracy too low. To solve this problem, a segmentation algorithm for image semantic sequence data based on a graph convolution network is constructed. The graph convolution network is used to construct the image search process, and the semantic sequence data are extracted. After the qualified data points are accumulated, the gradient amplitude forms a complete rotational field with no scatter field during the diffusion process, which broadens the application scope of the algorithm, controls the accuracy of the segmentation algorithm, and completes the construction of the data segmentation algorithm. After the experimental dataset is prepared and the semantic segmentation direction is defined, we compare our method with four other methods. The results show that the segmentation algorithm designed in this paper has the highest accuracy.
APA, Harvard, Vancouver, ISO, and other styles
39

Deng, Hanbing, Tongyu Xu, Yuncheng Zhou, and Teng Miao. "Depth Density Achieves a Better Result for Semantic Segmentation with the Kinect System." Sensors 20, no. 3 (February 3, 2020): 812. http://dx.doi.org/10.3390/s20030812.

Full text
Abstract:
Image segmentation is one of the most important methods for animal phenome research. Since the advent of deep learning, many researchers have applied multilayer convolutional neural networks to image segmentation problems. Such networks simplify the task through automatic feature extraction, but many struggle to output accurate details in pixel-level segmentation. In this paper, we propose a new concept: depth density. Based on a depth image produced by a Kinect system, we design a function to calculate the depth density value of each pixel and feed this value back into the semantic segmentation result to improve accuracy. In the experiment, we choose Simmental cattle as the target of image segmentation and fully convolutional networks (FCN) as the verification networks. We show that depth density can improve four metrics of semantic segmentation (pixel accuracy, mean accuracy, mean intersection over union, and frequency-weighted intersection over union) by 2.9%, 0.3%, 11.4%, and 5.02%, respectively. The result shows that depth information produced by Kinect can improve the accuracy of the semantic segmentation of FCN. This provides a new way of analyzing the phenotype information of animals.
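The four metrics quoted above (pixel accuracy, mean accuracy, mean IoU and frequency-weighted IoU) are standard and can be computed from a confusion matrix; the sketch below is a generic illustration, not the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, n_classes: int):
    """Pixel accuracy, mean accuracy, mean IoU and frequency-weighted IoU."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (truth.ravel(), pred.ravel()), 1)    # rows: truth, cols: prediction
    tp = np.diag(cm).astype(float)
    gt_per_class = cm.sum(axis=1).astype(float)
    union = gt_per_class + cm.sum(axis=0) - tp
    with np.errstate(divide="ignore", invalid="ignore"):
        iou = tp / union
        class_acc = tp / gt_per_class
    pixel_acc = tp.sum() / cm.sum()
    mean_acc = np.nanmean(class_acc)
    mean_iou = np.nanmean(iou)
    fw_iou = np.nansum((gt_per_class / cm.sum()) * iou)
    return pixel_acc, mean_acc, mean_iou, fw_iou
```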
APA, Harvard, Vancouver, ISO, and other styles
40

Martin, Gale L., Mosfeq Rashid, and James A. Pittman. "Integrated Segmentation and Recognition through Exhaustive Scans or Learned Saccadic Jumps." International Journal of Pattern Recognition and Artificial Intelligence 07, no. 04 (August 1993): 831–47. http://dx.doi.org/10.1142/s021800149300042x.

Full text
Abstract:
This paper advances two approaches to integrating handwritten character segmentation and recognition within one system, where the underlying function is learned by a backpropagation neural network. Integrated segmentation and recognition is necessary when characters overlap or touch, or when an individual character is broken up. The first approach exhaustively scans a field of characters, effectively creating a possible segmentation at each scan point. A neural net is trained to both identify when its input window is centered over a character, and if it is, to classify the character. This approach is similar to most recently advanced approaches to integrating segmentation and recognition, and has the common flaw of generating too many possible segmentations to be truly efficient. The second approach overcomes this weakness without reducing accuracy by training a neural network to mimic the ballistic and corrective saccades (eye movements) of human vision. A single neural net learns to jump from character to character, making corrective jumps when necessary, and to classify the centered character when properly fixated. The significant aspect of this system is that the neural net learns to both control what is in its input window as well as to recognize what is in the window. High accuracy results are reported for a standard database of handprinted digits for both approaches.
APA, Harvard, Vancouver, ISO, and other styles
41

Pijpker, Peter A. J., Tim S. Oosterhuis, Max J. H. Witjes, Chris Faber, Peter M. A. van Ooijen, Jiří Kosinka, Jos M. A. Kuijlen, Rob J. M. Groen, and Joep Kraeima. "A semi-automatic seed point-based method for separation of individual vertebrae in 3D surface meshes: a proof of principle study." International Journal of Computer Assisted Radiology and Surgery 16, no. 9 (May 27, 2021): 1447–57. http://dx.doi.org/10.1007/s11548-021-02407-z.

Full text
Abstract:
Purpose: The purpose of this paper is to present and validate a new semi-automated 3D surface mesh segmentation approach that streamlines the laborious separation of individual human vertebrae in the spinal virtual surgical planning workflow, and to make a direct accuracy and segmentation-time comparison with the current standard segmentation method. Methods: The proposed semi-automatic method uses the 3D bone surface derived from CT image data for seed point-based 3D mesh partitioning. The accuracy of the proposed method was evaluated on a representative patient dataset. In addition, the influence of the number of seed points used was studied. The investigators analyzed whether there was a reduction in segmentation time compared to manual segmentation. Surface-to-surface accuracy measurements were applied to assess the concordance with the manual segmentation. Results: The results demonstrated a statistically significant reduction in segmentation time, while maintaining high accuracy compared to the manual segmentation. A considerably smaller error was found when the number of seed points was increased. Anatomical regions that include articulating areas tend to show the highest errors, while the posterior laminar surface yielded an almost negligible error. Conclusion: A novel seed point-initiated, surface-based segmentation method for the laborious separation of individual human vertebrae was presented. This proof-of-principle study demonstrated the accuracy of the proposed method on a clinical CT image dataset and its feasibility for spinal virtual surgical planning applications.
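As an illustration of a surface-to-surface accuracy measurement of the kind mentioned above, the sketch below computes a symmetric mean nearest-neighbour distance between two mesh vertex sets; the exact metric used in the study may differ, so treat this as an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(vertices_a: np.ndarray, vertices_b: np.ndarray) -> float:
    """Symmetric mean nearest-neighbour distance between two (N, 3) vertex arrays."""
    d_ab = cKDTree(vertices_b).query(vertices_a)[0]   # each A vertex to nearest B vertex
    d_ba = cKDTree(vertices_a).query(vertices_b)[0]   # each B vertex to nearest A vertex
    return (d_ab.mean() + d_ba.mean()) / 2.0
```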
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Yong, Jing Wen Xu, Jun Fang Zhao, Yu Dan Zhao, and Xin Li. "An Improved Mean Shift Segmentation Method of High-Resolution Remote Sensing Image Based on LBP and Canny Features." Applied Mechanics and Materials 713-715 (January 2015): 1589–92. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.1589.

Full text
Abstract:
Mean shift is a robust approach to feature space analysis that has been widely used for natural scene and medical image segmentation. To address the fuzzy boundaries and low accuracy of the Mean shift segmentation method, this paper puts forward an improved Mean shift segmentation method for high-resolution remote sensing images based on LBP and Canny features. The results show that the improved Mean shift approach can enhance segmentation accuracy compared to the traditional Mean shift.
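For orientation, the two standard building blocks the improved method combines with LBP features are readily available in OpenCV; the sketch below shows plain mean shift filtering followed by Canny edge detection, with assumed radii and thresholds, and does not reproduce the paper's LBP-based improvements.

```python
import cv2

def mean_shift_and_canny(bgr, spatial_radius=21, color_radius=30,
                         canny_low=50, canny_high=150):
    """Baseline mean shift smoothing plus Canny edges on the smoothed image."""
    shifted = cv2.pyrMeanShiftFiltering(bgr, spatial_radius, color_radius)
    gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_low, canny_high)
    return shifted, edges
```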
APA, Harvard, Vancouver, ISO, and other styles
43

Hagerty, Michael R. "Can segmentation improve predictive accuracy in conjoint analysis?" Journal of the Academy of Marketing Science 21, no. 4 (September 1993): 353–55. http://dx.doi.org/10.1007/bf02894528.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Sridhar, Bokka. "Investigations of Medical Image Segmentation Methods with Inclusion Mathematical Morphological Operations." Traitement du Signal 38, no. 5 (October 31, 2021): 1531–40. http://dx.doi.org/10.18280/ts.380530.

Full text
Abstract:
Medical image segmentation research is becoming more effective through the use of mathematical morphological (MM) operators. Image segmentation methods include both supervised and unsupervised approaches, and MM operators are highly effective in developing a computer-aided diagnosis (CAD) system. Medical images such as mammograms generally have low contrast, so radiologists face difficulties in interpreting them; as a result, diagnosis produces a high rate of false positives and false negatives. The proposed work improves the quality of image segmentation by combining morphological operations with other methods, such as the watershed transform, fuzzy-logic-based techniques, curvelets and MRF, to detect masses and calcifications in mammograms. Classification of masses and evaluation of the segmentation process are performed with an artificial neural network and other performance metrics. These methods increase the accuracy, specificity and sensitivity of mammography and reduce unnecessary biopsies.
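As a concrete example of the morphological operations referred to, the sketch below applies top-hat and black-hat filtering to enhance bright masses and calcifications against the background before segmentation; the structuring-element size is an illustrative assumption, not a setting from the paper.

```python
import cv2
import numpy as np

def morphological_enhance(gray: np.ndarray, kernel_size: int = 15) -> np.ndarray:
    """Enhance bright details (top-hat) and suppress dark ones (black-hat)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)      # bright small structures
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # dark small structures
    enhanced = cv2.add(gray, tophat)
    return cv2.subtract(enhanced, blackhat)
```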
APA, Harvard, Vancouver, ISO, and other styles
45

Mporas, Iosif, Todor Ganchev, and Nikos Fakotakis. "Phonetic Segmentation of Emotional Speech with HMM-Based Methods." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 07 (November 2010): 1159–79. http://dx.doi.org/10.1142/s0218001410008329.

Full text
Abstract:
In the present work we address the problem of phonetic segmentation of emotional speech. Investigating various traditional and recent HMM-based methods for speech segmentation, which we elaborated for the specifics of emotional speech segmentation, we demonstrate that the HMM-based method with hybrid embedded-isolated training offers advantageous segmentation accuracy, when compared to other HMM-based models used so far. The increased precision of the segmentation is a consequence of the iterative training process employed in the hybrid-training method, which refines the model parameters and the estimated phonetic boundaries taking advantage of the estimations made at previous iterations. Furthermore, we demonstrate the benefits of using purposely-built models for each target category of emotional speech, when compared to the case of one common model built solely from neutral speech. This advantage, in terms of segmentation accuracy, justifies the effort for creating and employing the purposely-built segmentation models per emotion category, since it significantly improves the overall segmentation accuracy.
APA, Harvard, Vancouver, ISO, and other styles
46

Weishaupt, L. L., T. Vuong, A. Thibodeau-Antonacci, A. Garant, K. S. Singh, C. Miller, A. Martin, and S. Enger. "A121 QUANTIFYING INTER-OBSERVER VARIABILITY IN THE SEGMENTATION OF RECTAL TUMORS IN ENDOSCOPY IMAGES AND ITS EFFECTS ON DEEP LEARNING." Journal of the Canadian Association of Gastroenterology 5, Supplement_1 (February 21, 2022): 140–42. http://dx.doi.org/10.1093/jcag/gwab049.120.

Full text
Abstract:
Abstract Background Tumor delineation in endoscopy images is a crucial part of clinical diagnoses and treatment planning for rectal cancer patients. However, it is challenging to detect and adequately determine the size of tumors in these images, especially for inexperienced clinicians. This motivates the need for a standardized, automated segmentation method. While deep learning has proven to be a powerful tool for medical image segmentation, it requires a large quantity of high-quality annotated training data. Since the annotation of endoscopy images is prone to high inter-observer variability, creating a robust unbiased deep learning model for this task is challenging. Aims To quantify the inter-observer variability in the manual segmentation of tumors in endoscopy images of rectal cancer patients and investigate an automated approach using deep learning. Methods Three gastrointestinal physicians and radiation oncologists (G1, G2, and G3) segmented 2833 endoscopy images into tumor and non-tumor regions. The whole image classifications and the pixelwise classifications into tumor and non-tumor were compared to quantify the inter-observer variability. Each manual annotator is from a different institution. Three different deep learning architectures (FCN32, U-Net, and SegNet) were trained on the binary contours created by G2. This naive approach investigates the effectiveness of neglecting any information about the uncertainty associated with the task of tumor delineation. Finally, segmentations from G2 and the deep learning models’ predictions were compared against ground truth labels from G1 and G3, and accuracy, sensitivity, specificity, precision, and F1 scores were computed for images where both segmentations contained tumors. Results The deep-learning segmentation took less than 1 second, while manual segmentation took approximately 10 seconds per image. There was significant inter-observer variability for the whole-image classifications made by the manual annotators (Figure 1A). The segmentation scores achieved by the deep learning models (SegNet F1:0.80±0.08) were comparable to the inter-observer variability for the pixel-wise image classification (Figure 1B). Conclusions The large inter-observer variability observed in this study indicates a need for an automated segmentation tool for tumors in endoscopy images of rectal cancer patients. While deep learning models trained on a single observer’s labels can segment tumors with an accuracy similar to the inter-observer variability, these models do not accurately reflect the intrinsic uncertainty associated with tumor delineation. In our ongoing studies, we investigate training a model with all observers’ contours to reflect the uncertainty associated with the tumor segmentations. Funding Agencies CIHRNSERC
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Yiping, Yiping Cao, and Wenxian Shi. "A Method of Leukocyte Segmentation Based on S Component and B Component Images." Journal of Innovative Optical Health Sciences 07, no. 01 (January 2014): 1450007. http://dx.doi.org/10.1142/s1793545814500072.

Full text
Abstract:
A leukocyte segmentation method based on S component and B component images is proposed. A threshold segmentation operation is applied to obtain two binary images from the S component and B component images. The samples used in this study are peripheral blood smears. From the two binary images it is easy to see that the gray values are the same at every corresponding pixel in the leukocyte cytoplasm region, but opposite in the other regions. This property means that an "IMAGE AND" operation can be employed on the two binary images to segment the cytoplasm region of the leukocyte. By performing an "IMAGE XOR" operation between the cytoplasm region and the nucleus region, the leukocyte segmentation can be retrieved effectively. The segmentation accuracy is evaluated by comparing the segmentation result of the proposed method with manual segmentation by a hematologist. Experimental results show that the proposed method achieves higher segmentation accuracy and also performs well when leukocytes overlap with erythrocytes. The average segmentation accuracy of the proposed method reaches 97.7% for segmenting five types of leukocyte. Good segmentation results provide an important foundation for automatic leukocyte recognition.
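The pipeline described above can be approximated with basic OpenCV operations; the sketch below is a hedged reconstruction in which Otsu thresholding stands in for the paper's threshold values (which are not given here) and the nucleus mask is assumed to come from an earlier step.

```python
import cv2
import numpy as np

def segment_leukocyte(bgr: np.ndarray, nucleus_mask: np.ndarray) -> np.ndarray:
    """Cytoplasm via AND of thresholded S and B components, then XOR with the nucleus.

    nucleus_mask: uint8 binary mask (0/255) of the nucleus, same size as the image.
    """
    s = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 1]    # S component of HSV
    b = bgr[:, :, 0]                                      # B component (OpenCV stores BGR)
    _, s_bin = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, b_bin = cv2.threshold(b, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cytoplasm = cv2.bitwise_and(s_bin, b_bin)             # "IMAGE AND"
    return cv2.bitwise_xor(cytoplasm, nucleus_mask)       # "IMAGE XOR" with nucleus mask
```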
APA, Harvard, Vancouver, ISO, and other styles
48

Ryselis, Karolis, Tomas Blažauskas, Robertas Damaševičius, and Rytis Maskeliūnas. "Agrast-6: Abridged VGG-Based Reflected Lightweight Architecture for Binary Segmentation of Depth Images Captured by Kinect." Sensors 22, no. 17 (August 24, 2022): 6354. http://dx.doi.org/10.3390/s22176354.

Full text
Abstract:
Binary object segmentation is a sub-area of semantic segmentation that could be used for a variety of applications. Semantic segmentation models could be applied to solve binary segmentation problems by introducing only two classes, but the models to solve this problem are more complex than actually required. This leads to very long training times, since there are usually tens of millions of parameters to learn in this category of convolutional neural networks (CNNs). This article introduces a novel abridged VGG-16 and SegNet-inspired reflected architecture adapted for binary segmentation tasks. The architecture has 27 times fewer parameters than SegNet but yields 86% segmentation cross-intersection accuracy and 93% binary accuracy. The proposed architecture is evaluated on a large dataset of depth images collected using the Kinect device, achieving an accuracy of 99.25% in human body shape segmentation and 87% in gender recognition tasks.
APA, Harvard, Vancouver, ISO, and other styles
49

Desser, Dmitriy, Francisca Assunção, Xiaoguang Yan, Victor Alves, Henrique M. Fernandes, and Thomas Hummel. "Automatic Segmentation of the Olfactory Bulb." Brain Sciences 11, no. 9 (August 28, 2021): 1141. http://dx.doi.org/10.3390/brainsci11091141.

Full text
Abstract:
The olfactory bulb (OB) has an essential role in the human olfactory pathway. A change in olfactory function is associated with a change of OB volume. OB volume has been shown to predict the prognosis of olfactory loss and is a biomarker for various neurodegenerative diseases, such as Alzheimer’s disease. Thus far, obtaining an OB volume for research purposes has been performed by manual segmentation alone, a very time-consuming and highly rater-biased process. As such, this process dramatically reduces the ability to produce fair and reliable comparisons between studies, as well as the processing of large datasets. Our study aims to solve this by proposing a novel methodological framework for the unbiased measurement of OB volume. In this paper, we present a fully automated tool that successfully performs such a task, accurately and quickly. In order to develop a stable and versatile algorithm and to train the neural network, we used four datasets consisting of whole-brain T1 and high-resolution T2 MRI scans, as well as the corresponding clinical information of the subject’s smelling ability. One dataset contained data of patients suffering from anosmia or hyposmia (N = 79), and the other three datasets contained data of healthy controls (N = 91). First, the manual segmentation labels of the OBs were created by two experienced raters, independently and blinded. The algorithm consisted of the following four steps: (1) multimodal data co-registration of whole-brain T1 images and T2 images, (2) template-based localization of OBs, (3) bounding box construction, and lastly, (4) segmentation of the OB using a 3D-U-Net. The results from the automated segmentation algorithm were tested on previously unseen data, achieving a mean dice coefficient (DC) of 0.77 ± 0.05, which is remarkably convergent with the inter-rater DC of 0.79 ± 0.08 estimated for the same cohort. Additionally, the average symmetric surface distance (ASSD) was 0.43 ± 0.10. Furthermore, the segmentations produced using our algorithm were manually rated by an independent blinded rater and reached an equivalent rating score of 5.95 ± 0.87, compared to a rating score of 6.23 ± 0.87 for the first rater’s segmentation and 5.92 ± 0.81 for the second rater’s manual segmentation. Taken together, these results support the success of our tool in producing automatic, fast (3–5 min per subject) and reliable segmentations of the OB, with accuracy virtually matching the current gold standard technique for OB segmentation. In conclusion, we present a newly developed ready-to-use tool that can perform the segmentation of OBs based on multimodal data consisting of T1 whole-brain images and T2 coronal high-resolution images. The accuracy of the segmentations predicted by the algorithm matches the manual segmentations made by two well-experienced raters. This method holds potential for immediate implementation in clinical practice. Furthermore, its ability to perform quick and accurate processing of large datasets may provide a valuable contribution to advancing our knowledge of the olfactory system, in health and disease. Specifically, our framework may integrate the use of olfactory bulb volume (OBV) measurements for the diagnosis and treatment of olfactory loss and improve the prognosis and treatment options of olfactory dysfunctions.
APA, Harvard, Vancouver, ISO, and other styles
50

Ahmad, Iftikhar, Sami ur Rehman, Imran Ullah Khan, Arfa Ali, Hussain Rahman, Sadeeq Jan, and Zahid Wadud. "A Hybrid Approach for Automatic Aorta Segmentation in Abdominal 3D CT Scan Images." Journal of Medical Imaging and Health Informatics 11, no. 3 (March 1, 2021): 712–19. http://dx.doi.org/10.1166/jmihi.2021.3364.

Full text
Abstract:
Due to rapid advancements in medical imaging, human anatomy can now be observed in finer detail, bringing new dimensions to diagnosis and treatment. One area that has benefitted from these advancements is aorta segmentation. Aorta segmentation is achieved by using anatomical features (the shape and position of the aorta) in specialized segmentation algorithms. These algorithms are broadly classified into two categories. The first comprises fast algorithms that exploit the spatial and intensity properties of images. The second consists of iterative algorithms that track aorta boundaries by optimizing a cost function. Fast algorithms offer lower computational cost, whereas iterative algorithms offer better segmentation accuracy, so there is a tradeoff between segmentation accuracy and computational cost. In this work, a hybrid approach for aorta segmentation in 3D computed tomography (CT) scan images is proposed. The proposed approach produces the high segmentation accuracy of intensity-based (fast) approaches at reduced computational cost. The proposed technique is evaluated using real-world 3D abdominal CT scan images. The proposed approach can be used either as a fast standalone segmentation procedure or as a pre-segmentation step for iterative, more accurate approaches.
APA, Harvard, Vancouver, ISO, and other styles