Journal articles on the topic "Ground segmentation"


Consult the top 50 journal articles for your research on the topic "Ground segmentation".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read the work's abstract online, provided the relevant parameters are included in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Aguiar, P. M. Q., and J. M. F. Moura. "Figure-ground segmentation from occlusion". IEEE Transactions on Image Processing 14, no. 8 (August 2005): 1109–24. http://dx.doi.org/10.1109/tip.2005.851712.

2

Kleinschmidt, A., C. Büchel, C. Hutton, and R. S. J. Frackowiak. "Hysteresis Effects in Figure-Ground Segmentation". NeuroImage 7, no. 4 (May 1998): S356. http://dx.doi.org/10.1016/s1053-8119(18)31189-3.

3

Herzog, Michael H., Sabine Kopmann, and Andreas Brand. "Intact figure-ground segmentation in schizophrenia". Psychiatry Research 129, no. 1 (November 2004): 55–63. http://dx.doi.org/10.1016/j.psychres.2004.06.008.

4

Milella, Annalisa, Giulio Reina, James Underwood, and Bertrand Douillard. "Visual ground segmentation by radar supervision". Robotics and Autonomous Systems 62, no. 5 (May 2014): 696–706. http://dx.doi.org/10.1016/j.robot.2012.10.001.

5

Shen, Huiying, James Coughlan, and Volodymyr Ivanchenko. "Figure-ground segmentation using factor graphs". Image and Vision Computing 27, no. 7 (June 2009): 854–63. http://dx.doi.org/10.1016/j.imavis.2009.02.006.

6

van der Putten, Joost, Fons van der Sommen, Jeroen de Groof, Maarten Struyvenberg, Svitlana Zinger, Wouter Curvers, Erik Schoon, Jacques Bergman, and Peter H. N. de With. "Modeling clinical assessor intervariability using deep hypersphere encoder–decoder networks". Neural Computing and Applications 32, no. 14 (November 21, 2019): 10705–17. http://dx.doi.org/10.1007/s00521-019-04607-w.

Abstract:
In medical imaging, a proper gold-standard ground truth, such as annotated segmentations by assessors or experts, is lacking or only scarcely available, and those segmentations suffer from large intervariability. Most state-of-the-art segmentation models do not take inter-observer variability into account and are fully deterministic in nature. In this work, we propose hypersphere encoder–decoder networks in combination with dynamic leaky ReLUs as a new method to explicitly incorporate inter-observer variability into a segmentation model. With this model, we can then generate multiple proposals based on the inter-observer agreement. As a result, the output segmentations of the proposed model can be tuned to typical margins inherent to the ambiguity in the data. For experimental validation, we provide a proof of concept on a toy data set as well as show improved segmentation results on two medical data sets. The proposed method has several advantages over current state-of-the-art segmentation models, such as interpretability in the uncertainty of segmentation borders. Experiments with a medical localization problem show that it offers improved biopsy localizations, which are on average 12% closer to the optimal biopsy location.
7

Yang, Michael Ying, and Bodo Rosenhahn. "SUPERPIXEL CUT FOR FIGURE-GROUND IMAGE SEGMENTATION". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 6, 2016): 387–94. http://dx.doi.org/10.5194/isprsannals-iii-3-387-2016.

Abstract:
Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as Min-Cut. Therefore, we propose a novel cost function that simultaneously minimizes the inter-class similarity while maximizing the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, which requires no high-level knowledge such as shape priors and scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.
8

Yang, Michael Ying, and Bodo Rosenhahn. "SUPERPIXEL CUT FOR FIGURE-GROUND IMAGE SEGMENTATION". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 6, 2016): 387–94. http://dx.doi.org/10.5194/isprs-annals-iii-3-387-2016.

Abstract:
Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as Min-Cut. Therefore, we propose a novel cost function that simultaneously minimizes the inter-class similarity while maximizing the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, which requires no high-level knowledge such as shape priors and scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.
9

Warfield, Simon K., Kelly H. Zou, and William M. Wells. "Validation of image segmentation by estimating rater bias and variance". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366, no. 1874 (April 11, 2008): 2361–75. http://dx.doi.org/10.1098/rsta.2008.0040.

Abstract:
The accuracy and precision of segmentations of medical images have been difficult to quantify in the absence of a 'ground truth' or reference standard segmentation for clinical data. Although physical or digital phantoms can help by providing a reference standard, they do not allow the reproduction of the full range of imaging and anatomical characteristics observed in clinical data. An alternative assessment approach is to compare with segmentations generated by domain experts. Segmentations may be generated by raters who are trained experts or by automated image analysis algorithms. Typically, these segmentations differ due to intra-rater and inter-rater variability. The most appropriate way to compare such segmentations has been unclear. We present here a new algorithm to enable the estimation of performance characteristics, and a true labelling, from observations of segmentations of imaging data where segmentation labels may be ordered or continuous measures. This approach may be used with, among others, surface, distance transform or level-set representations of segmentations, and can be used to assess whether or not a rater consistently overestimates or underestimates the position of a boundary.
10

Kimchi, Ruth, and Mary A. Peterson. "Figure-Ground Segmentation Can Occur Without Attention". Psychological Science 19, no. 7 (July 2008): 660–68. http://dx.doi.org/10.1111/j.1467-9280.2008.02140.x.

11

Kimchi, R., and M. A. Peterson. "Figure-ground segmentation can occur without attention". Journal of Vision 8, no. 6 (April 2, 2010): 825. http://dx.doi.org/10.1167/8.6.825.

12

Chu, Phuong Minh, Seoungjae Cho, Kaisi Huang, and Kyungeun Cho. "Flood-fill-based object segmentation and tracking for intelligent vehicles". International Journal of Advanced Robotic Systems 16, no. 6 (November 1, 2019): 172988141988520. http://dx.doi.org/10.1177/1729881419885206.

Abstract:
In this article, an application for object segmentation and tracking for intelligent vehicles is presented. The proposed object segmentation and tracking method is implemented by combining three stages in each frame. First, based on our previous research on a fast ground segmentation method, the present approach segments three-dimensional point clouds into ground and non-ground points. The ground segmentation is important for clustering each object in subsequent steps. From the non-ground parts, we continue to segment objects using a flood-fill algorithm in the second stage. Finally, object tracking is implemented to determine the same objects over time in the final stage. This stage is performed based on likelihood probability calculated using features of each object. Experimental results demonstrate that the proposed system shows effective, real-time performance.
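The two-stage idea above (ground removal, then flood-fill clustering of the remaining points) is easy to picture on a 2D occupancy grid. The sketch below is a generic 4-connected flood fill in Python, not the authors' implementation; the grid construction and all names are illustrative assumptions.

```python
from collections import deque

import numpy as np

def flood_fill_clusters(occupied: np.ndarray) -> np.ndarray:
    """Label 4-connected components of a boolean occupancy grid.

    `occupied` marks cells holding non-ground points (ground removed
    beforehand); the result holds 0 for free cells and a cluster
    id >= 1 for occupied ones.
    """
    rows, cols = occupied.shape
    labels = np.zeros(occupied.shape, dtype=int)
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if occupied[r, c] and labels[r, c] == 0:
                next_id += 1
                labels[r, c] = next_id
                queue = deque([(r, c)])
                while queue:  # breadth-first flood fill
                    y, x = queue.popleft()
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and occupied[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_id
                            queue.append((ny, nx))
    return labels
```

Each resulting label then corresponds to one object hypothesis that a tracker can associate across frames.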
13

Liu, Xin, Jing Wen Xu, Jun Fang Zhao, Li Yong, and Zhan Xin. "The Comparison of Segmentation Results for High-Resolution Remote Sensing Image between eCognition and EDISON". Applied Mechanics and Materials 713-715 (January 2015): 373–76. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.373.

Abstract:
Focusing on high-resolution remote sensing image segmentation, this paper compares the segmentation results of eCognition and EDISON after tuning the relevant parameters. The experiments show that eCognition performs better than EDISON in segmenting more complex ground objects, while EDISON performs better in segmenting more uniform ground objects.
14

Stegmaier, Johannes, Nico Peter, Julia Portl, Ira V. Mang, Rasmus Schröder, Heike Leitte, Ralf Mikut, and Markus Reischl. "A framework for feedback-based segmentation of 3D image stacks". Current Directions in Biomedical Engineering 2, no. 1 (September 1, 2016): 437–41. http://dx.doi.org/10.1515/cdbme-2016-0097.

Abstract:
3D segmentation has become a widely used technique. However, automatic segmentation does not deliver high accuracy in optically dense images, and manual segmentation lowers the throughput drastically. Therefore, we present a workflow for 3D segmentation that is able to forecast segments based on a user-given ground truth. We provide the possibility to correct wrong forecasts and to repeatedly insert ground truth into the process. Our aim is to combine automated and manual segmentation and thereby to improve accuracy through a tunable amount of manual input.
15

Boerner, R., L. Hoegner, and U. Stilla. "VOXEL BASED SEGMENTATION OF LARGE AIRBORNE TOPOBATHYMETRIC LIDAR DATA". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1 (May 31, 2017): 107–14. http://dx.doi.org/10.5194/isprs-archives-xlii-1-w1-107-2017.

Abstract:
Point cloud segmentation and classification is currently a research highlight. Methods in this field create labelled data, where each point carries additional class information. Current approaches generate a graph over all points in the point cloud, calculate or learn descriptors, and train a matcher from the descriptors to the corresponding classes. Since these approaches need to visit each point in the point cloud iteratively, they result in long calculation times for large point clouds. Therefore, large point clouds need a generalization to save computation time. One kind of generalization is to cluster the raw points into a 3D grid structure represented by small volume units (i.e. voxels) used for further processing. This paper introduces a method that uses such a voxel structure to cluster a large point cloud into ground and non-ground points. The proposed method for ground detection first marks ground voxels with a region-growing approach. In a second step, non-ground voxels are searched and filtered in the ground segment to reduce the effects of over-segmentation. This filter uses the probability that a voxel mostly consists of last pulses and a discrete gradient in a local neighbourhood. The result is the ground label as a first classification result and connected segments of non-ground points. The test area of the river Mangfall in Bavaria, Germany, is used for the first processing.
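To make the region-growing step concrete, here is a heavily simplified sketch: points are binned into 2D cells, the lowest cell is used as the seed, and the ground label spreads to neighbours whose minimum heights differ by less than a threshold. Cell size, threshold, and the use of 2D cells instead of the paper's 3D voxels and last-pulse filter are illustrative assumptions.

```python
from collections import deque

import numpy as np

def ground_cells(points: np.ndarray, cell: float = 0.5, max_step: float = 0.3):
    """Label ground cells by region growing from the lowest cell.

    points: (N, 3) array of x, y, z coordinates.
    Returns the set of (i, j) grid cells labelled as ground.
    """
    # 2D cell index per point, plus the minimum height per cell
    ij = np.floor(points[:, :2] / cell).astype(int)
    height = {}
    for key, z in zip(map(tuple, ij), points[:, 2]):
        height[key] = min(z, height.get(key, np.inf))

    seed = min(height, key=height.get)          # lowest cell as seed
    ground, queue = {seed}, deque([seed])
    while queue:                                # grow over smooth neighbours
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (i + di, j + dj)
            if n in height and n not in ground \
                    and abs(height[n] - height[(i, j)]) < max_step:
                ground.add(n)
                queue.append(n)
    return ground
```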
16

Xiong, Lu, Yongkun Wen, Yuyao Huang, Junqiao Zhao, and Wei Tian. "Joint Unsupervised Learning of Depth, Pose, Ground Normal Vector and Ground Segmentation by a Monocular Camera Sensor". Sensors 20, no. 13 (July 3, 2020): 3737. http://dx.doi.org/10.3390/s20133737.

Abstract:
We propose a completely unsupervised approach to simultaneously estimate scene depth, ego-pose, ground segmentation and ground normal vector from only monocular RGB video sequences. In our approach, the estimates for the different scene structures mutually benefit each other through joint optimization. Specifically, we use a mutual information loss to pre-train the ground segmentation network before adding the corresponding self-learning label obtained by a geometric method. By using the static nature of the ground and its normal vector, the scene depth and ego-motion can be efficiently learned through the self-supervised learning procedure. Extensive experimental results on both the Cityscapes and KITTI benchmarks demonstrate a significant improvement in estimation accuracy for both scene depth and ego-pose with our approach. We also achieve an average error of about 3° for the estimated ground normal vectors. By deploying our proposed geometric constraints, the IOU accuracy of unsupervised ground segmentation is increased by 35% on the Cityscapes dataset.
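The ground normal that this method estimates can be illustrated, for points already known to be ground, with a plain least-squares plane fit: the normal is the direction of least variance. This generic geometric sketch (using numpy) stands in for, and is much simpler than, the paper's learned estimator.

```python
import numpy as np

def ground_normal(points: np.ndarray) -> np.ndarray:
    """Estimate the unit normal of a roughly planar point set (N, 3)."""
    centered = points - points.mean(axis=0)
    # The right-singular vector for the smallest singular value spans
    # the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal if normal[2] > 0 else -normal  # point the normal upward
```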
17

Ralph, Brandon C. W., Paul Seli, Vivian O. Y. Cheng, Grayden J. F. Solman, and Daniel Smilek. "Running the figure to the ground: Figure-ground segmentation during visual search". Vision Research 97 (April 2014): 65–73. http://dx.doi.org/10.1016/j.visres.2014.02.005.

18

Francis, Gregory. "Cortical dynamics of figure-ground segmentation: Shine-through". Vision Research 49, no. 1 (January 2009): 140–63. http://dx.doi.org/10.1016/j.visres.2008.10.002.

19

Elleuch, Jihen Frikha, Imen Khanfir Kallel, and Dorra Sellami Masmoudi. "New Ground Plane Segmentation Method for Electronic Cane". Journal of Image and Graphics 1, no. 2 (2013): 72–75. http://dx.doi.org/10.12720/joig.1.2.72-75.

20

Zou, Wenbin, Cong Bai, Kidiyo Kpalma, and Joseph Ronsin. "Online Glocal Transfer for Automatic Figure-Ground Segmentation". IEEE Transactions on Image Processing 23, no. 5 (May 2014): 2109–21. http://dx.doi.org/10.1109/tip.2014.2312287.

21

Norcia, A. "Imaging the time-course of Figure-Ground segmentation". Journal of Vision 7, no. 15 (March 28, 2010): 13. http://dx.doi.org/10.1167/7.15.13.

22

Francis, G. "Cortical dynamics of figure-ground segmentation: Shine through". Journal of Vision 7, no. 15 (March 28, 2010): 63. http://dx.doi.org/10.1167/7.15.63.

23

Francis, G. "Cortical dynamics of figure-ground segmentation: Shine-through". Journal of Vision 8, no. 6 (April 2, 2010): 828. http://dx.doi.org/10.1167/8.6.828.

24

Yazdani, S., R. Yusof, A. Karimian, A. H. Riazi, and M. Bennamoun. "A Unified Framework for Brain Segmentation in MR Images". Computational and Mathematical Methods in Medicine 2015 (2015): 1–17. http://dx.doi.org/10.1155/2015/829893.

Abstract:
Brain MRI segmentation is an important issue for discovering the brain structure and diagnosing subtle anatomical changes in different brain diseases. However, due to several artifacts, brain tissue segmentation remains a challenging task. The aim of this paper is to improve the automatic segmentation of the brain into gray matter, white matter, and cerebrospinal fluid in magnetic resonance images (MRI). We propose an automatic hybrid image segmentation method that integrates a modified statistical expectation-maximization (EM) method and spatial information combined with a support vector machine (SVM). The combined method gives more accurate results than its individual techniques can achieve, as demonstrated through experiments on both real data and simulated images. Experiments are carried out on both synthetic and real MRI. The results of the proposed technique are evaluated against manual segmentation results and other methods, based on real T1-weighted scans from the Internet Brain Segmentation Repository (IBSR) and simulated images from BrainWeb. The Kappa index is calculated to assess the performance of the proposed framework relative to the ground truth and expert segmentations. The results demonstrate that the proposed combined method performs satisfactorily on both simulated MRI and real brain datasets.
25

Tam, Lydia, Edward Lee, Michelle Han, Jason Wright, Leo Chen, Jenn Quon, Robert Lober, et al. "IMG-22. A DEEP LEARNING MODEL FOR AUTOMATIC POSTERIOR FOSSA PEDIATRIC BRAIN TUMOR SEGMENTATION: A MULTI-INSTITUTIONAL STUDY". Neuro-Oncology 22, Supplement_3 (December 1, 2020): iii359. http://dx.doi.org/10.1093/neuonc/noaa222.357.

Abstract:
BACKGROUND: Brain tumors are the most common solid malignancies in childhood, many of which develop in the posterior fossa (PF). Manual tumor measurements are frequently required to optimize registration into surgical navigation systems or for surveillance of nonresectable tumors after therapy. With recent advances in artificial intelligence (AI), automated MRI-based tumor segmentation is now feasible without requiring manual measurements. Our goal was to create a deep learning model for automated PF tumor segmentation that can register into navigation systems and provide volume output. METHODS: 720 pre-surgical MRI scans from five pediatric centers were divided into training, validation, and testing datasets. The study cohort comprised four PF tumor types: medulloblastoma, diffuse midline glioma, ependymoma, and brainstem or cerebellar pilocytic astrocytoma. Manual segmentation of the tumors by an attending neuroradiologist served as "ground truth" labels for model training and evaluation. We used 2D U-Net, an encoder-decoder convolutional neural network architecture, with a pre-trained ResNet50 encoder. We assessed ventricle segmentation accuracy on a held-out test set using the Dice similarity coefficient (0–1) and compared ventricular volume calculation between manual and model-derived segmentations using linear regression. RESULTS: Compared to the ground-truth expert human segmentation, the overall Dice score for model accuracy was 0.83 for automatic delineation of the four tumor types. CONCLUSIONS: In this multi-institutional study, we present a deep learning algorithm that automatically delineates PF tumors and outputs volumetric information. Our results demonstrate applied AI that is clinically applicable, potentially augmenting radiologists, neuro-oncologists, and neurosurgeons for tumor evaluation, surveillance, and surgical planning.
26

Cho, Seoungjae, Jonghyun Kim, Warda Ikram, Kyungeun Cho, Young-Sik Jeong, Kyhyun Um, and Sungdae Sim. "Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud". Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/582753.

Abstract:
A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
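A minimal sketch of the lowermost-heightmap idea from the abstract above: quantize points into a 2D grid, keep the lowest height per cell, and call a point ground when it sits close to that minimum. Cell size and tolerance are illustrative assumptions; the paper additionally works on voxel groups and minimizes comparisons between neighbouring voxels.

```python
import numpy as np

def segment_ground(points: np.ndarray, cell: float = 0.4, tol: float = 0.2):
    """Split points (N, 3) into ground / non-ground with a lowermost heightmap.

    A point counts as ground when it lies within `tol` metres of the
    lowest point falling into the same 2D grid cell.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)
    keys = [tuple(k) for k in ij]
    lowest = {}
    for k, z in zip(keys, points[:, 2]):
        lowest[k] = min(z, lowest.get(k, np.inf))
    is_ground = np.array([points[n, 2] - lowest[k] < tol
                          for n, k in enumerate(keys)])
    return points[is_ground], points[~is_ground]
```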
27

Zhao, Yawei, Yanju Liu, Yang Yu, and Jiawei Zhou. "Ground Segmentation Algorithm of Lidar Point Cloud Based on Ray-Ransac". International Journal of Circuits, Systems and Signal Processing 15 (August 12, 2021): 970–77. http://dx.doi.org/10.46300/9106.2021.15.104.

Abstract:
Aiming at the poor segmentation quality, low efficiency and poor robustness of the Ransac ground segmentation algorithm, this paper proposes a ground segmentation algorithm for lidar point clouds based on Ray-Ransac. The algorithm exploits the structural characteristics of three-dimensional lidar and uses ray segmentation to generate the original seed point set. The random sampling of the Ransac algorithm is limited to the original seed point set, which reduces the probability that the Ransac algorithm extracts outliers and reduces the computation. The Ransac algorithm is used to refine the ground model parameters so that the algorithm can adapt to undulating roads. The standard deviation of the distance from the points to the plane model is used as the distance threshold, and the allowable error range of the actual point cloud data is considered to effectively eliminate abnormal and erroneous points. The algorithm was tested on a simulation platform and a test vehicle. The experimental results show that the proposed lidar point cloud ground segmentation algorithm takes an average of 5.784 ms per frame, combining fast speed with good precision. It can adapt to uneven road surfaces and has high robustness.
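The core trick, restricting RANSAC's random sampling to ray-wise seed points and thresholding by the standard deviation of point-to-plane distances, can be sketched as follows. This is a generic reconstruction under stated assumptions, not the published code; `seeds` would hold, e.g., the lowest return per ray.

```python
import numpy as np

def ransac_ground_plane(points, seeds, iters=100, sigma_mult=1.0):
    """Fit a ground plane with RANSAC, sampling only from seed points.

    points: (N, 3) full cloud; seeds: (M, 3) candidate ground points.
    Returns ((normal, d), inlier_mask) with normal @ p + d = 0.
    """
    rng = np.random.default_rng(0)
    best_mask, best_model = None, None
    for _ in range(iters):
        sample = seeds[rng.choice(len(seeds), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        # simplified data-driven threshold: std of point-to-plane distances
        mask = dist < sigma_mult * dist.std()
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model, best_mask
```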
28

Chen, Ding-Jie, Jui-Ting Chien, Hwann-Tzong Chen, and Tyng-Luh Liu. "Unsupervised Meta-Learning of Figure-Ground Segmentation via Imitating Visual Effects". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8159–66. http://dx.doi.org/10.1609/aaai.v33i01.33018159.

Abstract:
This paper presents a "learning to learn" approach to figure-ground image segmentation. By exploring webly-abundant images of specific visual effects, our method can effectively learn the visual-effect internal representations in an unsupervised manner and use this knowledge to differentiate the figure from the ground in an image. Specifically, we formulate the meta-learning process as a compositional image-editing task that learns to imitate a certain visual effect and derive the corresponding internal representation. Such a generative process can help instantiate the underlying figure-ground notion and enables the system to accomplish the intended image segmentation. Whereas existing generative methods are mostly tailored to image synthesis or style transfer, our approach offers a flexible learning mechanism to model a general concept of figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. We validate our approach via extensive experiments on six datasets to demonstrate that the proposed model can be end-to-end trained without ground-truth pixel labeling yet outperforms the existing methods of unsupervised segmentation tasks.
29

Li, Jianzhang, Sven Nebelung, Björn Rath, Markus Tingart, and Jörg Eschweiler. "A novel combined level set model for automatic MR image segmentation". Current Directions in Biomedical Engineering 6, no. 3 (September 1, 2020): 20–23. http://dx.doi.org/10.1515/cdbme-2020-3006.

Abstract:
Medical image processing comes along with object segmentation, which is one of the most important tasks in that field. Nevertheless, noise and intensity inhomogeneity in magnetic resonance images challenge the segmentation procedure. The level set method has been widely used in object detection. The flexible integration of energy terms allows the level set method to deal with varying difficulties. In this paper, we introduce a novel combined level set model that mainly cooperates with an edge detector and a local region intensity descriptor. Noise and intensity inhomogeneities are eliminated by the local region intensity descriptor. The edge detector helps the level set model to locate the object boundaries more precisely. The proposed model was validated on synthesized images and magnetic resonance images of in vivo wrist bones. Compared with the ground truth, the proposed method reached a Dice similarity coefficient of > 0.99 on all image tests, while the compared segmentation approaches failed. The presented combined level set model can be used for object segmentation in magnetic resonance images.
30

Takhtawala, Ruquaiyah, Nataly Tapia Negrete, Madeleine Shaver, Turkay Kart, Yang Zhang, Vivian Youngjean Park, Min Jung Kim, Min-Ying Su, Daniel S. Chow, and Peter Chang. "Automated artificial intelligence quantification of fibroglandular tissue on breast MRI". Journal of Clinical Oncology 37, no. 15_suppl (May 20, 2019): e12071-e12071. http://dx.doi.org/10.1200/jco.2019.37.15_suppl.e12071.

Abstract:
Background: The objective of this study is to examine whether a convolutional neural network can be utilized to automate breast fibroglandular tissue segmentation, a risk factor for breast cancer, on MRIs. Methods: This institutional-review-board-approved study assessed retrospectively acquired MRI T1 pre-contrast image data for 238 patients. Ground-truth parameters were derived through manual segmentation. A hybrid 3D/2D U-Net architecture was developed for fibroglandular tissue segmentation. The network was trained with T1 pre-contrast MRI data and the corresponding ground-truth labels. The analysis started with image pre-processing: each MRI volume was re-sampled and normalized using z-scores. Convolution operations reduced 3D volumes into a 2D slice in the contracting arm of the U-Net architecture. Results: A 5-fold cross-validation was performed, and the Dice similarity coefficient was used to assess the accuracy of fibroglandular tissue segmentation. Cross-validation results showed that the automated hybrid CNN approach achieved a Dice similarity coefficient of 0.848 and a Pearson correlation of 0.961 in comparison to the ground truth for fibroglandular breast tissue segmentation, which demonstrates high accuracy. Conclusions: The results demonstrate a significant application of deep learning in accurately automating the segmentation of breast fibroglandular tissue.
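The Dice similarity coefficient used here for evaluation is defined, for binary masks A and B, as 2|A∩B| / (|A| + |B|); a minimal implementation:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * overlap / total if total else 1.0

# e.g. dice(model_mask, expert_mask) -> 1.0 for perfect agreement, 0.0 for none
```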
31

Kushwaha, S. K. P., K. R. Dayal, A. Singh, and K. Jain. "BUILDING FACADE AND ROOFTOP SEGMENTATION BY NORMAL ESTIMATION FROM UAV DERIVED RGB POINT CLOUD". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17 (November 29, 2019): 173–77. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w17-173-2019.

Abstract:
Point cloud segmentation is a significant process for organising an unstructured point cloud. In this study, an RGB point cloud was generated from images acquired by an Unmanned Aerial Vehicle (UAV). A dense urban area was considered, with varying planar features in the built-up environment and buildings with different numbers of floors. Initially, using the Cloth Simulation Filter (CSF), the ground and non-ground features in the Point Cloud Data (PCD) were separated, with the non-ground features comprising trees and buildings and the ground features comprising roads, ground vegetation, and open land. Subsequently, the tree and building points were classified using the CANUPO classifier. Noise filtering removed points with low density in clusters. Point cloud normals were generated for the building points. To segment the building elements, the normal vector components along the different directions (X, Y and Z) were used to separate the facade points from the roof points, since the surface normals corresponding to the roof contribute mostly to the Z component of the normal vector. The segmentation was validated by comparing the results with manually identified roof and facade points in the point cloud. The overall accuracies obtained for building roof and building facade segmentation are 90.86% and 84.83%, respectively.
32

Sáiz-Rubio, V., and F. Rovira-Más. "Dynamic segmentation to estimate vine vigor from ground images". Spanish Journal of Agricultural Research 10, no. 3 (August 1, 2012): 596. http://dx.doi.org/10.5424/sjar/2012103-508-11.

33

Li, Jiong (李炯), Kai Zhao (赵凯), Rui Bai (白睿), Yuan Zhu (朱愿), and Youchun Xu (徐友春). "Urban Ground Segmentation Algorithm Based on Ray Slope Threshold". Acta Optica Sinica 39, no. 9 (2019): 0928004. http://dx.doi.org/10.3788/aos201939.0928004.

34

Fang, Jun-long, Zhang Dong, and Qiao Yi-bo. "Back Ground Segmentation of Cucumber Target Based on DSP". Journal of Northeast Agricultural University (English Edition) 20, no. 3 (September 2013): 78–82. http://dx.doi.org/10.1016/s1006-8104(14)60012-x.

35

Casco, C., D. Guzzon, and G. Campana. "Sleep enables explicit figure-ground segmentation of unattended textures". Journal of Vision 8, no. 6 (March 20, 2010): 1128. http://dx.doi.org/10.1167/8.6.1128.

36

Baylis, G. "Deficit in figure-ground segmentation following closed head injury". Neuropsychologia 35, no. 8 (August 1997): 1133–38. http://dx.doi.org/10.1016/s0028-3932(97)00013-4.

37

Li, Yang. "Figure-ground segmentation based on class-independent shape priors". Journal of Electronic Imaging 27, no. 1 (February 14, 2018): 1. http://dx.doi.org/10.1117/1.jei.27.1.013018.

38

Rajasekaran, Bhavna, Koichiro Uriu, Guillaume Valentin, Jean-Yves Tinevez, and Andrew C. Oates. "Object Segmentation and Ground Truth in 3D Embryonic Imaging". PLOS ONE 11, no. 6 (June 22, 2016): e0150853. http://dx.doi.org/10.1371/journal.pone.0150853.

39

Likova, L. T., and C. W. Tyler. "Structure-from-transients: HMT/MST mediates figure/ground segmentation". Journal of Vision 5, no. 8 (September 1, 2005): 1059. http://dx.doi.org/10.1167/5.8.1059.

40

Huo, Yuankai, Zhoubing Xu, Hyeonsoo Moon, Shunxing Bao, Albert Assad, Tamara K. Moyo, Michael R. Savona, Richard G. Abramson, and Bennett A. Landman. "SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth". IEEE Transactions on Medical Imaging 38, no. 4 (April 2019): 1016–25. http://dx.doi.org/10.1109/tmi.2018.2876633.

41

Conci, M., and A. von Muhlenen. "Figure-ground segmentation determines contextual learning in visual search". Journal of Vision 9, no. 8 (September 3, 2010): 926. http://dx.doi.org/10.1167/9.8.926.

42

Bravo, Mary J., and Randolph Blake. "The contributions of figure and ground textures to segmentation". Vision Research 32, no. 9 (September 1992): 1793–800. http://dx.doi.org/10.1016/0042-6989(92)90174-h.

43

Salman Al-Shaikhli, Saif Dawood, Michael Ying Yang, and Bodo Rosenhahn. "Brain tumor classification and segmentation using sparse coding and dictionary learning". Biomedical Engineering / Biomedizinische Technik 61, no. 4 (August 1, 2016): 413–29. http://dx.doi.org/10.1515/bmt-2015-0071.

Abstract:
This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and its associated ground-truth segmentation: a feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray-scale voxel values of the training image data and the associated label voxel values of the ground-truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results on the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed with the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves accurate brain tumor classification and segmentation and outperforms state-of-the-art methods.
44

Xie, Wanyi, Dong Liu, Ming Yang, Shaoqing Chen, Benge Wang, Zhenzhu Wang, Yingwei Xia, Yong Liu, Yiren Wang, and Chaofang Zhang. "SegCloud: a novel cloud image segmentation model using a deep convolutional neural network for ground-based all-sky-view camera observation". Atmospheric Measurement Techniques 13, no. 4 (April 17, 2020): 1953–61. http://dx.doi.org/10.5194/amt-13-1953-2020.

Abstract:
Cloud detection and cloud properties have substantial applications in weather forecasting, signal attenuation analysis, and other cloud-related fields. Cloud image segmentation is the fundamental and important step in deriving cloud cover. However, traditional segmentation methods rely on low-level visual features of clouds and often fail to achieve satisfactory performance. Deep convolutional neural networks (CNNs) can extract high-level feature information of objects and have achieved remarkable success in many image segmentation fields. On this basis, a novel deep CNN model named SegCloud is proposed and applied for accurate cloud segmentation based on ground-based observation. Architecturally, SegCloud possesses a symmetric encoder–decoder structure. The encoder network combines low-level cloud features to form high-level, low-resolution cloud feature maps, whereas the decoder network restores the obtained high-level cloud feature maps to the same resolution as the input images. The Softmax classifier finally performs pixel-wise classification and outputs the segmentation results. SegCloud has powerful cloud discrimination capability and can automatically segment whole-sky images obtained by a ground-based all-sky-view camera. The performance of SegCloud is validated by extensive experiments, which show that SegCloud is effective and accurate for ground-based cloud segmentation and achieves better results than traditional methods. The accuracy and practicability of SegCloud are further proven by applying it to cloud cover estimation.
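To illustrate the symmetric encoder-decoder pattern with a final pixel-wise Softmax, here is a deliberately tiny PyTorch sketch. It is not the SegCloud network; the layer counts and channel widths are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Symmetric encoder-decoder for 2-class (cloud / sky) segmentation."""

    def __init__(self, classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(          # halves resolution twice
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(          # restores input resolution
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, classes, 2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.decoder(self.encoder(x))
        return logits.softmax(dim=1)           # pixel-wise class probabilities

# e.g. probs = TinyEncoderDecoder()(torch.rand(1, 3, 64, 64))  # -> (1, 2, 64, 64)
```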
45

Azimi, S., E. Vig, F. Kurz, and P. Reinartz. "SEGMENT-AND-COUNT: VEHICLE COUNTING IN AERIAL IMAGERY USING ATROUS CONVOLUTIONAL NEURAL NETWORKS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1 (September 26, 2018): 19–23. http://dx.doi.org/10.5194/isprs-archives-xlii-1-19-2018.

Abstract:
High-resolution aerial imagery can provide detailed and in some cases even real-time information about traffic related objects. Vehicle localization and counting using aerial imagery play an important role in a broad range of applications. Recently, convolutional neural networks (CNNs) with atrous convolution layers have shown better performance for semantic segmentation compared to conventional convolutional approaches. In this work, we propose a joint vehicle segmentation and counting method based on atrous convolutional layers. This method uses a multi-task loss function to simultaneously reduce pixel-wise segmentation and vehicle counting errors. In addition, the rectangular shapes of vehicle segmentations are refined using morphological operations. In order to evaluate the proposed methodology, we apply it to the public "DLR 3K" benchmark dataset, which contains aerial images with a ground sampling distance of 13 cm. Results show that our proposed method reaches 81.58% mean intersection over union in vehicle segmentation and an accuracy of 91.12% in vehicle counting, outperforming the baselines.
46

Rangkuti, Rizki Perdana, Vektor Dewanto, Aprinaldi, and Wisnu Jatmiko. "Utilizing Google Images for Training Classifiers in CRF-Based Semantic Segmentation". Journal of Advanced Computational Intelligence and Intelligent Informatics 20, no. 3 (May 19, 2016): 455–61. http://dx.doi.org/10.20965/jaciii.2016.p0455.

Abstract:
One promising approach to pixel-wise semantic segmentation is based on conditional random fields (CRFs). CRF-based semantic segmentation requires ground-truth annotations to supervisedly train the classifier that generates unary potentials. However, the amount of (public) annotated data available for training is small. We observe that the Internet can provide relevant images for any given keywords. Our idea is to convert keyword-related images to pixel-wise annotated images and then use them as training data. In particular, we rely on saliency filters to identify the salient object (foreground) of a retrieved image, which mostly agrees with the given keyword. We utilize the saliency information for back-and-foreground CRF-based semantic segmentation to further obtain pixel-wise ground-truth annotations. Experimental results show that training data from Google images improves both the learning performance and the accuracy of semantic segmentation. This suggests that our proposed method is promising for harvesting substantial training data from the Internet for training the classifier in CRF-based semantic segmentation.
47

Pan, Xuran, Fan Yang, Lianru Gao, Zhengchao Chen, Bing Zhang, Hairui Fan, and Jinchang Ren. "Building Extraction from High-Resolution Aerial Imagery Using a Generative Adversarial Network with Spatial and Channel Attention Mechanisms". Remote Sensing 11, no. 8 (April 15, 2019): 917. http://dx.doi.org/10.3390/rs11080917.

Abstract:
Segmentation of high-resolution remote sensing images is an important challenge with wide practical applications. The increasing spatial resolution provides fine details for image segmentation but also incurs segmentation ambiguities. In this paper, we propose a generative adversarial network with spatial and channel attention mechanisms (GAN-SCA) for the robust segmentation of buildings in remote sensing images. The segmentation network (generator) of the proposed framework is composed of the well-known semantic segmentation architecture (U-Net) and the spatial and channel attention mechanisms (SCA). The adoption of SCA enables the segmentation network to selectively enhance more useful features in specific positions and channels and enables improved results closer to the ground truth. The discriminator is an adversarial network with channel attention mechanisms that can properly discriminate the outputs of the generator and the ground truth maps. The segmentation network and adversarial network are trained in an alternating fashion on the Inria aerial image labeling dataset and Massachusetts buildings dataset. Experimental results show that the proposed GAN-SCA achieves a higher score (the overall accuracy and intersection over the union of Inria aerial image labeling dataset are 96.61% and 77.75%, respectively, and the F1-measure of the Massachusetts buildings dataset is 96.36%) and outperforms several state-of-the-art approaches.
48

Sinchuk, Yuriy, Pierre Kibleur, Jan Aelterman, Matthieu N. Boone, and Wim Van Paepegem. "Variational and Deep Learning Segmentation of Very-Low-Contrast X-ray Computed Tomography Images of Carbon/Epoxy Woven Composites". Materials 13, no. 4 (February 20, 2020): 936. http://dx.doi.org/10.3390/ma13040936.

Abstract:
The purpose of this work is to find an effective image segmentation method for lab-based micro-tomography (µ-CT) data of carbon fiber reinforced polymers (CFRP) with an insufficient contrast-to-noise ratio. The segmentation is the first step in creating a realistic geometry (based on µ-CT) for finite element modelling of textile composites on the meso-scale. Noise in X-ray imaging data of carbon/polymer composites poses a challenge for this segmentation due to the very low X-ray contrast between fiber and polymer and unclear fiber gradients. To the best of our knowledge, segmentation of µ-CT images of carbon/polymer textile composites with low-resolution data (voxel size close to the fiber diameter) remains poorly documented. In this paper, we propose and evaluate different approaches for solving the segmentation problem: variational on the one hand and deep-learning-based on the other. In the authors' view, both strategies present a novel and reliable ground for the segmentation of µ-CT data of CFRP woven composites. The predictions of both approaches were evaluated against a manual segmentation of the volume, constituting our "ground truth", which provides quantitative data on the segmentation accuracy. The highest segmentation accuracy (about 4.7% in terms of voxel-wise Dice similarity) was achieved using the deep learning approach with a U-Net neural network.
49

Arafati, Arghavan, Daisuke Morisawa, Michael R. Avendi, M. Reza Amini, Ramin A. Assadi, Hamid Jafarkhani, and Arash Kheradvar. "Generalizable fully automated multi-label segmentation of four-chamber view echocardiograms based on deep convolutional adversarial networks". Journal of The Royal Society Interface 17, no. 169 (August 2020): 20200267. http://dx.doi.org/10.1098/rsif.2020.0267.

Abstract:
A major issue in translation of the artificial intelligence platforms for automatic segmentation of echocardiograms to clinics is their generalizability. The present study introduces and verifies a novel generalizable and efficient fully automatic multi-label segmentation method for four-chamber view echocardiograms based on deep fully convolutional networks (FCNs) and adversarial training. For the first time, we used generative adversarial networks for pixel classification training, a novel method in machine learning not currently used for cardiac imaging, to overcome the generalization problem. The method's performance was validated against manual segmentations as the ground-truth. Furthermore, to verify our method's generalizability in comparison with other existing techniques, we compared our method's performance with a state-of-the-art method on our dataset in addition to an independent dataset of 450 patients from the CAMUS (cardiac acquisitions for multi-structure ultrasound segmentation) challenge. On our test dataset, automatic segmentation of all four chambers achieved a dice metric of 92.1%, 86.3%, 89.6% and 91.4% for LV, RV, LA and RA, respectively. LV volumes' correlation between automatic and manual segmentation were 0.94 and 0.93 for end-diastolic volume and end-systolic volume, respectively. Excellent agreement with chambers’ reference contours and significant improvement over previous FCN-based methods suggest that generative adversarial networks for pixel classification training can effectively design generalizable fully automatic FCN-based networks for four-chamber segmentation of echocardiograms even with limited number of training data.
50

Chen, Baifan, Hong Chen, Dian Yuan, and Lingli Yu. "3D Fast Object Detection Based on Discriminant Images and Dynamic Distance Threshold Clustering". Sensors 20, no. 24 (December 17, 2020): 7221. http://dx.doi.org/10.3390/s20247221.

Abstract:
The object detection algorithm based on vehicle-mounted lidar is a key component of the perception system on autonomous vehicles. It can provide high-precision and highly robust obstacle information for the safe driving of autonomous vehicles. However, most algorithms operate on a large amount of point cloud data, which makes real-time detection difficult. To solve this problem, this paper proposes a 3D fast object detection method based on three main steps: First, the ground segmentation by discriminant image (GSDI) method is used to convert point cloud data into discriminant images for ground point segmentation, which avoids processing the raw point cloud directly and improves the efficiency of ground point segmentation. Second, an image detector is used to generate the region of interest of the three-dimensional object, which effectively narrows the search range. Finally, the dynamic distance threshold clustering (DDTC) method is designed for the varying density of the point cloud data, which improves the detection of long-distance objects and avoids the over-segmentation produced by traditional algorithms. Experiments have shown that the algorithm meets the real-time requirements of autonomous driving while maintaining high accuracy.
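The idea behind dynamic distance threshold clustering, growing the merge distance with range because lidar returns thin out with distance, can be sketched as a greedy Euclidean clustering. The linear threshold model and both parameters are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def ddtc(points: np.ndarray, base: float = 0.5, gain: float = 0.01):
    """Greedy Euclidean clustering with a range-dependent distance threshold.

    Points far from the sensor are sparser, so the merge threshold
    `base + gain * range` grows with distance from the origin.
    Returns one cluster label per point.
    """
    n = len(points)
    labels = np.full(n, -1)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            thresh = base + gain * np.linalg.norm(points[j])
            near = np.where((labels == -1) &
                            (np.linalg.norm(points - points[j], axis=1) < thresh))[0]
            labels[near] = current
            stack.extend(near.tolist())
        current += 1
    return labels
```

A linear threshold is the simplest choice here; the key property is only that distant clusters tolerate larger point spacing.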