Dissertations / Theses on the topic 'Segmentation accuracy'


Consult the top 28 dissertations / theses for your research on the topic 'Segmentation accuracy.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Zhu, Fan. "Brain perfusion imaging : performance and accuracy." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8848.

Full text
Abstract:
Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. The purpose of my PhD research is to develop novel methodologies for improving the efficiency and quality of brain perfusion-imaging analysis so that clinical decisions can be made more accurately and in a shorter time. This thesis consists of three parts. First, my research investigates how parallel computing can make perfusion-imaging analysis faster, so that results used in stroke diagnosis are delivered earlier. Brain perfusion analysis using local Arterial Input Function (AIF) techniques takes a long time to execute due to its heavy computational load. As time is vitally important in the case of acute stroke, reducing analysis time and therefore diagnosis time can reduce the number of brain cells damaged and improve the chances of patient recovery. We present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPU (General Purpose computing on Graphics Processing Units) using the CUDA programming model. Our method aims to accelerate the process without any quality loss. Second, specific features of perfusion source images are used to reduce the impact of noise, which consequently improves the accuracy of hemodynamic maps. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach using Gaussian process regression (GPR) makes use of the temporal information in the perfusion source images to reduce the noise level. Over the entire image, our GPR-based noise reduction method gains a 99% contrast-to-noise ratio improvement over the raw image and also improves the quality of hemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps identify key parameters from tissue time-concentration curves, and reduces oscillations in the curves. Furthermore, the results show that GPR is superior to the alternative techniques compared in this study. Third, my research explores automatic segmentation of perfusion images into potentially healthy areas and lesion areas, which can be used as additional information to assist clinical diagnosis. Since perfusion source images contain more information than hemodynamic maps, good utilisation of the source images leads to better understanding than the hemodynamic maps alone. Correlation coefficient tests are used to measure the similarity between the expected tissue time-concentration curves (from reference tissue) and the measured time-concentration curves (from target tissue). This information is then used to distinguish tissue at risk and dead tissue from healthy tissue. A correlation coefficient based signal analysis method that directly spots suspected lesion areas in the perfusion source images is presented. Our method delivers a clear automatic segmentation of healthy tissue, tissue at risk, and dead tissue. From our segmentation maps, it is easier to identify lesion boundaries than with traditional hemodynamic maps.
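A minimal sketch of the temporal-denoising idea described in this abstract: smoothing one voxel's time-concentration curve with Gaussian process regression, here via scikit-learn. The kernel choice, noise level, and synthetic bolus curve are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.linspace(0, 60, 40)[:, None]                     # acquisition times (s)
curve = np.exp(-0.5 * ((t.ravel() - 25.0) / 6.0) ** 2)  # idealized bolus passage
noisy = curve + 0.1 * np.random.randn(t.size)           # simulated acquisition noise

kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel).fit(t, noisy)
denoised = gpr.predict(t)   # stable per-voxel baseline, as the abstract describes
```

In a real perfusion volume this regression would be repeated for every voxel, which is exactly the kind of workload that motivates the GPU acceleration discussed in the first part of the thesis.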
APA, Harvard, Vancouver, ISO, and other styles
2

Kraljevic, Matija. "Character recognition in natural images : Testing the accuracy of OCR and potential improvement by image segmentation." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187991.

Full text
Abstract:
In recent years, reading text from natural images has gained renewed research attention. One of the main reasons for this is the rapid growth of camera-based applications on smartphones and other portable devices. With the increasing availability of high-performance, low-priced image-capturing devices, the application of scene text recognition is rapidly expanding and becoming increasingly popular. Despite many efforts, character recognition in natural images is still considered a challenging and unresolved problem. The difficulties stem from the fact that natural images suffer from a wide variety of obstacles such as complex backgrounds, font variation, uneven illumination, low resolution, occlusions, and perspective effects, to mention just a few. This paper aims to test the accuracy of OCR in character recognition of natural images, as well as to test the possible improvement in accuracy after implementing three different segmentation methods. The results showed that the accuracy of OCR was very poor and no improvements in accuracy were found after implementing the chosen segmentation methods.
APA, Harvard, Vancouver, ISO, and other styles
3

Ghattas, Andrew Emile. "Medical imaging segmentation assessment via Bayesian approaches to fusion, accuracy and variability estimation with application to head and neck cancer." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5759.

Full text
Abstract:
With the advancement of technology, medical imaging has become a fast growing area of research. Some imaging questions require little physician analysis, such as diagnosing a broken bone using a 2-D X-ray image. More complicated questions, using 3-D scans such as computerized tomography (CT), can be much more difficult to answer; for example, estimating tumor growth to evaluate malignancy, which informs whether intervention is necessary. This requires careful delineation of different structures in the image, for example determining what is tumor versus what is normal tissue; this is referred to as segmentation. Currently, the gold standard of segmentation is for a radiologist to manually trace structure edges in the 3-D image; however, this can be extremely time consuming. Additionally, manual segmentation results can differ drastically between and even within radiologists. A more reproducible, less variable, and more time-efficient segmentation approach would drastically improve medical treatment. This potential, as well as the continued increase in computing power, has led to computationally intensive semiautomated segmentation algorithms. Segmentation algorithms' widespread use is limited due to the difficulty of validating their performance. Fusion models, such as STAPLE, have been proposed as a way to combine multiple segmentations into a consensus ground truth; this allows for evaluation of both manual and semiautomated segmentation in relation to the consensus ground truth. Once a consensus ground truth is obtained, a multitude of approaches have been proposed for evaluating different aspects of segmentation performance: segmentation accuracy and between- and within-reader variability. The focus of this dissertation is threefold. First, a simulation-based tool is introduced to allow for the validation of fusion models. The simulation properties closely follow a real dataset, in order to ensure that they mimic reality. Second, a statistical hierarchical Bayesian fusion model is proposed, in order to estimate a consensus ground truth within a robust statistical framework. The model is validated using the simulation tool and compared to other fusion models, including STAPLE. Additionally, the model is applied to real datasets and the consensus ground truth estimates are compared across different fusion models. Third, a statistical hierarchical Bayesian performance model is proposed in order to estimate segmentation-method-specific accuracy and between- and within-reader variability. An extensive simulation study is performed to validate the model's parameter estimation and coverage properties. Additionally, the model is fit to a real data source and performance estimates are summarized.
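For orientation, a naive consensus baseline for the fusion problem this abstract describes: an unweighted majority vote across raters' binary masks. STAPLE and the hierarchical Bayesian model proposed in the dissertation weight raters by estimated performance instead; this sketch is only the simple reference point such models improve upon.

```python
import numpy as np

def majority_vote(segmentations: np.ndarray) -> np.ndarray:
    """Fuse (n_raters, *volume_shape) binary masks by strict majority."""
    votes = segmentations.sum(axis=0)
    return (2 * votes > segmentations.shape[0]).astype(np.uint8)

raters = np.random.randint(0, 2, size=(5, 64, 64, 64))  # five simulated raters
consensus = majority_vote(raters)                        # naive ground-truth estimate
```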
APA, Harvard, Vancouver, ISO, and other styles
4

Porter, Sarah Ann. "Land cover study in Iowa: analysis of classification methodology and its impact on scale, accuracy, and landscape metrics." Thesis, University of Iowa, 2011. https://ir.uiowa.edu/etd/1169.

Full text
Abstract:
For landscapes dominated by agriculture, land cover plays an important role in the balance between anthropogenic and natural forces. Therefore, the objective of this thesis is to describe two different methodologies that have been implemented to create high-resolution land cover classifications in a dominant agricultural landscape. First, an object-based segmentation approach will be presented, which was applied to historic, high resolution, panchromatic aerial photography. Second, a traditional per-pixel technique was applied to multi-temporal, multispectral, high resolution aerial photography, in combination with light detection and ranging (LIDAR) and independent component analysis (ICA). A critical analysis of each approach will be discussed in detail, as well as the ability of each methodology to generate landscape metrics that can accurately characterize the quality of the landscape. This will be done through the comparison of various landscape metrics derived from the different classifications approaches, with a goal of enhancing the literature concerning how these metrics vary across methodologies and across scales. This is a familiar problem encountered when analyzing land cover datasets over time, which are often at different scales or generated using different methodologies. The diversity of remotely sensed imagery, including varying spatial resolutions, landscapes, and extents, as well as the wide range of spatial metrics that can be created, has generated concern about the integrity of these metrics when used to make inferences about landscape quality. Finally, inferences will be made about land cover and land cover change dynamics for the state of Iowa based on insight gained throughout the process.
APA, Harvard, Vancouver, ISO, and other styles
5

Shrestha, Ujjwal. "Automatic Liver and Tumor Segmentation from CT Scan Images using Gabor Feature and Machine Learning Algorithms." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1522411364001198.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Burgada, Muñoz Santiago. "Improvement on the sales forecast accuracy for a fast growing company by the best combination of historical data usage and clients segmentation." reponame:Repositório Institucional do FGV, 2014. http://hdl.handle.net/10438/13322.

Full text
Abstract:
Industrial companies in developing countries are facing rapid growth, and this requires having in place the best organizational processes to cope with market demand. Sales forecasting, as a tool aligned with the general strategy of the company, needs to be as accurate as possible in order to achieve the sales targets: it makes the right information available for purchasing, planning, and control of production, so that the generated demand can be met on time and in full. The present dissertation uses a single case study from Maxam, the Brazilian subsidiary of an international explosives company, which is experiencing high growth in sales and therefore faces the challenge of adapting its structure and processes to the rapid growth expected. Diverse sales forecast techniques are analyzed to compare the actual monthly sales forecast, based on the sales force representatives' market knowledge, with forecasts based on the analysis of historical sales data. The dissertation findings show how combining qualitative and quantitative forecasts, by creating a blended forecast that considers both the sales force's knowledge of client demand and time series analysis, improves the accuracy of the company's sales forecast.
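A minimal sketch of the forecast blending the abstract describes: a qualitative sales-force estimate averaged against a simple quantitative forecast from historical data. The 50/50 weights and the trend-adjusted moving average are illustrative assumptions; the dissertation derives its own combination.

```python
import numpy as np

history = np.array([120.0, 135, 150, 160, 170, 180])  # monthly sales history
qualitative = 210.0                                   # sales-force estimate for next month

# quantitative forecast: recent mean plus average monthly trend (an assumption)
quantitative = history[-3:].mean() + (history[-1] - history[-4]) / 3.0

combined = 0.5 * qualitative + 0.5 * quantitative     # blended forecast
```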
APA, Harvard, Vancouver, ISO, and other styles
7

Hast, Isak, and Asmelash Mehari. "Automating Geographic Object-Based Image Analysis and Assessing the Methods Transferability : A Case Study Using High Resolution Geografiska SverigedataTM Orthophotos." Thesis, Högskolan i Gävle, Samhällsbyggnad, GIS, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-22570.

Full text
Abstract:
Geographic object-based image analysis (GEOBIA) is an innovative image classification technique that treats spatial features in an image as objects rather than as pixels, thus resembling human perception of geographic space more closely. However, the process of a GEOBIA application allows for multiple interpretations. Particularly sensitive parts of the process include image segmentation and training data selection. The multiresolution segmentation algorithm (MSA) is commonly applied. The performance of segmentation depends primarily on the algorithm's scale parameter, since scale controls the size of the image objects produced. The fact that the scale parameter is unitless makes it a challenge to select a suitable one, leaving the analyst with trial and error and a possible bias. Additionally, apart from segmentation, training area selection usually means that the data has to be collected manually. This is not only time consuming but also prone to subjectivity. In order to overcome these challenges, we tested a GEOBIA scheme that involved automatic methods for MSA scale parameterisation and training area selection, which enabled us to classify images more objectively. Three study areas within Sweden were selected. The data used were high-resolution Geografiska Sverigedata (GSD) orthophotos from the Swedish mapping agency, Lantmäteriet. We objectively found a scale for each classification using a previously published technique embedded as a tool in the eCognition software. Based on the orthophoto inputs, the tool calculated local variance and rate of change at different scales; these figures helped us determine the scale value for the MSA segmentation. Moreover, we developed in this study a novel method for automatic training area selection. The method is based on thresholded feature-statistics layers computed from the orthophoto band derivatives. Thresholds were detected by Otsu's single and multilevel algorithms, as sketched below. The layers were run through a filtering process which left only those fit for use in the classification process. We also tested the transferability of classification rule-sets for two of the study areas. This test helped us investigate the degree to which automation can be realised. In this study we have made progress toward a more objective way of object-based image classification, realised by automating the scheme. Particularly noteworthy is the proposed algorithm for automatic training area selection, which, compared to manual selection, restricts human intervention to a minimum. Results of the classification show overall well-delineated classes; in particular, the border between open area and forest benefited from the elevation data. On the other hand, some challenges still persist in separating deciduous from coniferous forest. Furthermore, although water was accurately classified in most instances, in one of the study areas the water class showed contradictory results between its thematic and positional accuracy, stressing the importance of assessing the result on more than the thematic accuracy. From the transferability test we noted the importance of considering the spatial/spectral characteristics of an area before transferring rule-sets, as these factors are key to determining whether a transfer is possible.
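A minimal sketch of the thresholding step mentioned above: detecting candidate training areas in a feature-statistics layer with Otsu's single and multilevel algorithms, here via scikit-image. The random input layer stands in for an orthophoto band derivative, and the three-class split is an illustrative assumption.

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_multiotsu

layer = np.random.rand(512, 512)            # stand-in feature-statistics layer

binary = layer > threshold_otsu(layer)      # single-level Otsu: foreground mask
levels = threshold_multiotsu(layer, classes=3)
regions = np.digitize(layer, bins=levels)   # multilevel Otsu: 0/1/2 class layer
```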
APA, Harvard, Vancouver, ISO, and other styles
8

Gauci, Marc-Olivier. "Description et classification 3D des glènes arthrosiques pour une planification préopératoire 3D assistée par ordinateur : l'épaule digitale normale et arthrosique Patient-specific glenoid guides provide accuracy and reproducibility in total shoulder arthroplasty, in The Bone & Joint Journal 98-B(8), 2016 A modification to the Walch classification of the glenoid in primary glenohumeral osteoarthritis using three-dimensional imaging, in Journal of Shoulder and Elbow Surgery 25(10), October 2016 Automated three-dimensional measurement of glenoid version and inclination in arthritic shoulders, in the Journal of Bone & Joint Surgery 100(1), January 2018 Proper benefit of a three dimensional pre-operative planning software for glenoid component positioning in total shoulder arthroplasty, in International Orthopaedics 42, 2018 The reverse shoulder arthroplasty angle: a new measurement of glenoid inclination for reverse shoulder arthroplasty, in Journal of Shoulder and Elbow Surgery 28(7), July 2019." Thesis, Brest, 2019. http://www.theses.fr/2019BRES0091.

Full text
Abstract:
Three-dimensional modelling has become more accessible and faster in orthopedics, especially in shoulder surgery, and the resulting morphometric analysis is used to provide a better understanding of shoulder osteoarthritis. The overall objective of this thesis was to validate the use of 3D automated segmentation software in the various steps of patient management. Eight studies validated the automatic measurements calculated by the software, improved the classification of primary glenohumeral osteoarthritis, and described the normal and pathological 3D geometry of the shoulder. Accurate numerical thresholds could be established between the different types. The software was used to develop and validate an angle (the RSA angle) for better positioning of the glenoid implant in reverse shoulder arthroplasty. Simulated 3D range of motion demonstrated the software's value for understanding bone impingement after prosthesis implantation and weaknesses in implant design. Finally, intraoperative positioning of the glenoid implant with a 3D-printed patient-specific guide corresponded faithfully to its preoperative planning; however, planning alone already greatly improved this positioning. This thesis validated the performance and use of three-dimensional segmentation and preoperative planning software. Its application is found in several steps of the management of a patient with shoulder osteoarthritis and should gradually be integrated into surgeons' daily practice.
APA, Harvard, Vancouver, ISO, and other styles
9

Rajan, Rachel. "Semi Supervised Learning for Accurate Segmentation of Roughly Labeled Data." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1597082270750151.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hayrapetyan, Nare. "Adaptive Re-Segmentation Strategies For Accurate Bright Field Cell Tracking." DigitalCommons@USU, 2012. https://digitalcommons.usu.edu/etd/1230.

Full text
Abstract:
Understanding complex interactions in cellular systems requires accurate tracking of individual cells observed in microscopic image sequences acquired from multi-day in vitro experiments. To be effective, methods must follow each cell through the whole experimental sequence to recognize significant phenotypic transitions, such as mitosis, chemotaxis, apoptosis, and cell/cell interactions, and to detect the effect of cell treatments. However, high-accuracy long-range cell tracking is difficult because the detection of cells in images is error-prone, and a single error in one frame can cause a tracked cell to be lost. Detection of cells is especially difficult in bright field microscopy images, in which the contrast between the cells and the background is very low. This work introduces a new method that automatically identifies and then corrects tracking errors using a combination of combinatorial registration, flow constraints, and image segmentation repair.
APA, Harvard, Vancouver, ISO, and other styles
11

Bilgin, Arda. "Selection And Fusion Of Multiple Stereo Algorithms For Accurate Disparity Segmentation." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610133/index.pdf.

Full text
Abstract:
Fusion of multiple stereo algorithms is performed in order to obtain accurate disparity segmentation. A reliable disparity map of real-time stereo images is estimated and disparity segmentation is performed for object detection purposes. First, stereo algorithms which have high performance in real-time applications are chosen from among the algorithms in the literature and three of them are implemented. Then, the results of these algorithms are fused to gain better performance in disparity estimation. In the fusion process, if a pixel has the same disparity value in all algorithms, that disparity value is assigned to the pixel; other pixels are labelled as unknown disparity. Unknown disparity values are then estimated by a refinement procedure that uses neighbourhood disparity information. Finally, the resultant disparity map is segmented by using mean shift segmentation. The proposed method is tested on three different stereo data sets and several real stereo pairs. The experimental results indicate an improvement in stereo analysis performance due to the fusion process and refinement procedure. Furthermore, disparity segmentation is realized successfully by using mean shift segmentation for detecting objects at different depth levels.
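A minimal sketch of the fusion rule this abstract states: keep a disparity only where all algorithms agree, mark the rest unknown, then fill unknowns from known neighbours. The median-of-neighbours refinement is an assumption standing in for the thesis's actual refinement procedure.

```python
import numpy as np

UNKNOWN = -1

def fuse(disparities: np.ndarray) -> np.ndarray:
    """disparities: (n_algorithms, H, W) integer disparity maps."""
    agree = np.all(disparities == disparities[0], axis=0)
    return np.where(agree, disparities[0], UNKNOWN)

def refine(fused: np.ndarray) -> np.ndarray:
    out = fused.copy()
    for y, x in zip(*np.where(fused == UNKNOWN)):
        patch = fused[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        known = patch[patch != UNKNOWN]
        if known.size:
            out[y, x] = int(np.median(known))   # neighbourhood disparity information
    return out

maps = np.random.randint(0, 32, size=(3, 48, 64))   # three simulated algorithms
result = refine(fuse(maps))
```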
APA, Harvard, Vancouver, ISO, and other styles
12

Ferreira, Filipa. "Automatic and accurate segmentation of thoratic aortic aneurysms from X-ray CT angiography." Thesis, Kingston University, 2012. http://eprints.kingston.ac.uk/26293/.

Full text
Abstract:
The scope of this dissertation is to propose a novel, fully automated computer-aided detection and measurement (CAD/CAM) system for thoracic aortic aneurysms. More explicitly, the objective of the algorithm is to segment the thoracic aorta as accurately as possible and to detect possible existing aneurysms in Computed Tomography Angiography (CTA) images. In biomedical imaging, the manual examination and analysis of aortic aneurysms is a particularly laborious and time-consuming undertaking. Humans are susceptible to committing errors, and their analysis is usually subjective and qualitative due to inter- and intra-observer variability. Objective and quantitative analysis facilitated by the application developed in this project leads to a more accurate diagnostic decision by the physician. In this context, the project is concerned with the automatic analysis of thoracic aneurysms from CTA images. The project initially examines the theoretical background of the anatomy of the aorta and aneurysms. The concepts of image segmentation and, in particular, vessel segmentation methods are reviewed. An algorithm is then developed and implemented, such that it conforms to the requirements put forth in the stated objectives. For purposes of testing the proposed approach, a significant number of 3D clinical CTA datasets of the thoracic aorta form the framework of the CAD/CAM system, followed by presentation and discussion of the results. The system has been validated on a clinical dataset of 30 CTA scans, of which 28 contained aneurysms. There were 30 CTA scans used as a training dataset for parameter selection and another 30 CTA scans used as a test dataset, in total 60 for clinical evaluation. The radiologist visually inspected the CAD and CAM component results and confirmed that the system correctly detected and segmented the TAA on all datasets, demonstrating 100% sensitivity. We were able to conclude that there is distinct potential for use of our fully automated CAD/CAM system in a real clinical setting. Although CAD/CAM systems have been developed for other organs and even for small sections of the thoracic aorta, to date no fully automated CAD/CAM system for the entire thoracic aorta had been developed, hence its novelty. The proposed CAD/CAM system is integrated into a Medical Image Processing, Seamless and Secure Sharing Platform (MIPS3), a user-friendly interface developed alongside this project.
APA, Harvard, Vancouver, ISO, and other styles
13

Guest, Ian. "Digital video moving object segmentation using tensor voting: A non-causal, accurate approach." Doctoral thesis, University of Cape Town, 2009. http://hdl.handle.net/11427/5209.

Full text
Abstract:
Motion-based video segmentation is important in many video processing applications such as MPEG-4. This thesis presents an exhaustive, non-causal method to estimate boundaries between moving objects in a video clip. It makes use of tensor voting principles. The tensor voting is adapted to allow image structure to manifest in the tangential plane of the saliency map. The technique allows direct estimation of motion vectors from second-order tensor analysis. The tensors make maximal and direct use of the available information by encoding it into the dimensionality of the tensor. The tensor voting methodology introduces a non-symmetrical voting kernel to allow a measure of voting skewness to be inferred. Skewness is found in the third-order tensor in the direction of the tangential first eigenvector. This new concept is introduced as the Tensor Skewness Map, or TS map. The TS map gives further information about whether an object is occluding or disoccluding another object; this information can be used to infer the layering order of the moving objects in the video clip. Matched filtering and detection are applied to reduce the TS map into occluding and disoccluding detections. The technique is computationally exhaustive, but may find use in off-line video object segmentation processes. The use of commercial off-the-shelf graphics processing units is demonstrated to scale well to the tensor voting framework, providing the computational speed improvement required to make the framework realisable on a larger scale and to handle tensor dimensionalities higher than before.
APA, Harvard, Vancouver, ISO, and other styles
14

Gorgi, Zadeh Shekoufeh [Verfasser]. "Fast, Accurate and Steerable Segmentation of Drusen in Optical Coherence Tomography / Shekoufeh Gorgi Zadeh." Bonn : Universitäts- und Landesbibliothek Bonn, 2020. http://d-nb.info/1219140244/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Fuchs, Patrick [author], and Christoph [academic supervisor] Garbe. "Efficient and Accurate Segmentation of Defects in Industrial CT Scans / Patrick Fuchs ; Betreuer: Christoph Garbe." Heidelberg : Universitätsbibliothek Heidelberg, 2021. http://d-nb.info/1230475885/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Kong, Longbo. "Accurate Joint Detection from Depth Videos towards Pose Analysis." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1157524/.

Full text
Abstract:
Joint detection is vital for characterizing human pose and serves as a foundation for a wide range of computer vision applications such as physical training, health care, and entertainment. This dissertation proposes two methods to detect joints in the human body for pose analysis. The first method detects joints by combining a body model with automatic feature point detection. The human body model maps the detected extreme points to the corresponding body parts of the model and detects the position of implicit joints. The dominant joints are detected with a shortest-path-based method after the implicit joints and extreme points have been located. The main contribution of this work is a hybrid framework for detecting joints on the human body that is robust to different body shapes and proportions, pose variations, and occlusions. Another contribution of this work is the idea of using geodesic features of the human body to build a model for guiding human pose detection and estimation. The second method first segments the human body into parts and then detects joints by making the detection algorithm focus on each limb. The advantage of applying body part segmentation first is that it narrows down the search area for each joint, so that the joint detection method can provide more stable and accurate results.
APA, Harvard, Vancouver, ISO, and other styles
17

Bílý, Ondřej. "Moderní řečové příznaky používané při diagnóze chorob [Modern speech features used in disease diagnosis]." Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218971.

Full text
Abstract:
This work deals with the diagnosis of Parkinson's disease by analyzing the speech signal. The beginning of the work describes speech signal production, followed by a description of speech signal analysis, its preparation, and subsequent feature extraction. Parkinson's disease and the changes it causes in the speech signal are then described, along with the features used for its diagnosis (FCR, VSA, VOT, etc.). Another part of the work deals with feature selection and reduction using learning algorithms (SVM, ANN, k-NN) and their subsequent evaluation. The last part of the thesis describes a program for computing the features; finally, the feature selection is described and all results are evaluated.
APA, Harvard, Vancouver, ISO, and other styles
18

Blasse, Corinna. "Towards Accurate and Efficient Cell Tracking During Fly Wing Development." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-214923.

Full text
Abstract:
Understanding the development, organization, and function of tissues is a central goal in developmental biology. With modern time-lapse microscopy, it is now possible to image entire tissues during development and thereby localize subcellular proteins. A particularly productive area of research is the study of single layer epithelial tissues, which can be simply described as a 2D manifold. For example, the apical band of cell adhesions in epithelial cell layers actually forms a 2D manifold within the tissue and provides a 2D outline of each cell. The Drosophila melanogaster wing has become an important model system, because its 2D cell organization has the potential to reveal mechanisms that create the final fly wing shape. Other examples include structures that naturally localize at the surface of the tissue, such as the ciliary components of planarians. Data from these time-lapse movies typically consists of mosaics of overlapping 3D stacks. This is necessary because the surface of interest exceeds the field of view of today's microscopes. To quantify cellular tissue dynamics, these mosaics need to be processed in three main steps: (a) extracting, correcting, and stitching individual stacks into a single, seamless 2D projection per time point, (b) obtaining cell characteristics that occur at individual time points, and (c) determining cell dynamics over time. It is therefore necessary that the applied methods are capable of handling large amounts of data efficiently, while still producing accurate results. This task is made especially difficult by the low signal-to-noise ratios that are typical in live-cell imaging. In this PhD thesis, I develop algorithms that cover all three processing tasks mentioned above and apply them in the analysis of polarity and tissue dynamics in large epithelial cell layers, namely the Drosophila wing and the planarian epithelium. First, I introduce an efficient pipeline that preprocesses raw image mosaics. This pipeline accurately extracts the stained surface of interest from each raw image stack and projects it onto a single 2D plane. It then corrects uneven illumination, aligns all mosaic planes, and adjusts brightness and contrast before finally stitching the processed images together. This preprocessing not only significantly reduces the data quantity, but also simplifies downstream data analyses. Here, I apply this pipeline to datasets of the developing fly wing as well as a planarian epithelium. I additionally address the problem of determining cell polarities in chemically fixed samples of planarians. Here, I introduce a method that automatically estimates cell polarities by computing the orientation of rootlets in motile cilia. With this technique one can for the first time routinely measure and visualize how tissue polarities are established and maintained in entire planarian epithelia. Finally, I analyze cell migration patterns in the entire developing wing tissue in Drosophila. At each time point, cells are segmented using a progressive merging approach with merging criteria that take typical cell shape characteristics into account. The method enforces biologically relevant constraints to improve the quality of the resulting segmentations. For cases where a full cell tracking is desired, I introduce a pipeline using a tracking-by-assignment approach. This allows me to link cells over time while considering critical events such as cell divisions or cell death.
This work presents a very accurate large-scale cell tracking pipeline and opens up many avenues for further study, including several in-vivo perturbation experiments as well as biophysical modeling. The methods introduced in this thesis are examples of computational pipelines that catalyze biological insights by enabling the quantification of tissue-scale phenomena and dynamics. I provide not only detailed descriptions of the methods, but also show how they perform on concrete biological research projects.
APA, Harvard, Vancouver, ISO, and other styles
19

Dutailly, Bruno. "Plongement de surfaces continues dans des surfaces discrètes épaisses." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0444/document.

Full text
Abstract:
In the context of the archaeological sciences, 3D images produced by computed tomography scanners are segmented into regions of interest corresponding to virtual objects in order to support scientific analysis. These virtual objects are often used for the purpose of performing accurate measurements, and some of these analyses require extracting the surface of the regions of interest. This PhD falls within this framework and aims to improve the accuracy of surface extraction. We present in this document our contributions: first of all, the weighted HMH algorithm, whose objective is to position a point precisely at the interface between two materials. Applied to surface extraction, however, this method often leads to topology problems on the resulting surface. We therefore proposed two other methods: the discrete HMH method, which refines the 3D object segmentation, and the surface HMH method, which performs a constrained surface extraction that guarantees a topologically correct surface. It is possible to chain these two methods on a pre-segmented 3D image in order to obtain a precise surface extraction of the objects of interest. These methods were evaluated on simulated CT-scan acquisitions of synthetic objects and real acquisitions of archaeological artefacts.
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Chien-Ho, and 王健合. "Improve Object Segmentation Accuracy using Modified U-net." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5394084%22.&searchmode=basic.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
Academic year 107 (2018-19)
In recent years, with the rapid development of science and technology, artificial intelligence has become a representative technology of the 21st century. AI products are frequently applied to visual image processing to identify various kinds of objects, and medical images are a typical example. Cell identification in medical images can be divided into traditional morphological processing methods and modern methods that incorporate artificial intelligence. Traditional morphological processing suffers from a threshold selection problem: even for the same cell image, the binarization threshold varies, and it is difficult to find one threshold that fits the whole picture, so recognition accuracy on medical images is usually not very high. With the development of artificial intelligence, examples of using deep learning to process visual images are now appearing everywhere, and medical imaging is no exception. In medical image processing, convolutional neural networks (CNNs) are often used to capture the characteristics of the target. CNNs are neural networks that have performed well in large-scale image processing and are widely used for image recognition, for example in Google's image search. The advent of convolutional neural networks has addressed the shortcomings of traditional medical image processing methods. In this paper, we take the U-net [1], which is often used to process medical images, modify its architecture, and add concepts from GoogLeNet and residual learning, which have become popular in recent years; our architecture improves accuracy by about 1%. In addition, we use this architecture to process medical images and finally complete automated cell counting with OpenCV.
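A minimal PyTorch sketch of the kind of residual block the abstract says was added to U-net. The exact architecture is not given in the abstract, so the channel count and layer order are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)   # skip connection: residual learning

block = ResidualBlock(64)
out = block(torch.randn(1, 64, 128, 128))     # spatial size is preserved
```

Such a block can stand in for the plain double-convolution blocks of a standard U-net encoder or decoder stage.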
APA, Harvard, Vancouver, ISO, and other styles
21

Tsang, Che-Yuan, and 臧哲遠. "Use Segmentation Corpus to Extend Chinese Treebank and Improve Parser Accuracy." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/833r2c.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
Academic year 106 (2017-18)
In the 1990s, English parse tree databases (treebanks) were successively established, and a Chinese treebank was established in Taiwan in 1997. With the development of natural language processing, applications built on treebanks are booming. However, sparsity problems arise in related research due to the small number of Chinese parse trees in the existing treebank. This thesis is divided into two parts. The first part explores the use of a Chinese parser and a segmentation corpus to extend the treebank. Without manual annotation, a large number of Chinese sentences are parsed into Chinese parse trees using a parser, and a verification system filters out the unqualified trees; the present study thereby expands the number of Chinese parse trees to address the sparsity problem. The second part uses the extended parse trees from the first part for further research: a new probabilistic rule model is proposed to improve the accuracy of the Chinese parser. Finally, the LF score of the HM_noCount1 model is 85.23%, and the BF score of the HM_Head_noCount1 model is 89.68%. The average number of rules increases from 7.6 to 263.5, and the noCount1 model is confirmed to greatly reduce the sparsity problems.
APA, Harvard, Vancouver, ISO, and other styles
22

Lu, I.-Fan, and 盧奕帆. "High Accuracy and High Robust Natural Image Segmentation Algorithm without Parameter Adjusting." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/70896203244034257617.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
Academic year 103 (2014-15)
In computer vision and image processing, image segmentation has always been an important fundamental task. Although this topic has been researched for many years, it is still challenging to segment most natural images well, automatically and without adjusting any parameter. Recently, research on superpixels has made great progress; this new technique makes traditional segmentation algorithms more efficient and improves their performance. In this thesis, an automatic image segmentation algorithm based on superpixels and several other techniques is proposed. It can accurately segment almost all natural images without parameter adjustment. In our algorithm, the techniques of entropy rate superpixels (ERSs), edge detection, saliency detection, and texture feature computation are adopted. With the aid of ERSs, the proposed algorithm can be implemented very efficiently. To prevent over-merging of superpixels, a modified edge detection that computes the gradient information of the contours and the interiors of superpixels is used. Saliency detection and the texture features of an image are also used to prevent over-segmentation. Moreover, an adaptive threshold is used for superpixel merging. These techniques make the segmentation result more consistent with human perception without adjusting any parameter. Simulations show that our proposed method can segment most natural images well and outperforms state-of-the-art methods.
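A minimal sketch of superpixel generation followed by threshold-based merging, here with scikit-image (version 0.20+ for `skimage.graph`). SLIC stands in for the entropy rate superpixels used in the thesis, and the fixed merge threshold stands in for its adaptive one; both substitutions are assumptions.

```python
from skimage import data, graph
from skimage.segmentation import slic

img = data.astronaut()                              # sample RGB image
labels = slic(img, n_segments=400, compactness=10)  # superpixels
rag = graph.rag_mean_color(img, labels)             # region adjacency graph
merged = graph.cut_threshold(labels, rag, 29)       # merge similar neighbours
```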
APA, Harvard, Vancouver, ISO, and other styles
23

Huang, Wan-ling, and 黃琬玲. "Enhancing the Accuracy of Long Sentence in Simultaneous Interpreting through Sight Translation Segmentation Skill." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/37290340292113247404.

Full text
Abstract:
Master's thesis
National Changhua University of Education
Graduate Institute of Translation and Interpretation
Academic year 101 (2012-13)
Sight translation (ST) has long been regarded as a warm-up exercise prior to simultaneous interpreting (SI) in interpreter training programs, but the discussion of its importance is still limited to professional interpreters, interpreter trainers, and translation and interpreting scholars. Empirical research on the benefit of sight translation practice for simultaneous interpreting remains under-researched and under-discussed. There are three major skills in ST: translating according to the syntactic order, segmentation, and coordination. Among them, segmentation is the most important skill in sight translating long sentences. This study investigates the benefit of the sight translation segmentation skill for simultaneous interpreting and attempts to find out to what extent the skill of segmentation in ST transfers to SI, and in what way ST segmentation enhances the accuracy of long-sentence interpreting during SI. The experiment involved 12 interpreting students with Chinese A and English B. It encompassed two stages, in which subjects interpreted from English into Chinese during a first round of ST and a second round of SI. After finishing both tasks, subjects filled in a post-task questionnaire. Based on their ST performance, subjects were divided into the more competent Group A and the less competent Group B so as to compare long-sentence interpreting performance between groups. The findings suggest that the more competent Group A cut sentences more consistently, producing fewer segments that formed more complete syntactic entities, leading to a lower omission rate and higher accuracy; the less competent Group B cut sentences more spontaneously, producing more segments that formed less complete syntactic entities, resulting in a higher omission rate and lower accuracy. Finally, based on the research findings, it is hoped that more attention will be paid to segmentation in ST training programs, which in turn can enhance interpreting students' SI performance on long sentences.
APA, Harvard, Vancouver, ISO, and other styles
24

Das, Tanmoy. "Land use / land cover change detection: an object oriented approach, Münster, Germany." Master's thesis, 2009. http://hdl.handle.net/10362/2532.

Full text
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Land use / land cover (LULC) change detection based on remote sensing (RS) data has been established as an indispensable tool for providing suitable and wide-ranging information to decision support systems for natural resource management and sustainable development. LULC change is one of the major influencing factors for landscape change. Although many change detection techniques have been developed over the decades, in practice it is still difficult to select a suitable method, especially for urban and urban-fringe areas, where complex factors interact and rapid changes occur from rural land uses to residential, commercial, industrial, and recreational uses. Although these changes can be monitored using several RS techniques, adopting a technique that represents the changes accurately is a challenging task, and RS-based analysis of LULC change detection faces a number of challenges. This study applies an object-oriented (OO) method for mapping LULC and performs change detection analysis using a post-classification technique. (...)
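A minimal sketch of post-classification change detection as described above: cross-tabulating two classified rasters into a from-to change matrix. The class count and random rasters are illustrative assumptions.

```python
import numpy as np

t1 = np.random.randint(0, 4, size=(100, 100))    # classes at time 1
t2 = np.random.randint(0, 4, size=(100, 100))    # classes at time 2

n_classes = 4
change = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(change, (t1.ravel(), t2.ravel()), 1)   # rows: from-class, cols: to-class
unchanged_fraction = np.trace(change) / t1.size
```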
APA, Harvard, Vancouver, ISO, and other styles
25

"Segmentation based variational model for accurate optical flow estimation." 2009. http://library.cuhk.edu.hk/record=b5894018.

Full text
Abstract:
Chen, Jianing.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 47-54).
Abstract also in Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Background --- p.1
Chapter 1.2 --- Related Work --- p.3
Chapter 1.3 --- Thesis Organization --- p.5
Chapter 2 --- Review on Optical Flow Estimation --- p.6
Chapter 2.1 --- Variational Model --- p.6
Chapter 2.1.1 --- Basic Assumptions and Constraints --- p.6
Chapter 2.1.2 --- More General Energy Functional --- p.9
Chapter 2.2 --- Discontinuity Preserving Techniques --- p.9
Chapter 2.2.1 --- Data Term Robustification --- p.10
Chapter 2.2.2 --- Diffusion Based Regularization --- p.11
Chapter 2.2.3 --- Segmentation --- p.15
Chapter 2.3 --- Chapter Summary --- p.15
Chapter 3 --- Segmentation Based Optical Flow Estimation --- p.17
Chapter 3.1 --- Initial Flow --- p.17
Chapter 3.2 --- Color-Motion Segmentation --- p.19
Chapter 3.3 --- Parametric Flow Estimating Incorporating Segmentation --- p.21
Chapter 3.4 --- Confidence Map Construction --- p.24
Chapter 3.4.1 --- Occlusion detection --- p.24
Chapter 3.4.2 --- Pixel-wise motion coherence --- p.24
Chapter 3.4.3 --- Segment-wise model confidence --- p.26
Chapter 3.5 --- Final Combined Variational Model --- p.28
Chapter 3.6 --- Chapter Summary --- p.28
Chapter 4 --- Experiment Results --- p.30
Chapter 4.1 --- Quantitative Evaluation --- p.30
Chapter 4.2 --- Warping Results --- p.34
Chapter 4.3 --- Chapter Summary --- p.35
Chapter 5 --- Application - Single Image Animation --- p.37
Chapter 5.1 --- Introduction --- p.37
Chapter 5.2 --- Approach --- p.38
Chapter 5.2.1 --- Pre-Process Stage --- p.39
Chapter 5.2.2 --- Coordinate Transform --- p.39
Chapter 5.2.3 --- Motion Field Transfer --- p.41
Chapter 5.2.4 --- Motion Editing and Apply --- p.41
Chapter 5.2.5 --- Gradient-domain composition --- p.42
Chapter 5.3 --- Experiments --- p.43
Chapter 5.3.1 --- Active Motion Transfer --- p.43
Chapter 5.3.2 --- Animate Stationary Temporal Dynamics --- p.44
Chapter 5.4 --- Chapter Summary --- p.45
Chapter 6 --- Conclusion --- p.46
Bibliography --- p.47
APA, Harvard, Vancouver, ISO, and other styles
26

Lien, I.-Chan, and 連翊展. "AILIS: An Adaptive and Iterative Learning Method for Accurate Iris Segmentation." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/74429231465696846317.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Software Engineering
Academic year 104 (2015-16)
Iris segmentation is one of the most important pre-processing stages of an iris recognition system: the quality of iris segmentation results dictates iris recognition performance. In the past, methods that are either learning-based (for example, neural networks) or non-learning-based (for example, the Hough transform) have been proposed to deal with this topic. However, there is no objective and quantitative figure of merit for the quality assessment of iris segmentation (to judge whether a segmentation hypothesis is accurate or not); most existing works evaluated their iris segmentation quality by human inspection. In this work, we propose KIRD, a mechanism to fairly judge the correctness of iris segmentation hypotheses. On the foundation of KIRD, we propose AILIS, an adaptive and iterative learning method for iris segmentation. AILIS is able to learn from past experience and automatically build machine-learning models for segmenting both gray-scale and colored iris images. Experimental results show that, without any prior training, AILIS can successfully perform iris segmentation on ICE (gray-scale images) and UBIRIS (colored images) with accuracy rates of 99.39% and 94.60%, respectively. Large-scale iris recognition experiments based on AILIS segmentation hypotheses also validated its effectiveness compared to the state-of-the-art algorithm.
APA, Harvard, Vancouver, ISO, and other styles
27

Al-Waisy, Alaa S., Rami S. R. Qahwaji, Stanley S. Ipson, and Shumoos Al-Fahdawi. "A Fast and Accurate Iris Localization Technique for Healthcare Security System." 2015. http://hdl.handle.net/10454/16599.

Full text
Abstract:
In health care systems, a high security level is required to protect extremely sensitive patient records. The goal is to provide secure access to the right records at the right time with high patient privacy. As the most accurate biometric modality, iris recognition can play a significant role in healthcare applications for accurate patient identification. In this paper, the cornerstone of building a fast and robust iris recognition system for healthcare applications is addressed: iris localization. Iris localization is an essential step for efficient iris recognition systems, and the presence of extraneous features such as eyelashes, eyelids, the pupil, and reflection spots makes correct iris localization challenging. In this paper, an efficient and automatic method is presented for localizing the inner and outer iris boundaries. The inner pupil boundary is detected after eliminating specular reflections using a combination of thresholding and morphological operations. Then, the outer iris boundary is detected using the modified circular Hough transform. An efficient preprocessing procedure is proposed to enhance the iris boundary by applying a 2D Gaussian filter and histogram equalization. In addition, the pupil's parameters (e.g. radius and center coordinates) are employed to reduce the search time of the Hough transform by discarding unnecessary edge points within the iris region. Finally, a robust and fast eyelid detection algorithm is developed which employs an anisotropic diffusion filter with the Radon transform to fit the upper and lower eyelid boundaries. The performance of the proposed method is tested on two databases: CASIA Version 1.0 and the SDUMLA-HMT iris database. The experimental results demonstrate the efficiency of the proposed method. Moreover, a comparative study with other established methods is also carried out.
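A minimal sketch of the outer-boundary step described above: a circular Hough transform over an edge map, restricted to a plausible radius range (via scikit-image). The radius range, Canny parameters, and random stand-in image are illustrative assumptions.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

eye = np.random.rand(240, 320)        # stand-in grayscale eye image
edges = canny(eye, sigma=2.0)
radii = np.arange(80, 130, 2)         # plausible iris radii in pixels
accum = hough_circle(edges, radii)
_, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=1)
```

Restricting `radii` (for example, from previously detected pupil parameters) is what cuts the Hough search time, as the abstract notes.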
APA, Harvard, Vancouver, ISO, and other styles
28

Al-Fahdawi, Shumoos, Rami S. R. Qahwaji, Alaa S. Al-Waisy, and Stanley S. Ipson. "An automatic corneal subbasal nerve registration system using FFT and phase correlation techniques for an accurate DPN diagnosis." 2015. http://hdl.handle.net/10454/16601.

Full text
Abstract:
Confocal microscopy is employed as a fast and non-invasive way to capture a sequence of images from different layers and membranes of the cornea. The captured images are used to extract useful clinical information for the early diagnosis of corneal diseases such as diabetic peripheral neuropathy (DPN). In this paper, an automatic corneal subbasal nerve registration system is proposed. The main aim of the proposed system is to produce a new, informative corneal image that contains structural and functional information. In addition, a colour-coded corneal image map is produced by overlaying a sequence of corneal confocal microscopy (CCM) images that differ in displacement, illumination, scaling, and rotation. An automatic image registration method is proposed based on combining the advantages of the fast Fourier transform (FFT) and phase correlation techniques. The proposed registration algorithm searches for the best common features between a number of sequenced CCM images in the frequency domain to produce the informative image map. In this generated map, each colour represents the severity level of a specific clinical feature, which can give ophthalmologists a clear and precise representation of the clinical features extracted from each nerve in the image map. Moreover, successful implementation of the proposed system and the availability of the required datasets open the door for other interesting ideas; for instance, it can be used to give ophthalmologists a summarized and objective description of a diabetic patient's health status using a sequence of CCM images captured with different imaging devices and/or at different times.
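A minimal sketch of FFT-based phase correlation, the translation-estimation idea the abstract combines with other techniques. The inputs are stand-in images; subpixel refinement and rotation/scale handling are omitted.

```python
import numpy as np

def phase_correlation(a: np.ndarray, b: np.ndarray):
    """Estimate the integer (dy, dx) shift of b relative to a."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)        # cross-power spectrum
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real   # normalized correlation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2: dy -= a.shape[0]           # unwrap to signed shifts
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

img = np.random.rand(128, 128)
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlation(img, shifted))                  # ~ (5, -3)
```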
APA, Harvard, Vancouver, ISO, and other styles