Theses on the topic "Image processing Digital techniques"




Consult the top 50 theses for your research on the topic "Image processing Digital techniques".

Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Bibby, Geoffrey Thomas. "Digital image processing using parallel processing techniques". Thesis, Liverpool John Moores University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304539.

2

Cloete, Eric. "Nonlinear smoothers for digital image processing". Thesis, Cape Technikon, 1997. http://hdl.handle.net/20.500.11838/2073.

Abstract
Thesis (DTech(Business Informatics))--Cape Technikon, Cape Town, 1997
Modern applications in computer graphics and telecommunications demand that high-performance filtering and smoothing be implemented. The recent development of a new class of max-min selectors for digital image processing is investigated with special emphasis on the practical implications for hardware and software design.
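The selector idea behind such max-min smoothers can be sketched in a few lines. The following is an illustrative toy only (the `maxmin_smooth` name and the midrange rule are assumptions of this sketch, not the operators studied in the thesis):

```python
def maxmin_smooth(signal, radius=1):
    """Replace each sample by the midpoint of the local max and min.

    Unlike a linear moving average, a selector-based smoother never
    produces values outside the local sample range, which is why this
    family of filters is attractive for edge-preserving smoothing.
    """
    n = len(signal)
    out = []
    for i in range(n):
        lo = max(0, i - radius)          # clamp window at the borders
        hi = min(n, i + radius + 1)
        window = signal[lo:hi]
        out.append((max(window) + min(window)) / 2)
    return out
```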
3

Seemann, Torsten 1973. "Digital image processing using local segmentation". Monash University, School of Computer Science and Software Engineering, 2002. http://arrow.monash.edu.au/hdl/1959.1/8055.

4

Marokkey, Sajan Raphael. "Digital techniques for dynamic visualization in photomechanics". Thesis, Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B14670896.

5

Musoke, David. "Digital image processing with the Motorola 56001 digital signal processor". Scholarly Commons, 1992. https://scholarlycommons.pacific.edu/uop_etds/2236.

Abstract
This report describes the design and testing of the Image56 system, an IBM-AT based system which consists of an analog video board and a digital board. The former contains all analog and video support circuitry to perform real-time image processing functions. The latter is responsible for performing non real-time, complex image processing tasks using a Motorola DSP56001 digital signal processor. It is supported by eight image data buffers and 512K words of DSP memory (see Appendix A for schematic diagram).
6

Chen, Shuangli. "Application of digital image processing techniques to multiphase flow measurements". Thesis, Heriot-Watt University, 1993. http://hdl.handle.net/10399/1462.

7

Hertz, Lois. "Robust image thresholding techniques for automated scene analysis". Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/15050.

8

Lambert, T. "Digital Enhancement Techniques for Underwater Video Image Sequences". Honours thesis, University of Tasmania, 2005. https://eprints.utas.edu.au/253/1/tristanlthesis.pdf.

Abstract
Due to concern about the current state of the world's oceans, several large scale scientific projects have begun to investigate the condition of our oceans. These projects are making use of underwater video sequences to monitor marine species. The move to using underwater video monitoring introduces labour intensive manual processing techniques. This leads to the need for an automated system capable of processing the data at a much greater speed. This project investigated whether the development of suitable image processing techniques could be used for pre-processing underwater images from a fish farm and locating fish within these images using computer vision techniques. Using underwater images leads to some serious problems when compared to images from a clearer environment. Visibility in an underwater environment is poor, even when using state of the art equipment. After reviewing the broad field of computer vision and current underwater projects, an image pre-processing system was developed in MATLAB using suitable image processing and analysis techniques. The application developed was able to successfully locate an acceptable number of fish within the underwater images. The project demonstrated that automated analysis of underwater video images is needed and is possible. Automatic processing of large quantities of video image sequences will be of great benefit in the future. It will allow scientific researchers to study the ocean environment and its species more effectively. Pre-processing is an essential component of the overall process that will lead to automation of underwater video data analysis for marine science applications.
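The pre-processing stage described above was built in MATLAB and is not reproduced here; as a hedged sketch of one typical first step for low-visibility underwater frames, a percentile-based contrast stretch might look like this in Python (the function name and percentile choices are illustrative assumptions):

```python
def contrast_stretch(gray, low_pct=5, high_pct=95):
    """Stretch the grey levels between two percentiles to [0, 255].

    Clipping a small fraction of outliers at each end usually gives a
    more robust stretch than using the raw min/max of a noisy frame.
    """
    ranked = sorted(gray)
    n = len(ranked)
    lo = ranked[max(0, n * low_pct // 100)]
    hi = ranked[min(n - 1, n * high_pct // 100)]
    if hi == lo:                       # flat image: nothing to stretch
        return list(gray)
    scale = 255 / (hi - lo)
    return [min(255, max(0, round((p - lo) * scale))) for p in gray]
```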
9

Yau, Chin-ko and 游展高. "Super-resolution image restoration from multiple decimated, blurred and noisy images". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30292529.

10

Arrowood, Joseph Louis Jr. "Theory and application of adaptive filter banks". Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/15369.

11

Ng, King-to and 吳景濤. "Compression techniques for image-based representations". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B31244646.

12

Pienkowski, A. "Fuzzy techniques in digital image processing for artificial colour matching". Thesis, University of Essex, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383526.

13

Hamed, Mahmoud S. "Film and video restoration using nonlinear digital image processing techniques". Thesis, University of Strathclyde, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400321.

14

Zhu, Yong. "Digital signal and image processing techniques for ultrasonic nondestructive evaluation". Thesis, City University London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336431.

15

Gong, Rong Hua. "Digital image processing techniques for the investigation of nonwoven structures". Thesis, University of Manchester, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488170.

Abstract
Digital image processing techniques have been used to analyse structural and geometrical properties of nonwoven fabrics. The development of automatic measurement methods for pore size and fibre orientation distribution has been the main focus. The measurement algorithms, together with some preliminary testing results, are discussed in this thesis. Pore size distribution is a crucial design criterion in many industrial, particularly geotechnical applications of textiles and an important product specification parameter. The orientation of fibres in nonwoven fabrics is one of the determinant factors of the physical behaviour of the fabrics. Reviews of literature have indicated that the existing methods of determination of these two properties are laborious, time consuming and, in some cases, controversial. The automatic methods described herein enable accurate evaluation of pore sizes and fibre orientation to be achieved far more quickly and economically. The system has the advantage of being fully programmable, thus will be available for other work. A review of the work was presented to the UMIST Nonwoven Conference, June 1988.
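A much-simplified stand-in for the kind of automatic fibre-orientation measurement the thesis develops is a histogram of local gradient orientations (the function below and its finite-difference gradient are assumptions of this sketch, not the thesis's algorithm; note the gradient direction is perpendicular to the fibre direction):

```python
import math

def orientation_histogram(image, bins=4):
    """Histogram of local gradient orientations in [0, 180) degrees.

    Gradients are estimated with forward finite differences; each
    non-zero gradient votes into one of `bins` orientation bins.
    """
    hist = [0] * bins
    rows, cols = len(image), len(image[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            gx = image[r][c + 1] - image[r][c]
            gy = image[r + 1][c] - image[r][c]
            if gx == 0 and gy == 0:
                continue               # flat region: no orientation
            angle = math.degrees(math.atan2(gy, gx)) % 180
            hist[int(angle / 180 * bins) % bins] += 1
    return hist
```

A strongly anisotropic texture (e.g. vertical stripes) concentrates nearly all votes in a single bin, which is exactly the signal an orientation-distribution measurement looks for.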
16

Hu, Yongtao y 胡永涛. "Multimodal speaker localization and identification for video processing". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/212633.

17

Liu, Sam J. "Low bit-rate image and video compression using adaptive segmentation and quantization". Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/14850.

18

Suen, Tsz-yin Simon y 孫子彥. "Curvature domain stitching of digital photographs". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38800901.

19

Abdulhadi, Abdulwanis Abdalla. "Evaluation of spot welding electrodes using digital image processing and image segmentation techniques". Thesis, Liverpool John Moores University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.589998.

Abstract
The image segmentation algorithm is the most challenging step, requiring more computer processing power than the boundary filtering and Cullen et al.'s method that are used with it to determine the electrode tip width automatically in real time. Spot welding is used extensively in the automotive industry. The quality of an individual spot weld is a major concern due to the heavy reliance on their use in the manufacture of motor vehicles. The main parameters that control the quality of a spot weld are current, voltage, welding force, welding time, and the quality of the welding electrodes. The condition of the welding electrodes plays a major part in determining the quality of a spot weld. For example, excessive electrode wear can occur during the welding process and can cause weakening in the weld nuggets. As the number of welds increases, the electrode tip wears down and so the contact area between electrode tip and work piece increases. In order to determine the quality of the welding electrodes, a machine vision approach is employed, where images of the electrode tips are captured in real time and processed using various image-processing algorithms. These algorithms can be used to automatically measure the electrode tip width and hence assess the quality of the electrode tip in real time. The quality of two types of spot welding electrode tips, namely flat-shaped and dome-shaped tips, is assessed here using image processing techniques. For each tip type, a database of 250 images is used to test the performance of the tested algorithms. The tip width in these 250 images is also determined manually by counting the number of pixels using an image editor such as Microsoft Paint. An excellent agreement is found between the manual and automatic methods. The tip width for an electrode is measured by first grabbing an image showing the electrode.
The electrode in the image is then extracted using an image segmentation algorithm. Then the boundary of the electrode is determined and filtered. Cullen et al.'s method is subsequently applied, which uses the filtered boundary to determine the tip width. A number of image segmentation and boundary filtering algorithms have been used to determine the tip width automatically. For flat-tip electrodes, the combination of region growing image segmentation, the Minimum Perimeter Polygon, and Cullen et al.'s techniques was capable of automatically determining the tip width for 250 images with a root mean square error of 7.5% of the tip width. For dome-shaped electrodes, the combination of the Snake segmentation algorithm, the Fourier transform, and Cullen et al.'s method was capable of automatically determining the tip width for 250 images with a root mean square error of 2.9% of the tip width. The author has also proposed and built an active illumination system that captures a backlit image of the electrode's shadow using a separate camera. The image is then processed using a simple image segmentation method, such as the Canny filtering algorithm, to locate the boundary of the electrode tip. The boundary is then processed using the Minimum Perimeter Polygon approach and Cullen et al.'s method to automatically determine the tip width for 200 experimental images. The proposed system is capable of determining the tip width automatically with a root mean square error of 3.2% of the total tip width for flat tips and 3% for dome tips.
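The measurement idea (segment, then read off a width in pixels) can be illustrated with a deliberately tiny toy. Everything here is an assumption of the sketch: the thesis instead uses region growing or snakes, boundary filtering, and Cullen et al.'s method, none of which are reproduced:

```python
def tip_width(image, threshold=128):
    """Toy tip-width measurement on a grey-level image (list of rows).

    Threshold the image into foreground/background, then report the
    longest run of foreground pixels in the lowest row that contains
    any foreground -- a crude stand-in for the electrode tip width.
    """
    binary = [[1 if p >= threshold else 0 for p in row] for row in image]
    for row in reversed(binary):       # scan upward from the bottom
        if any(row):
            best = run = 0
            for v in row:              # longest consecutive run of 1s
                run = run + 1 if v else 0
                best = max(best, run)
            return best
    return 0                           # no foreground at all
```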
20

Robins, Michael John. "Local energy feature tracing in digital images and volumes". University of Western Australia. Dept. of Computer Science, 1999. http://theses.library.uwa.edu.au/adt-WU2003.0010.

Abstract
Digital image feature detectors often comprise two stages of processing: an initial filtering phase and a secondary search stage. The initial filtering is designed to accentuate specific feature characteristics or suppress spurious components of the image signal. The second stage of processing involves searching the results for various criteria that will identify the locations of the image features. The local energy feature detection scheme combines the squares of the signal convolved with a pair of filters that are in quadrature with each other. The resulting local energy value is proportional to phase congruency, which is a measure of the local alignment of the phases of the signal's constituent Fourier components. Points of local maximum phase alignment have been shown to correspond to visual features in the image. The local energy calculation accentuates the location of many types of image features, such as lines, edges and ramps, and estimates of local energy can be calculated in multidimensional image data by rotating the quadrature filters to several orientations. The second stage search criterion for local energy is to locate the points that lie along the ridges in the energy map that connect the points of local maxima. In three dimensional data the relatively higher energy values will form films between connecting filaments and tendrils. This thesis examines the use of recursive spatial domain filtering to calculate local energy. A quadrature pair of filters, which are based on the first derivative of the Gaussian function and its Hilbert transform, are rotated in space using a kernel of basis functions to obtain various orientations of the filters. The kernel is designed to be separable and each term is implemented using a recursive digital filter. Once local energy has been calculated the ridges and surfaces of high energy values are determined using a flooding technique.
Starting from the points of local minima we perform an ablative skeletonisation of the higher energy values. The topology of the original set is maintained by examining and preserving the topology of the neighbourhood of each point when considering it for removal. This combination of homotopic skeletonisation and sequential processing of each level of energy values, results in a well located, thinned and connected tracing of the ridges. The thesis contains examples of the local energy calculation using steerable recursive filters and the ridge tracing algorithm applied to two and three dimensional images. Details of the algorithms are contained in the text and details of their computer implementation are provided in the appendices.
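The core local-energy computation (sum of squares of a signal convolved with a quadrature filter pair) can be sketched in one dimension. This is a generic illustration with Gabor-style cosine/sine kernels and direct convolution, not the recursive derivative-of-Gaussian filters of the thesis; all names and parameter values are assumptions:

```python
import math

def local_energy(signal, sigma=2.0, freq=0.25):
    """1-D local energy via a quadrature pair of cos/sin Gabor kernels.

    The squares of the signal convolved with two filters 90 degrees out
    of phase are summed; maxima of the result mark points of high phase
    congruency, i.e. features such as step edges.
    """
    radius = int(3 * sigma)
    xs = range(-radius, radius + 1)
    gauss = [math.exp(-x * x / (2 * sigma * sigma)) for x in xs]
    even = [g * math.cos(2 * math.pi * freq * x) for g, x in zip(gauss, xs)]
    odd = [g * math.sin(2 * math.pi * freq * x) for g, x in zip(gauss, xs)]

    def conv(kernel):
        n = len(signal)
        out = [0.0] * n
        for i in range(n):
            acc = 0.0
            for k, w in zip(range(-radius, radius + 1), kernel):
                j = min(max(i + k, 0), n - 1)  # replicate borders
                acc += w * signal[j]
            out[i] = acc
        return out

    e, o = conv(even), conv(odd)
    return [ev * ev + od * od for ev, od in zip(e, o)]

# A step edge at index 20: the energy map should peak near it.
step = [0.0] * 20 + [1.0] * 20
energy = local_energy(step)
```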
21

Yao, Xinzhi y 姚欣志. "Object-based coding and watermarking for image-based rendering". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2015. http://hdl.handle.net/10722/212635.

22

Morgan, Keith Patrick. "Improved methods of image smoothing and restoration (nonstationary models)". Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187959.

Abstract
The problems of noise removal, and simultaneous noise removal and deblurring of imagery are common to many areas of science. An approach which allows for the unified treatment of both problems involves modeling imagery as a sample of a random process. Various nonstationary image models are explored in this context. Attention is directed to identifying the model parameters from imagery which has been corrupted by noise and possibly blur, and the use of the model to form an optimal reconstruction of the image. Throughout the work, emphasis is placed on both theoretical development and practical considerations involved in achieving this reconstruction. The results indicate that the use of nonstationary image models offers considerable improvement over traditional techniques.
23

Saunders, Heath G. "Wax distribution quantification using digital image analysis techniques". Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-11102009-020230/.

24

Kim, Se Hun. "Accuracy-energy tradeoffs in digital image processing using embedded computing platforms". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42881.

Abstract
As more and more multimedia applications are integrated in mobile devices, a significant amount of energy is devoted to digital signal processing (DSP). Thus, reducing energy consumption for DSP systems has become an important design goal for battery operated mobile devices. Since supply voltage scaling is one of the most effective methods to reduce power/energy consumption, this study examines aggressive voltage scaling to achieve significant energy savings by allowing some output quality degradation for error tolerant image processing system. The objective of proposed research is to explore ultra-low energy image processing system design methodologies based on efficient accuracy (quality)-energy tradeoffs. This dissertation presents several new analyses and techniques to achieve significant energy savings without noticeable quality degradation under aggressive voltage scaling. In the first, this work starts from accurate error analysis and a model based on input sequence dependent delay estimation. Based on the analysis, we explain the dependence of voltage scalability on input image types, which may be used for input dependent adaptive control for optimal accuracy-energy tradeoffs. In addition, this work includes the system-level analysis of the impact of aggressive voltage scaling on overall energy consumption and a low-cost technique to reduce overall energy consumption. Lastly, this research exploits an error concealment technique to improve the efficiency of accuracy-energy tradeoffs. For an image compression system, the technique minimizes the impact of delay errors on output quality while allowing very low voltage operations for significant energy reduction.
25

Bruegge, Thomas Joseph. "The use of finite impulse response kernels for image restoration". Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187974.

Abstract
This dissertation examines the suitability of Display-Processor (DP) image computers for image enhancement and restoration tasks. Because the major architectural feature of the DP devices is their ability to rapidly evaluate finite impulse response (FIR) convolutions, much of the study focusses on the use of spatial-domain FIR convolutions to approximate Fourier-domain filtering. When the enhancement task requires the evaluation of only a single convolution, it is important that the FIR kernel used to implement the convolution is designed so that the resulting output is a good approximation of the desired output. A Minimum-Mean-Squared-Error design criterion is introduced for the purpose of FIR kernel design and its usefulness is demonstrated by showing some results of its use. If the restoration or enhancement task requires multiple convolutions in an iterative algorithm, it is important to understand how the truncation of the kernel to a finite region of support will affect the convergence properties of an algorithm and the output of the iterative sequence. These questions are examined for a limited class of nonlinear restoration algorithms. Because FIR convolutions are most efficiently performed on computing machines that have limited precision and are usually limited to performing fixed-point arithmetic, the dissertation also examines the effects of roundoff error on output images that have been computed using fixed point math. The number of bits that are needed to represent the data during a computation is algorithm dependent, but for a limited class of algorithms, it is shown that 12 bits are sufficient. Finally, those architectural features in a DP that are necessary for useful enhancement and restoration operations are identified.
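The basic operation the dissertation builds on, direct spatial-domain convolution with a finite (truncated) kernel, can be sketched as follows. This is a generic 1-D illustration with replicated borders; the function name and border rule are assumptions, and the MMSE kernel-design criterion itself is not shown:

```python
def fir_filter_1d(signal, kernel):
    """Direct FIR convolution with a finite, odd-length kernel.

    Truncating an ideal (infinite) impulse response to a small kernel
    like this is the approximation whose quality the MMSE design
    criterion is meant to control.
    """
    r = len(kernel) // 2
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), n - 1)  # replicate borders
            acc += w * signal[j]
        out.append(acc)
    return out
```

An identity kernel returns the signal unchanged, and a symmetric smoothing kernel preserves constant regions, two quick sanity checks for any FIR implementation.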
26

Vercillo, Richard 1953. "Very high resolution video display memory and base image memory for a radiologic image analysis console". Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276707.

Abstract
Digital radiographic images are created by a variety of diagnostic imaging modalities. A multi-modality workstation, known as the Arizona Viewing Console (AVC), was designed and built by the University of Arizona Radiology Department to support research in radiographic image processing and image display. Two specially designed VMEbus components, the base image memory and the video display memory, were integrated into the AVC and are the subject of this thesis. The base image memory is a multi-ported, 8 megabyte memory array based on random access memory used for raw image storage. It supports a 10 megapixel per second image processor and can interface to a 320 megabit per second network. The video display memory utilizes video memories and is capable of displaying two independent high resolution images, each 1024 pixels by 1536 lines, on separate video monitors. In part, these two memory designs have allowed the AVC to excel as a radiographic image workstation.
27

Yeo, Si Yong. "Implicit deformable models for biomedical image segmentation". Thesis, Swansea University, 2011. https://cronfa.swan.ac.uk/Record/cronfa42416.

Abstract
In this thesis, new methods for the efficient segmentation of images are presented. The proposed methods are based on the deformable model approach, and can be used efficiently in the segmentation of complex geometries from various imaging modalities. A novel deformable model that is based on a geometrically induced external force field which can be conveniently generalized to arbitrary dimensions is presented. This external force field is based on hypothesized interactions between the relative geometries of the deformable model and the object boundary characterized by image gradient. The evolution of the deformable model is solved using the level set method so that topological changes are handled automatically. The relative geometrical configurations between the deformable model and the object boundaries contribute to a dynamic vector force field that changes accordingly as the deformable model evolves. The geometrically induced dynamic interaction force has been shown to greatly improve the deformable model performance in acquiring complex geometries and highly concave boundaries, and give the deformable model a high invariance in initialization configurations. The voxel interactions across the whole image domain provide a global view of the object boundary representation, giving the external force a long attraction range. The bidirectionality of the external force field allows the new deformable model to deal with arbitrary cross-boundary initializations, and facilitates the handling of weak edges and broken boundaries. In addition, it is shown that by enhancing the geometrical interaction field with a nonlocal edge-preserving algorithm, the new deformable model can effectively overcome image noise. A comparative study on the segmentation of various geometries with different topologies from both synthetic and real images is provided, and the proposed method is shown to achieve significant improvements against several existing techniques.
A robust framework for the segmentation of vascular geometries is described. In particular, the framework consists of image denoising, optimal object edge representation, and segmentation using an implicit deformable model. The image denoising is based on vessel enhancing diffusion which can be used to smooth out image noise and enhance the vessel structures. The image object boundaries are derived using an edge detection technique which can produce object edges of single pixel width. The image edge information is then used to derive the geometric interaction field for optimal object edge representation. The vascular geometries are segmented using an implicit deformable model. A region constraint is added to the deformable model which allows it to easily get around calcified regions and propagate across the vessels to segment the structures efficiently. The presented framework is applied in the accurate segmentation of carotid geometries from medical images. A new segmentation model with statistical shape prior using a variational approach is also presented in this thesis. The proposed model consists of an image attraction force that propagates contours towards image object boundaries, and a global shape force that attracts the model towards similar shapes in the statistical shape distribution. The image attraction force is derived from gradient vector interactions across the whole image domain, which makes the model more robust to image noise, weak edges and initializations. The statistical shape information is incorporated using kernel density estimation, which allows the shape prior model to handle arbitrary shape variations. It is shown that the proposed model with shape prior can be used to segment object shapes from images efficiently.
28

Carswell, James. "Using Raster Sketches for Digital Image Retrieval". Fogler Library, University of Maine, 2000. http://www.library.umaine.edu/theses/pdf/CarswellJD2000.pdf.

29

Wisby, C. "Real-time digital imaging techniques for flow visualization". Thesis, University of Surrey, 1989. http://epubs.surrey.ac.uk/848586/.

Abstract
A real-time digital imaging technique has been applied to smoke flow visualized turbulent flows to provide statistical data concerning bluff body wakes. The 'digital imaging technique' has been successfully applied to the wake of a two-dimensional flat plate, circular cylinder and a jet in a crossflow configuration. A detailed study of the two-dimensional flat plate model involved comparative hot-wire and pressure measurements combined with data from previously published experimental investigations. The results obtained included intermittency measurements, vortex shedding spectral analyses (autocorrelations), spatial correlations, wake interface statistics and turbulence data. In the majority of cases, the digital imaging technique was found to provide excellent quantitative detail whilst also offering some unique wake interface statistics. The experiments conducted on the circular cylinder model revealed details of secondary vortex shedding and their base-bleed dependence, whilst the jet in a crossflow configuration enabled the imaging technique to be applied to a complex, three-dimensional flow model. The resulting iso-intermittency contour map was produced expediently, and within an experimental period far shorter than could be expected for single-location probe measurements. In addition to the above-outlined quantitative technique, real-time digital imaging was also applied more qualitatively to the study of dynamic stall on an aerofoil and to the enhancement of high-speed vapour-screen visualizations, both techniques offering the possibility for enhanced quantitative flow studies in future investigations. Finally, true-colour video digitisation has been exploited in a preliminary study of the quantification of global surface shear stress values using liquid crystal technology. Although in its infancy, the realisation of an experimental procedure along such lines would be of immense benefit to experimental aerodynamic research.
30

Panzitta, Michael James. "Tessellation for computer image generation". Master's thesis, University of Central Florida, 1987. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/2611.

Abstract
University of Central Florida College of Engineering Thesis
Of the vast number of algorithms used in modern computer image generation, most rely upon databases comprised of polygons. This becomes a severe impediment when curved objects must be modeled and displayed with an acceptable level of speed and accuracy. A technique is needed to provide a means of modeling curved surfaces, storing them in a database, and displaying them using existing algorithms. Tessellation is one method of achieving such goals.
M.S., College of Engineering. vi, 51 leaves, bound : ill. ; 28 cm.
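The basic tessellation step, flattening a curve into the line segments a polygon database can store, can be illustrated with a quadratic Bézier curve. This is a generic sketch (the function name, the Bézier choice and the fixed segment count are assumptions, not the thesis's scheme, which would typically subdivide adaptively):

```python
def tessellate_quadratic(p0, p1, p2, segments=8):
    """Flatten a quadratic Bezier curve into `segments` line segments.

    Evaluates B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2 at uniform
    parameter values and returns the resulting polyline vertices.
    """
    pts = []
    for i in range(segments + 1):
        t = i / segments
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        pts.append((x, y))
    return pts
```

Raising `segments` trades database size for geometric accuracy, which is precisely the speed/accuracy tension the abstract describes.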
31

Deng, Hao. "Mathematical approaches to digital color image denoising". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31708.

Abstract
Thesis (Ph.D)--Mathematics, Georgia Institute of Technology, 2010.
Committee Chair: Haomin Zhou; Committee Member: Luca Dieci; Committee Member: Ronghua Pan; Committee Member: Sung Ha Kang; Committee Member: Yang Wang. Part of the SMARTech Electronic Thesis and Dissertation Collection.
32

Celik, Mehmet Kemal. "Digital image segmentation using periodic codings". Thesis, Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/80099.

Abstract
Digital image segmentation using periodic codings is explored with reference to two applications. First, the application of uniform periodic codings, to the problem of segmenting the in-focus regions in an image from the blurred parts, is discussed. The work presented in this part extends a previous investigation on this subject by considering the leakage effects. The method proposed consists of two stages. In each stage, filtering is done in the spatial frequency domain after uniform grating functions are applied to the images in the spatial domain. Then, algorithms for finding the period and phase of a physical grating are explored for a hybrid optical-digital application of the method. Second, a model for textures as the linear superposition of periodic narrowband components, defined as tones, is proposed. A priori information about the number of the tones, their spatial frequencies, and coefficients is necessary to generate tone and texture indicators. Tone indicators are obtained by filtering the image with complex analytical functions defined by the spatial frequencies of the tones present in the image. A criterion for choosing the dimensions of the filter is also provided. Texture indicators are then generated for each texture in the image by applying the a priori information of the tonal coefficients to the filtered images. Several methods for texture segmentation which employ texture indicators are proposed. Finally, examples which illustrate the characteristics of the method are presented.
Master of Science
33

Brink, Anton David. "The selection and evaluation of grey-level thresholds applied to digital images". Thesis, Rhodes University, 1988. http://hdl.handle.net/10962/d1001996.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
Abstract
Many applications of image processing require an initial segmentation of the image by means of grey-level thresholding. In this thesis, the problems of automatic threshold selection and evaluation are addressed in order to find a universally applicable thresholding method. Three previously proposed threshold selection techniques are investigated, and two new methods are introduced. The results of applying these methods to several different images are evaluated using two threshold evaluation techniques, one subjective and one quantitative. It is found that no threshold selection technique is universally acceptable, as different methods work best with different images and applications.
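As a concrete example of automatic threshold selection of the kind compared in such studies, Otsu's classic between-class-variance criterion can be sketched as follows (an illustration only; this is not necessarily one of the methods the thesis evaluates):

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Pick the grey level that maximizes between-class variance (Otsu)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                       # grey-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0          # class means
        mu1 = (np.arange(t, levels) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Toy bimodal "image": dark background (grey 40) and bright object (grey 200).
img = np.concatenate([np.full(500, 40), np.full(500, 200)])
t = otsu_threshold(img)                         # lands between the two modes
```

For this ideal bimodal histogram any threshold between the modes is optimal; the loop returns the first such level.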
34

Lim, Suryani. "Feature extraction, browsing and retrieval of images". Monash University, School of Computing and Information Technology, 2005. http://arrow.monash.edu.au/hdl/1959.1/9677.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
35

Qin, Jianzhao and 覃剑钊. "Scene categorization based on multiple-feature reinforced contextual visual words". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B46969779.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
36

Zhu, Shanshan and 朱珊珊. "Using semantic sub-scenes to facilitate scene categorization and understanding". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206459.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
Abstract
This thesis proposes to learn a cognitive element absent from conventional scene categorization methods, sub-scenes, and to use them to better categorize and understand scenes. In scene categorization, ambiguity arises when the scene is treated as a whole: a similar set of sub-scenes may be arranged differently to compose different scenes, or a scene may literally contain several categories. These ambiguities can, however, be discerned with knowledge of sub-scenes, so it is worthwhile to study sub-scenes and use them to better understand a scene. The proposed research first considers an unsupervised method to segment sub-scenes. It emphasizes generating more integral regions instead of the over-segmented regions usually produced by conventional segmentation methods. Several properties of sub-scenes are explored, such as proximity grouping, area of influence, similarity, and harmony, based on psychological principles. These properties are formulated into constraints that are used directly in the proposed framework. A self-determined approach is employed to produce a final segmentation result based on the characteristics of each image in an unsupervised manner. The proposed method performs competitively against other state-of-the-art unsupervised segmentation methods, with an F-measure of 0.55, Covering of 0.51, and VoI of 1.93 on the Berkeley segmentation dataset. On the Stanford background dataset, it achieves an overlapping score of 0.566, higher than the 0.499 of the comparison method. To segment and label sub-scenes simultaneously, a supervised semantic segmentation approach is proposed. It is developed within a Hierarchical Conditional Random Field classification framework. The proposed method integrates contextual information into the model to improve classification performance.
Contextual information, including global consistency and spatial context, is considered in the proposed method. Global consistency is developed by generalizing the scene into scene types, and spatial context takes spatial relationships into account. The proposed method improves semantic segmentation by boosting more logical class combinations. It achieves the best score on the MSRC-21 dataset, with global accuracy of 87% and average accuracy of 81%, outperforming all other state-of-the-art methods by 4% on each measure. On the Stanford background dataset, it achieves global accuracy of 80.5% and average accuracy of 71.8%, also outperforming other methods by 2%. Finally, the proposed research incorporates sub-scenes into the scene categorization framework to improve categorization performance, especially in ambiguous cases. The proposed method encodes sub-scenes so that their spatial information is also considered. The sub-scene descriptor complements the global descriptor of a scene by evaluating local features with specific geometric attributes. The proposed method obtains an average categorization accuracy of 92.26% on the 8 Scene Category dataset, outperforming all other published methods by over 2%. It evaluates ambiguous cases more accurately by discerning which part exemplifies a scene category and how those categories are organized.
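The "visual words" representation that such scene categorization frameworks build on reduces to nearest-centroid quantization of local descriptors into a word histogram. The sketch below uses a toy two-word vocabulary and hand-made descriptors; it illustrates only the bag-of-words baseline, not the sub-scene descriptors proposed in the thesis.

```python
import numpy as np

def visual_word_histogram(descriptors, vocab):
    """Quantize local descriptors against a visual vocabulary and
    return the normalized word histogram used in bag-of-words models."""
    # squared distances: (num_descriptors, num_words)
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                   # nearest visual word
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

vocab = np.array([[0.0, 0.0], [10.0, 10.0]])    # two toy visual words
desc = np.array([[0.1, -0.2], [9.8, 10.1], [10.2, 9.9]])
h = visual_word_histogram(desc, vocab)          # one third / two thirds
```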
published_or_final_version
Electrical and Electronic Engineering
Doctoral
Doctor of Philosophy
37

Yan, Hui. "Data analytics and crawl from hidden web databases". Thesis, University of Macau, 2015. http://umaclib3.umac.mo/record=b3335862.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
38

Yang, Yi Yang. "Identifying city landmarks by mining web albums". Thesis, University of Macau, 2015. http://umaclib3.umac.mo/record=b3335394.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
39

Xia, Tian. "Hierarchical kernel-based learning algorithms and their applications". Thesis, University of Macau, 2015. http://umaclib3.umac.mo/record=b3335405.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
40

Hu, Wei Shu. "Community detection and credibility analysis on social networks". Thesis, University of Macau, 2015. http://umaclib3.umac.mo/record=b3335428.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
41

Zou, Hai Tao. "Local topology of social networks in supporting recommendations and diversity identification of reviews". Thesis, University of Macau, 2015. http://umaclib3.umac.mo/record=b3335434.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
42

Engbrecht, Michael Robert 1955. "PREDICTING EDGE DETECTOR PERFORMANCE". Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276456.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
Abstract
This thesis proposes a metric to predict edge detection performance when a detector is applied to an image with noise. First, models of edges and of linear edge detection operators are characterized by their spatial and Fourier domain properties. Second, the effect of additive uncorrelated noise on the operator is examined, and a metric is developed using the image formation system modulation transfer function (MTF), the expected noise power spectral density, and the edge detector characterization as inputs. Third, the problem of partially correlated noise is examined. A separate performance metric for simple thresholded operator outputs is proposed. Finally, several discrete edge detectors in noise are evaluated numerically. Both the metric based on the detector output signal-to-noise ratio and the metric based on thresholding probabilities were useful in predicting previously published performance results. This was true even for many nonlinear detectors based on the linear detectors evaluated here. The specification of a localization criterion was critical for comparisons between detectors.
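The output signal-to-noise metric for a linear operator has a simple closed form: white noise of standard deviation sigma filtered by an operator h has RMS sigma times the l2 norm of h, so the SNR is the peak edge response divided by that value. Below is a minimal 1-D numpy sketch using a central-difference operator and an ideal step edge as stand-ins; the thesis's operators, MTF model, and correlated-noise analysis are not reproduced.

```python
import numpy as np

def operator_output_snr(h, edge, sigma):
    """SNR at a linear edge operator's output: peak response to the
    ideal edge over the RMS of filtered white noise, sigma * ||h||_2."""
    response = np.convolve(edge, h, mode='same')
    signal = np.max(np.abs(response))
    noise_rms = sigma * np.sqrt(np.sum(h ** 2))
    return signal / noise_rms

# 1-D unit step edge and a simple central-difference edge operator.
edge = np.concatenate([np.zeros(32), np.ones(32)])
h = np.array([-1.0, 0.0, 1.0])
snr = operator_output_snr(h, edge, sigma=0.1)   # 1 / (0.1 * sqrt(2))
```

Comparing this figure across operators, at matched localization, is the kind of prediction the metric supports.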
43

Rosenthal, Jordan. "Filters and filterbanks for hexagonally sampled signals". Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/13347.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
44

Mansouri, Abdol-Reza 1962. "An algorithm for detecting line segments in digital pictures /". Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66185.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
45

施能強 and Nang-keung Sze. "Image reconstruction with multisensors". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226711.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
46

Mountrakis, Georgios. "Image-Based Change Detection Using An Integrated Spatiotemporal Gazetteer". Fogler Library, University of Maine, 2000. http://www.library.umaine.edu/theses/pdf/Mountrakis2000.pdf.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
47

Jarrett, David Ward 1963. "Digital image noise smoothing using high frequency information". Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276599.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
Abstract
The goal of digital image noise smoothing is to smooth noise in the image without smoothing edges and other high frequency information. Statistically optimal methods must use accurate statistical models of the image and noise. Subjective methods must also characterize the image. Two methods using high frequency information to augment existing noise smoothing methods are investigated: two component model (TCM) smoothing and second derivative enhancement (SDE) smoothing. TCM smoothing applies an optimal noise smoothing filter to a high frequency residual extracted from the noisy image using a two component source model. The lower variance and increased stationarity of the residual compared to the original image increase this filter's effectiveness. SDE smoothing enhances the edges of the low-pass filtered noisy image with the second derivative, extracted from the noisy image. Both methods are shown to perform better than the methods they augment, through objective (statistical) and subjective (visual) comparisons.
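The SDE idea can be sketched in a few lines: low-pass filter the noisy image to suppress noise, then subtract a scaled discrete Laplacian (a second-derivative estimate taken from the noisy image) to restore edge contrast. The box blur, wrap-around Laplacian, and gain below are illustrative stand-ins, not the filters used in the thesis.

```python
import numpy as np

def sde_smooth(noisy, c=0.5):
    """Second-derivative enhancement: blur, then subtract a scaled
    discrete Laplacian of the noisy image to sharpen edges."""
    h, w = noisy.shape
    # 3x3 box blur with edge padding as the low-pass stage
    pad = np.pad(noisy, 1, mode='edge')
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    # 4-neighbour discrete Laplacian (periodic boundaries for brevity)
    lap = (np.roll(noisy, 1, 0) + np.roll(noisy, -1, 0) +
           np.roll(noisy, 1, 1) + np.roll(noisy, -1, 1) - 4.0 * noisy)
    return blur - c * lap

# Vertical step edge: flat regions stay flat, edge contrast is restored.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
out = sde_smooth(img)
```

On this step, the blur alone reduces the across-edge contrast to one third; adding the Laplacian term brings it back above the original step height (the familiar sharpening overshoot).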
48

Pessoa, Lucio Flavio Cavalcanti. "Nonlinear systems and neural networks with hybrid morphological/rank/linear nodes : optimal design and applications to image processing and pattern recognition". Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13519.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
49

Agrawal, Chandra Prakash. "Full-field deformation measurement in wood using digital image processing". Thesis, Virginia Tech, 1989. http://hdl.handle.net/10919/43078.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
Abstract

A digital image processing system was used to non-destructively measure the full-field deformation on aluminum and wood specimens loaded in compression and bending. The measurement technique consisted of creating a random speckle pattern on the specimen surface, recording images before and after deformation, and computing the relative displacements of small image subsets. Two methods for producing speckle patterns on the specimens were studied: spray paint and adhesive-backed photographic film.

Baseline tests were conducted to evaluate the influence of signal noise on the measurement system. Uniform translation tests were conducted to evaluate the capability of the system for measuring finite motion. The technique was then used to monitor the full-field deformation response of aluminum and wood specimens tested in bending and static compression. Moderate-duration compression creep tests were conducted on the wood specimens to investigate the suitability of the system for monitoring the creep response of materials. The results obtained from the two speckle techniques were also compared. The results showed that, for the magnification and speckle patterns tested, displacement measurements smaller than 3.29 × 10⁻⁴ inch may be unreliable due to signal noise.
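The subset-displacement computation at the heart of such speckle techniques can be illustrated with integer-pixel cross-correlation: the shift that best aligns the deformed subset with the reference is the location of the correlation peak. Below is a numpy sketch on a synthetic speckle pattern; it is an FFT-based illustration of the principle, not the thesis's measurement system (which also resolves sub-pixel motion).

```python
import numpy as np

def subset_displacement(ref, cur):
    """Estimate the integer-pixel displacement of a speckle subset by
    locating the peak of the circular cross-correlation (via FFT)."""
    corr = np.fft.ifft2(np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap displacements into the +/- half-size range
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                    # synthetic speckle pattern
cur = np.roll(ref, (3, -2), axis=(0, 1))      # known subset translation
dy, dx = subset_displacement(ref, cur)        # recovers (3, -2)
```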


Master of Science
50

McQuade, Patrick John Art College of Fine Arts UNSW. "Visualising the invisible: articulating the inherent features of the digital image". Awarded by: University of New South Wales. Art, 2007. http://handle.unsw.edu.au/1959.4/43307.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
Abstract
Contemporary digital imaging practice has largely adopted the visual characteristics of its closest mediatic relative, the analogue photograph. In this regard, new media theorist Lev Manovich observes that "Computer software does not produce such images by default. The paradox of digital visual culture is that although all imaging is becoming computer-based, the dominance of photographic and cinematic imagery is becoming even stronger. But rather than being a direct, "natural" result of photo and film technology, these images are constructed on computers" (Manovich 2001: 179). Manovich articulates the disjuncture between the technical processes involved in digital image creation and the visual characteristics of the final digital image, with its replication of the visual qualities of the analogue photograph. This research addresses this notion further by exploring the following questions. What are the defining technical features of these computer-based imaging processes? Could these technical features be used as the basis of an alternative aesthetic for the digital image? Why is there a reticence to visually acknowledge these technical features in contemporary digital imaging practice? Are there historic mediated precedents where the inherent technical features of the medium are visually acknowledged in the production of imagery? If the defining technical features of the digital imaging process were visually acknowledged in the image creation process, what would be the outcome?
The studio practice component of the research served as a foundation for the author's artistic and aesthetic development, where the intent was to investigate and highlight four technical qualities of the digital image identified through case studies of three digital artists and other secondary sources. These technical qualities include: the composite RGB colour system of the digital image as it appears on screen; the pixellated microstructure of the digital image; the luminosity of the digital image as it appears on a computer monitor; and the underlying numeric and (ASCII-based) alphanumeric codes of the image file, which enable that most defining feature of the image file, programmability. Based on research into the visualization of these numeric and alphanumeric codes, digital images of bacteria produced with the scanning electron microscope were chosen as image content for an experimental body of work, to draw the conceptual link between the numeric and alphanumeric codes of the image file and the coded genetic sequence of an individual bacterial entity.

Go to the bibliography