Dissertations on the topic "Digital signal and image processing"

Consult the top 50 dissertations for your research on the topic "Digital signal and image processing".

Next to each work in the list there is an "Add to bibliography" button. Press it, and the bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these details are available in the item's metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Ahtaiba, Ahmed Mohamed A. "Restoration of AFM images using digital signal and image processing." Thesis, Liverpool John Moores University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.604322.

Abstract:
All atomic force microscope (AFM) images suffer from distortions, which are principally produced by the interaction between the measured sample and the AFM tip. If the three-dimensional shape of the tip is known, the distorted image can be processed and the original surface form 'restored', typically by deconvolution approaches. This restored image gives a better representation of the real 3D surface of the measured sample than the original distorted image. In this thesis, a quantitative investigation of using morphological deconvolution has been used to restore AFM images via computer simulation using various computer simulated tips and objects. This thesis also presents the systematic quantitative study of the blind tip estimation algorithm via computer simulation using various computer simulated tips and objects. This thesis proposes a new method for estimating the impulse response of the AFM by measuring a micro-cylinder with a priori known dimensions using contact mode AFM. The estimated impulse response is then used to restore subsequent AFM images, when measured with the same tip, under similar measurement conditions. Significantly, an approximation to what corresponds to the impulse response of the AFM can be deduced using this method. The suitability of this novel approach for restoring AFM images has been confirmed using both computer simulation and also with real experimental AFM images. This thesis suggests another new approach (impulse response technique) to estimate the impulse response of the AFM, this time from a square pillar sample that is measured using contact mode AFM. Once the impulse response is known, a deconvolution process is carried out between the estimated impulse response and typical 'distorted' raw AFM images in order to reduce the distortion effects. The experimental results and the computer simulations validate the performance of the proposed approach, illustrating that the AFM image accuracy has been significantly improved. A new approach has been implemented in this research programme for the restoration of AFM images enabling a combination of cantilever and feedback signals at different scanning speeds. In this approach, the AFM topographic image is constructed using values obtained by summing the height image that is used for driving the Z-scanner and the deflection image with a weight function α that is close to 3. The value of α has been determined experimentally using trial and error. This method has been tested at ten different scanning speeds and it consistently gives more faithful topographic images than the original AFM images.
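To make the restoration step concrete, here is a minimal sketch of tip-based restoration by grayscale erosion, the standard operation behind the morphological deconvolution the abstract refers to. The tip shape, image contents and sizes below are hypothetical placeholders, and the code is an illustration rather than the implementation developed in the thesis.

```python
import numpy as np

def erode_with_tip(image, tip):
    """Morphological restoration: r(x) = min_u [ image(x + u) - tip(u) ] over the tip support."""
    th, tw = tip.shape
    ph, pw = th // 2, tw // 2
    # Pad with +inf so that border minima are taken over valid samples only.
    padded = np.pad(image.astype(float), ((ph, ph), (pw, pw)), constant_values=np.inf)
    restored = np.full(image.shape, np.inf)
    for u in range(th):
        for v in range(tw):
            shifted = padded[u:u + image.shape[0], v:v + image.shape[1]]
            restored = np.minimum(restored, shifted - tip[u, v])
    return restored

# Toy usage: a parabolic tip (apex at zero) and a stand-in for a distorted AFM topograph.
y, x = np.mgrid[-3:4, -3:4]
tip = -0.05 * (x**2 + y**2)
measured = np.random.rand(64, 64)
surface_estimate = erode_with_tip(measured, tip)
```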
2

Musoke, David. "Digital image processing with the Motorola 56001 digital signal processor." Scholarly Commons, 1992. https://scholarlycommons.pacific.edu/uop_etds/2236.

Abstract:
This report describes the design and testing of the Image56 system, an IBM-AT based system which consists of an analog video board and a digital board. The former contains all analog and video support circuitry to perform real-time image processing functions. The latter is responsible for performing non real-time, complex image processing tasks using a Motorola DSP56001 digital signal processor. It is supported by eight image data buffers and 512K words of DSP memory (see Appendix A for schematic diagram).
3

Hartley, David Andrew. "Image correlation using digital signal processors." Thesis, Liverpool John Moores University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304465.

4

Zhu, Yong. "Digital signal and image processing techniques for ultrasonic nondestructive evaluation." Thesis, City University London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336431.

5

May, Heather. "Wavelet-based Image Processing." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1448037498.

6

Ansourian, Megeurditch N. "Digital signal processing for the analysis of fetal breathing movements." Thesis, University of Edinburgh, 1989. http://hdl.handle.net/1842/13595.

7

Silva, Eduardo Antonio Barros da. "Wavelet transforms for image coding." Thesis, University of Essex, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282495.

8

Lu, Nan. "Development of new digital signal processing procedures and applications to speech, electromyography and image processing." Thesis, University of Liverpool, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445962.

9

Lie, Chin Cheong Patrick. "Iterative algorithms for fast, signal-to-noise ratio insensitive image restoration." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63767.

10

Shakaff, A. Y. Md. "Practical implementation of the Fermat Number Transform with applications to filtering and image processing." Thesis, University of Newcastle Upon Tyne, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379766.

11

Arrowood, Joseph Louis Jr. "Theory and application of adaptive filter banks." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/15369.

12

Hereford, James McCracken. "Optical implementation of morphological transformations." Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/14891.

13

Jarpa, Pedro F. "Quantifying the differences in low probability of intercept radar waveforms using quadrature mirror filtering." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FJarpa.pdf.

Abstract:
Thesis (M.S.: Electrical Engineer)--Naval Postgraduate School, September 2002.
Thesis advisor(s): Phillip E. Pace, Herschel H. Loomis, Jr. Includes bibliographical references (p. 151-152). Also available online.
14

Adler, Desmond Christopher 1978. "Digital signal processing techniques for optical coherence tomography : OCT and OCT image enhancement." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28486.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 132-135).
Digital signal processing (DSP) techniques were developed to improve the flexibility, functionality, and image quality of ultrahigh resolution optical coherence tomography (OCT) systems. To reduce the dependence of OCT research systems on fixed analog electronics and to improve overall system flexibility, a digital demodulation scheme implemented entirely in software was developed. This improvement allowed rapid reconfiguration of the OCT imaging speed and source center wavelength without having to construct new analog filters and demodulators. This demodulation scheme produced a highly accurate envelope and was immune to local variations in carrier frequency. To provide an alternative contrast modality to conventional intensity-based OCT imaging, spectroscopic OCT technology was investigated. Preliminary studies on animal models were carried out, with the ultimate goal of enabling the early detection of dysplastic lesions in epithelial tissue through spectroscopic changes not visible with conventional OCT. Various spectral analysis techniques were investigated and evaluated for their ability to provide enhanced contrast of specific tissue types. Areas of concern such as red-shifting of the spectrum with increasing imaging depth, Doppler shifts induced by the optical path length scanner, and determination of an optimal spectroscopic metric were addressed. To improve the quality of ultrahigh resolution OCT images, wavelet processing techniques for speckle noise reduction were investigated. Spatially adaptive wavelet denoising techniques were compared to basic wavelet denoising techniques and time domain filtering. By using a set of image quality metrics, it was possible to quantify the effectiveness of the various filtering methods and determine an optimal process for removing speckle noise while maintaining feature sharpness.
by Desmond Christopher Adler.
S.M.
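As a rough illustration of the software demodulation idea described above (not the thesis implementation), the sketch below recovers the envelope of a real-valued fringe signal via the FFT-based analytic signal, an approach that is insensitive to local variations in carrier frequency. The test signal is synthetic and all parameters are arbitrary.

```python
import numpy as np

def envelope(fringe):
    """Return the envelope of a real-valued fringe signal via the analytic signal."""
    n = fringe.size
    spectrum = np.fft.fft(fringe)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h)   # negative frequencies suppressed
    return np.abs(analytic)

# Toy axial scan: a Gaussian envelope on a slowly chirped carrier plus noise.
t = np.linspace(0, 1, 2048)
carrier = np.cos(2 * np.pi * (200 + 40 * t) * t)
signal = np.exp(-((t - 0.5) / 0.05) ** 2) * carrier + 0.01 * np.random.randn(t.size)
env = envelope(signal)
```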
15

Liu, Sam J. "Low bit-rate image and video compression using adaptive segmentation and quantization." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/14850.

16

Rosenthal, Jordan. "Filters and filterbanks for hexagonally sampled signals." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/13347.

17

Katsaggelos, Aggelos Konstantinos. "Constrained iterative image restoration algorithms." Diss., Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/15830.

18

Hollis, Timothy M. "Circuit and modeling solutions for high-speed chip-to-chip communication /." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1721.pdf.

19

Kim, Se Hun. "Accuracy-energy tradeoffs in digital image processing using embedded computing platforms." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42881.

Abstract:
As more and more multimedia applications are integrated in mobile devices, a significant amount of energy is devoted to digital signal processing (DSP). Thus, reducing energy consumption for DSP systems has become an important design goal for battery-operated mobile devices. Since supply voltage scaling is one of the most effective methods to reduce power/energy consumption, this study examines aggressive voltage scaling to achieve significant energy savings by allowing some output quality degradation for error-tolerant image processing systems. The objective of the proposed research is to explore ultra-low-energy image processing system design methodologies based on efficient accuracy (quality)-energy tradeoffs. This dissertation presents several new analyses and techniques to achieve significant energy savings without noticeable quality degradation under aggressive voltage scaling. First, this work starts from an accurate error analysis and a model based on input-sequence-dependent delay estimation. Based on the analysis, we explain the dependence of voltage scalability on input image types, which may be used for input-dependent adaptive control for optimal accuracy-energy tradeoffs. In addition, this work includes the system-level analysis of the impact of aggressive voltage scaling on overall energy consumption and a low-cost technique to reduce overall energy consumption. Lastly, this research exploits an error concealment technique to improve the efficiency of accuracy-energy tradeoffs. For an image compression system, the technique minimizes the impact of delay errors on output quality while allowing very low voltage operations for significant energy reduction.
20

Pessoa, Lucio Flavio Cavalcanti. "Nonlinear systems and neural networks with hybrid morphological/rank/linear nodes : optimal design and applications to image processing and pattern recognition." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13519.

21

Yin, Li. "Adaptive Background Modeling with Temporal Feature Update for Dynamic Foreground Object Removal." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/5040.

Abstract:
In the study of computer vision, background modeling is a fundamental and critical task in many conventional applications. This thesis presents an introduction to background modeling and various computer vision techniques for estimating the background model to achieve the goal of removing dynamic objects in a video sequence. The process of estimating the background model with temporal changes in the absence of foreground moving objects is called adaptive background modeling. In this thesis, three adaptive background modeling approaches were presented for the purpose of developing "teacher removal" algorithms. First, an adaptive background modeling algorithm based on linear adaptive prediction is presented. Second, an adaptive background modeling algorithm based on statistical dispersion is presented. Third, a novel adaptive background modeling algorithm based on low rank and sparsity constraints is presented. The design and implementation of these algorithms are discussed in detail, and the experimental results produced by each algorithm are presented. Lastly, the results of this research are generalized and potential future research is discussed.
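For readers unfamiliar with the basic idea, the sketch below shows a generic per-pixel exponential running-average background model with a simple foreground mask. It only illustrates adaptive background modeling in general; it is not one of the three algorithms (linear adaptive prediction, statistical dispersion, low rank and sparsity) developed in the thesis, and the frame format and thresholds are assumptions.

```python
import numpy as np

def foreground_mask(background, frame, threshold=25.0):
    """Pixels that deviate strongly from the background estimate are flagged as foreground."""
    return np.abs(frame.astype(float) - background) > threshold

def update_background(background, frame, mask, alpha=0.02):
    """Blend the new frame into the background, skipping pixels flagged as foreground."""
    blend = np.where(mask, 0.0, alpha)            # mask=True -> foreground, do not update
    return (1.0 - blend) * background + blend * frame.astype(float)

# Hypothetical usage on a grayscale video array `frames` of shape (T, H, W):
# background = frames[0].astype(float)
# for frame in frames[1:]:
#     mask = foreground_mask(background, frame)
#     background = update_background(background, frame, mask)
```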
22

Srinivasan, Sabeshan. "Object Tracking in Distributed Video Networks Using Multi-Dimentional Signatures." Fogler Library, University of Maine, 2006. http://www.library.umaine.edu/theses/pdf/SrinivasanSX2006.pdf.

23

Lynch, Michael Richard. "Adaptive techniques in signal processing and connectionist models." Thesis, University of Cambridge, 1990. https://www.repository.cam.ac.uk/handle/1810/244884.

Abstract:
This thesis covers the development of a series of new methods and the application of adaptive filter theory which are combined to produce a generalised adaptive filter system which may be used to perform such tasks as pattern recognition. Firstly, the relevant background adaptive filter theory is discussed in Chapter 1 and methods and results which are important to the rest of the thesis are derived or referenced. Chapter 2 of this thesis covers the development of a new adaptive algorithm which is designed to give faster convergence than the LMS algorithm but unlike the Recursive Least Squares family of algorithms it does not require storage of a matrix with n² elements, where n is the number of filter taps. In Chapter 3 a new extension of the LMS adaptive notch filter is derived and applied which gives an adaptive notch filter the ability to lock and track signals of varying pitch without sacrificing notch depth. This application of the LMS filter is of interest as it demonstrates a time varying filter solution to a stationary problem. The LMS filter is next extended to the multidimensional case which allows the application of LMS filters to image processing. The multidimensional filter is then applied to the problem of image registration and this new application of the LMS filter is shown to have significant advantages over current image registration methods. A consideration of the multidimensional LMS filter as a template matcher and pattern recogniser is given. In Chapter 5 a brief review of statistical pattern recognition is given, and in Chapter 6 a review of relevant connectionist models. In Chapter 7 the generalised adaptive filter is derived. This is an adaptive filter with the ability to model non-linear input-output relationships. The Volterra functional analysis of non-linear systems is given and this is combined with adaptive filter methods to give a generalised non-linear adaptive digital filter. This filter is then considered as a linear adaptive filter operating in a non-linearly extended vector space. This new filter is shown to have desirable properties as a pattern recognition system. The performance and properties of the new filter are compared with current connectionist models and results are demonstrated in Chapter 8. In Chapter 9 further mathematical analysis of the networks leads to suggested methods to greatly reduce network complexity for a given problem by choosing suitable pattern classification indices and allowing it to define its own internal structure. In Chapter 10 robustness of the network to imperfections in its implementation is considered. Chapter 11 finishes the thesis with some conclusions and suggestions for future work.
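For context, the sketch below is a textbook LMS adaptive filter of the kind whose convergence and storage trade-offs the early chapters discuss; the toy system-identification setup, tap count and step size are my own assumptions, and the code is not taken from the thesis.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.05):
    """Adapt FIR weights w so that w . [x[k], ..., x[k-n_taps+1]] tracks d[k]."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # most recent sample first
        y[k] = w @ u
        e[k] = d[k] - y[k]
        w = w + 2 * mu * e[k] * u           # LMS weight update
    return y, e, w

# Toy system identification: recover an unknown 4-tap FIR response from noisy observations.
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
h_true = np.array([0.4, -0.2, 0.1, 0.05])
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.normal(size=len(x))
y, e, w = lms_filter(x, d, n_taps=4, mu=0.05)   # w should approach h_true
```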
24

Vladimir, Ostojić. "Integrisana multiveličinska obrada radiografskih snimaka." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2018. https://www.cris.uns.ac.rs/record.jsf?recordId=107425&source=NDLTD&language=en.

Abstract:
The thesis focuses on digital radiography image processing. A multi-scale processing scheme is proposed which unifies detail visibility enhancement, local contrast enhancement and global contrast reduction, thus enabling additional amplification of local structures. In other words, the proposed multi-scale image processing integrates all steps of anatomical structure visibility enhancement. For the purpose of analysing the proposed enhancement, a processing framework was developed. The framework consists of several stages used to process the image from its raw form (the signal obtained from the radiation detector) to the state in which it is presented to the medical diagnostician. Each stage is analysed and for each an original solution or an improvement of an existing approach is proposed. Evaluation has shown that the integrated processing provides results which surpass state-of-the-art processing methods, and that the entire processing pipeline can be controlled using just two parameters. To complete the comprehensive analysis of radiography image processing, the removal of processing artifacts and the acceleration of radiography image processing are also considered in the thesis; both issues are addressed through original solutions whose efficiency is experimentally confirmed.
25

Tsang, Kwok-hon, and 曾國瀚. "Design of an aperture-domain imaging method and signal acquisition hardware for ultrasound-based vector flow estimation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43572315.

26

Glotzbach, John William. "A Color Filter Array Interpolation Method Based on Sampling Theory." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4785.

Abstract:
Digital cameras use a single image sensor array with a color filter array (CFA) to measure a color image. Instead of measuring a red, green, and blue value at every pixel, these cameras have a filter built onto each pixel so that only one portion of the visible spectrum is measured. To generate a full-color image, the camera must estimate the missing two values at every pixel. This process is known as color filter array interpolation. The Bayer CFA pattern samples the green image on half of the pixels of the imaging sensor on a quincunx grid. The other half of the pixels measure the red and blue images equally on interleaved rectangular sampling grids. This thesis analyzes this problem with sampling theory. The red and blue images are sampled at half the rate of the green image and therefore have a higher probability of aliasing in the output image. This is apparent when simple interpolation algorithms like bilinear interpolation are used for CFA interpolation. Two reference algorithms, a projections onto convex sets (POCS) algorithm and an edge-directed algorithm by Adams and Hamilton (AH), are studied. Both algorithms address aliasing in the green image. Because of the high correlation among the red, green, and blue images, information from the red and blue images can be used to better interpolate the green image. The reference algorithms are studied to learn how this information is used. This leads to two new interpolation algorithms for the green image. The red and blue interpolation algorithm of AH is also studied to determine how the inter-image correlation is used when interpolating these images. This study shows that because the green image is sampled at a higher rate, it retains much of the high-frequency information in the original image. This information is used to estimate aliasing in the red and blue images. We present a general algorithm based on the AH algorithm to interpolate the red and blue images. This algorithm is able to provide results that are on average, better than both reference algorithms, POCS and AH.
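As a point of reference for the aliasing discussion above, the sketch below implements the simple bilinear baseline for interpolating the green image on a Bayer quincunx grid; it is not the POCS or Adams-Hamilton reference algorithm nor the algorithm proposed in the thesis. The quincunx mask, image contents and wrap-around border handling are assumptions made for brevity.

```python
import numpy as np

def bilinear_green(cfa, green_mask):
    """Fill missing green samples with the average of the four nearest green neighbours."""
    g = np.where(green_mask, cfa, 0.0).astype(float)
    neighbour_sum = np.zeros_like(g)
    neighbour_cnt = np.zeros_like(g)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        neighbour_sum += np.roll(np.roll(g, dy, axis=0), dx, axis=1)
        neighbour_cnt += np.roll(np.roll(green_mask, dy, axis=0), dx, axis=1)
    interpolated = neighbour_sum / np.maximum(neighbour_cnt, 1)   # borders wrap, for brevity
    return np.where(green_mask, cfa, interpolated)

# Hypothetical usage: a toy H x W sensor with green on the quincunx sites.
H, W = 8, 8
yy, xx = np.mgrid[0:H, 0:W]
green_mask = ((yy + xx) % 2 == 0)
cfa = np.random.rand(H, W)                 # raw single-channel CFA data (toy values)
green_full = bilinear_green(cfa, green_mask)
```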
27

Wahl, Joel. "Image inpainting using sparse reconstruction methods with applications to the processing of dislocations in digital holography." Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-63984.

Abstract:
This report is a master thesis, written by an engineering physics and electrical engineering student at Luleå University of Technology. The aim of this project was to remove dislocations from wrapped phase maps using sparse reconstructive methods. Dislocations are errors that can appear in phase maps due to improper filtering or inadequate sampling, and they make it impossible to correctly unwrap the phase map. The report contains a mathematical description of a sparse reconstructive method. The sparse reconstructive method is based on KSVDbox, which was created by R. Rubinstein and is free for download and use. The KSVDbox is a MATLAB implementation of a dictionary learning algorithm called K-SVD with Orthogonal Matching Pursuit and a sparse reconstructive algorithm. A guide for adapting the toolbox for inpainting is included, with a couple of examples on natural images which support the suggested adaptation. For experimental purposes a set of simulated wrapped phase maps with and without dislocations was created. These simulated phase maps are based on work by P. Picart. The MATLAB implementation that was used to generate these test images can be found in the appendix of this report, so that they can easily be generated by anyone who has an interest in doing so. Finally, the report leads to an outline of five different experiments designed to test the KSVDbox for the processing of dislocations, each using a different dictionary. These experiments perform inpainting with:

1. a dictionary based on the Discrete Cosine Transform;
2. an adaptive dictionary, where the dictionary learning algorithm has been shown what the area in the phase map that was damaged by dislocations should look like;
3. an adaptive dictionary, where the dictionary learning algorithm has been allowed to train on the damaged phase map itself, in such a way that areas with dislocations are ignored;
4. an adaptive dictionary, where training is done on a separate image that has been designed to contain general phase patterns;
5. an adaptive dictionary that results from concatenating the dictionaries used in experiments 3 and 4.

The first three experiments are complemented with experiments done on a natural image for comparison purposes. The results show that sparse reconstructive methods, when using the scheme used in this work, are unsuitable for processing of dislocations in phase maps. This is most likely because the reconstructive method has difficulties in acquiring a high-contrast reconstruction and there is nothing in the algorithm that causes the inpainting from any direction to match the inpainting from other directions.
28

Vemulapalli, Smita. "Audio-video based handwritten mathematical content recognition." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45958.

Abstract:
Recognizing handwritten mathematical content is a challenging problem, and more so when such content appears in classroom videos. However, given the fact that in such videos the handwritten text and the accompanying audio refer to the same content, a combination of video and audio based recognizer has the potential to significantly improve the content recognition accuracy. This dissertation, using a combination of video and audio based recognizers, focuses on improving the recognition accuracy associated with handwritten mathematical content in such videos. Our approach makes use of a video recognizer as the primary recognizer and a multi-stage assembly, developed as part of this research, is used to facilitate effective combination with an audio recognizer. Specifically, we address the following challenges related to audio-video based handwritten mathematical content recognition: (1) Video Preprocessing - generates a timestamped sequence of segmented characters from the classroom video in the face of occlusions and shadows caused by the instructor, (2) Ambiguity Detection - determines the subset of input characters that may have been incorrectly recognized by the video based recognizer and forwards this subset for disambiguation, (3) A/V Synchronization - establishes correspondence between the handwritten character and the spoken content, (4) A/V Combination - combines the synchronized outputs from the video and audio based recognizers and generates the final recognized character, and (5) Grammar Assisted A/V Based Mathematical Content Recognition - utilizes a base mathematical speech grammar for both character and structure disambiguation. Experiments conducted using videos recorded in a classroom-like environment demonstrate the significant improvements in recognition accuracy that can be achieved using our techniques.
29

Boynton, Ansel John. "EARLY WILDFIRE DETECTION USING TEMPORAL FILTERING AND MULTI-BAND INFRARED ANALYSIS." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1048.

Abstract:
Every year wildfires threaten or destroy ecological habitats, man-made infrastructure and people's lives. Additionally, millions of dollars are spent each year trying to prevent and control these fires. Ideally, if a wildfire can be detected before it rages out of control it can be extinguished, avoiding large-scale devastation. Traditional manned fire lookout towers are neither cost effective nor particularly efficient at detecting wildfires. It is proposed that temporal filtering can be used to isolate the signals created at the beginnings of potential wildfires. Temporal filtering can remove the background image and any periodic signals created by camera movement. Once typical signals are analyzed, digital filters can be designed to pass fire signals while blocking the unwanted signals. The temporal filter passes only fire signals and signals generated by moving objects. These objects can be distinguished from each other by analyzing each object's mid- and long-wave energy profile. This algorithm is tested on 17 data sources and its results analyzed.
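A schematic illustration of the idea (my own sketch, not the thesis algorithm): a digital band-pass FIR filter is applied along the time axis of an image stack so that the static background (DC) and slow periodic signals from camera sway are attenuated while faster, flicker-like fire signals are passed. The pass band, frame rate and thresholds below are arbitrary placeholders.

```python
import numpy as np

def temporal_bandpass(frames, low=0.5, high=5.0, fps=30.0, n_taps=61):
    """Filter each pixel's time series with a windowed-sinc band-pass FIR."""
    t = np.arange(n_taps) - (n_taps - 1) / 2

    def sinc_lowpass(fc):
        h = (2 * fc / fps) * np.sinc(2 * fc / fps * t)
        return h * np.hamming(n_taps)

    h_band = sinc_lowpass(high) - sinc_lowpass(low)   # band-pass = wide LP minus narrow LP
    return np.apply_along_axis(
        lambda s: np.convolve(s, h_band, mode="same"), 0, frames.astype(float))

# Hypothetical usage on a (T, H, W) stack of infrared frames:
# energy = temporal_bandpass(frames).var(axis=0)      # pixels with strong in-band activity
# candidates = energy > energy.mean() + 3 * energy.std()
```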
30

Indra, Isara. "Very low bit rate video coding using adaptive nonuniform sampling and matching pursuit." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/15779.

31

Monnom, Olivier. "Méthodes algorithmiques de traitement d'images en holographie digitale." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210840.

Abstract:
This work deals with specific signal processing problems raised by digital holography. The first chapter introduces holography and digital holography. The second chapter addresses the artefacts that appear at the borders of holograms when they are propagated numerically. The third chapter proposes an automatic method for removing the background phase of a hologram. The fourth chapter shows how to improve the visibility of holograms by suppressing diffracting objects. The fifth chapter presents a software tool that assists in the analysis of holograms recorded over time; the ability of the software to perform a posteriori three-dimensional object tracking is demonstrated. The sixth chapter deals with pattern recognition in holograms, addressing the problem of recognising micrometric particles. Conclusions and perspectives close the work.
Doctorate in Applied Sciences
32

Alfonso, Ovidio. "Digital signal processing implementation for near real-time wavelet transformation system for binary images." FIU Digital Commons, 2000. http://digitalcommons.fiu.edu/etd/1065.

Abstract:
Communication has become an essential function in our civilization. With the increasing demand for communication channels, it is now necessary to find ways to optimize the use of their bandwidth. One way to achieve this is by transforming the information before it is transmitted. This transformation can be performed by several techniques. One of the newest of these techniques is the use of wavelets. Wavelet transformation refers to the act of breaking down a signal into components called details and trends by using small waveforms that have a zero average in the time domain. After this transformation the data can be compressed by discarding the details, transmitting the trends. In the receiving end, the trends are used to reconstruct the image. In this work, the wavelet used for the transformation of an image will be selected from a library of available bases. The accuracy of the reconstruction, after the details are discarded, is dependent on the wavelets chosen from the wavelet basis library. The system developed in this thesis takes a 2-D image and decomposes it using a wavelet bank. A digital signal processor is used to achieve near real-time performance in this transformation task. A contribution of this thesis project is the development of DSP-based test bed for the future development of new real-time wavelet transformation algorithms.
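To illustrate the trends-and-details decomposition described above, here is a minimal single-level 2-D Haar transform with a crude "keep only the trend" reconstruction. It is a generic example, not the wavelet-bank or DSP implementation developed in the thesis.

```python
import numpy as np

def haar2d_level(img):
    """One analysis level: returns the trend LL and the detail bands (LH, HL, HH)."""
    a = img[0::2, :] + img[1::2, :]
    d = img[0::2, :] - img[1::2, :]
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return ll, lh, hl, hh

def haar2d_reconstruct(ll, lh, hl, hh):
    """Exact inverse of haar2d_level."""
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

img = np.random.rand(64, 64)
ll, lh, hl, hh = haar2d_level(img)
# Crude compression: discard the details and reconstruct from the trend only.
approx = haar2d_reconstruct(ll, np.zeros_like(lh), np.zeros_like(hl), np.zeros_like(hh))
```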
33

Darrington, John Mark. "Real time extraction of ECG fiducial points using shape based detection." University of Western Australia. School of Computer Science and Software Engineering, 2009. http://theses.library.uwa.edu.au/adt-WU2009.0152.

Abstract:
The electrocardiograph (ECG) is a common clinical and biomedical research tool used for both diagnostic and prognostic purposes. In recent years computer aided analysis of the ECG has enabled cardiographic patterns to be found which were hitherto not apparent. Many of these analyses rely upon the segmentation of the ECG into separate time delimited waveforms. The instants delimiting these segments are called the fiducial points.
34

Kassen, Daniel S. "Synthetic Aperture Radar: Rapid Detection of Target Motion in Matlab." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1398.

Abstract:
Synthetic Aperture Radar (SAR) has come into widespread use in several civilian and military applications. The focus of this paper is the military application of imaging point targets captured by an airborne SAR platform. Using the traditional SAR method of determining target motion by analyzing the difference between subsequent images takes a relatively large amount of processing resources. Using methods in this thesis, target motion can be estimated before even a single image is obtained, reducing the amount of time and power used by a significantly large amount. This thesis builds on work done by Brian Zaharri and David So. Brian Zaharri successfully created a SAR simulation that accurately models the airborne SAR system capturing data of a target space using the Range Doppler Algorithm (RDA). David So extended this work by adding functionality to determine target velocity in the range and azimuth directions by processing the intermittent data created by the several steps of Brian Zaharri's simulation. This thesis shows further extensions of processing the intermittent data using unique methods. The methods in this thesis successfully demonstrate the ability to quickly and accurately estimate target position, velocity, and acceleration without the need for using multiple SAR images. Target motion in the range direction is detected without using any part of the RDA, while the azimuth direction cuts out several steps, including the range compression phase and the range cell migration correction. Removing these unneeded processing steps dramatically decreases target motion data acquisition time. Both Brian Zaharri's and David So's work, along with this thesis, are part of the Cal Poly SAR Automatic Target Recognition (ATR) group of projects, which is sponsored by Raytheon Space & Airborne Systems Division. Because U.S. military SAR data remains classified, the Cal Poly SAR ATR projects address the need to educate researchers on the processing of SAR data.
35

Wood, Christiaan. "An investigation into multi-spectral tracking." Thesis, Link to the online version, 2005. http://hdl.handle.net/10019/1319.

36

Eckstein, Adric. "Development of Robust Correlation Algorithms for Image Velocimetry using Advanced Filtering." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/36338.

Abstract:
Digital Particle Image Velocimetry (DPIV) is a planar measurement technique to measure the velocity within a fluid by correlating the motion of flow tracers over a sequence of images recorded with a camera-laser system. Sophisticated digital processing algorithms are required to provide a high enough accuracy for quantitative DPIV results. This study explores the potential of a variety of cross-correlation filters to improve the accuracy and robustness of the DPIV estimation. These techniques incorporate the use of the Phase Transform (PHAT) Generalized Cross Correlation (GCC) filter applied to the image cross-correlation. The use of spatial windowing is subsequently examined and shown to be ideally suited for the use of phase correlation estimators, due to their invariance to the loss of correlation effects. The Robust Phase Correlation (RPC) estimator is introduced, with the coupled use of the phase correlation and spatial windowing. The RPC estimator additionally incorporates the use of a spectral filter designed from an analytical decomposition of the DPIV Signal-to-Noise Ratio (SNR). This estimator is validated in a variety of artificial image simulations, the JPIV standard image project, and experimental images, which indicate reductions in error on the order of 50% when correlating low SNR images. Two variations of the RPC estimator are also introduced, the Gaussian Transformed Phase Correlation (GTPC): designed to optimize the subpixel interpolation, and the Spectral Phase Correlation (SPC): estimates the image shift directly from the phase content of the correlation. While these estimators are designed for DPIV, the methodology described here provides a universal framework for digital signal correlation analysis, which could be extended to a variety of other systems.
Master of Science
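The sketch below shows a compact phase-correlation (PHAT-normalized generalized cross-correlation) shift estimator of the kind the abstract builds on; it is a generic illustration rather than the thesis RPC estimator, and the image sizes and test shift are arbitrary.

```python
import numpy as np

def phase_correlation_shift(a, b, eps=1e-12):
    """Estimate the integer (row, col) shift that takes image a onto image b."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = np.conj(A) * B
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real   # PHAT: keep phase only
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(corr.shape, dtype=float)
    peak[peak > dims / 2] -= dims[peak > dims / 2]             # unwrap circular shifts
    return peak

# Toy check: a random "particle image" circularly shifted by (3, -5) pixels.
rng = np.random.default_rng(2)
a = rng.random((64, 64))
b = np.roll(a, (3, -5), axis=(0, 1))
print(phase_correlation_shift(a, b))   # expected output close to [ 3. -5.]
```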
37

Fonseca, Roberto Nery da. "Algoritmos para avaliação da qualidade de vídeo em sistemas de televisão digital." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-28042009-170527/.

Abstract:
This research addresses the comparison of video signal quality, focusing on full-reference objective metrics for standard-definition television. The most reliable way to predict the difference in quality between two video scenes is to use a panel of television viewers under controlled psychometric experimental conditions, resulting in statistically meaningful Differences in Mean Opinion Score (DMOS). Subjective assessment is both time consuming and costly, and therefore of limited practical use; the ideal substitutes are objective quality assessment algorithms, whose scores have been shown to correlate highly with DMOS results. The goal of this research is to optimize the performance of simple metrics by combining them with digital image processing. The work first presents the relevant aspects of the human visual system, methodologies for video evaluation in standard-definition digital television (SDTV) applications, and a methodology for validating these methods, based on the scenes and subjective test results of the Video Quality Experts Group (VQEG). It then tests three metrics with low computational cost: PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural SIMilarity) and S-CIELAB (Spatial-CIELAB), preceded by a scene-preparation stage of brightness equalization, smoothing and edge extraction whose filter strengths are varied to produce new sets of measurements. The results show that, for standard-definition television applications, colour components have little influence on the correlation with subjective measures, whereas suitable smoothing combined with easily implemented metrics such as SSIM, and edge extraction combined with PSNR, significantly increase the correlation with the subjective tests performed by the VQEG. In light of these results, it is concluded that computationally simple objective measures can be used to compare the quality of SDTV video signals, provided they are properly combined with techniques, such as smoothing and edge extraction, that adapt them to the human visual system.
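Of the three metrics studied, PSNR is the simplest; the sketch below gives its standard definition for 8-bit frames as a generic illustration (not code from the dissertation).

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and a degraded frame."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy usage on an 8-bit frame with additive noise:
rng = np.random.default_rng(3)
ref = rng.integers(0, 256, size=(480, 640)).astype(np.uint8)
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(ref, noisy):.1f} dB")
```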
38

Brogueira, Gaspar Manuel Rocha. "Processamento de imagem digital para o controlo de qualidade do queijo regional de Évora." Master's thesis, Universidade de Évora, 2010. http://hdl.handle.net/10174/19399.

Abstract:
The widespread use of computers for the automation of repetitive tasks has led to applications that allow activities which until now were not only time consuming but also subject to errors inherent to human activity to be performed with little or no human intervention. The research carried out within this thesis aims to develop a software application and algorithms that enable the assessment and classification of cheeses produced in the region of Évora through digital image processing. Throughout this research, algorithms and methodologies have been developed that allow the identification of the cheese eyes, the dimensions of the cheese and the presence of texture on the outside of the cheese, as well as an analysis of its colour, so that, based on these parameters, a classification and evaluation of the cheese can be carried out. The resulting software application is simple to use and requires no special computer knowledge: it only requires that the photographs be acquired following a simple set of rules, on the basis of which the processing and classification of the cheese is performed.
39

Koenigkan, Luciano Vieira. "Método de análise do contorno de aglomerados de gotas de chuva artificial em imagem digital." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-21032006-181214/.

Abstract:
This work presents a method for the contour analysis of artificial raindrops based on digital image and curvature processing. The method serves as a tool for better understanding the behaviour of raindrops in irrigation and agrochemical spraying processes. Its development was based on the parametric contour representation of shapes, Fourier analysis and Gaussian filtering. The results show the suitability of the method, which yields measurement errors no larger than 5% for the circular patterns tested, with radii between 10 and 200 pixels, as well as its ability to analyse clusters of drops.
40

Petrásek, Daniel. "Zpracování termálních obrazů technikou superresolution." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220622.

Abstract:
The thesis deals with the problem of increasing the spatial resolution of digital images, mainly thermal images. Interpolation, panorama and super-resolution methods are mentioned. The main topic of the thesis is super-resolution, which is described in detail. Finally, the implementation of the algorithm is described, together with problems that may occur during implementation.
41

Pous, Nicolas. "Analyse de signaux analogiques/radiofréquences à l'aide de ressources digitales en vue du test." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2011. http://tel.archives-ouvertes.fr/tel-00667202.

Abstract:
The work presented in this thesis falls within the effort to reduce the production cost of RF circuits. The widespread adoption of these devices pushes manufacturers to look for new solutions in order to produce RF circuits at low cost. Since testing accounts for a large share of the cost of these products, the objective of this thesis was to propose an original strategy for analysing modulated signals using low-cost digital test equipment. The manuscript first gives a general overview of industrial testing and surveys the solutions proposed in the literature to reduce the cost of testing RF circuits, together with examples of the use of the "level-crossing" concept, the method chosen to capture and then reconstruct analogue and RF signals. The basic principles used for reconstructing analogue signals from timing information are then covered. The key element of the reconstruction algorithms is the determination of the instants at which the signal crosses a predetermined voltage threshold; from this information the phase, frequency and amplitude of the observed signal can then be determined. The remainder is devoted to the analysis of modulated signals, first for elementary modulations and then for more complex modulation schemes based on concrete case studies. The work ends by taking the non-idealities of the acquisition chain into account, in particular studying their impact in order to design algorithms that compensate for the resulting errors.
42

Nordick, Brent C. "Dynamic Element Matching Techniques For Delta-Sigma ADCs With Large Internal Quantizers." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd466.pdf.

43

Majd, Farjam. "Two new parallel processors for real time classification of 3-D moving objects and quad tree generation." PDXScholar, 1985. https://pdxscholar.library.pdx.edu/open_access_etds/3421.

Abstract:
Two related image processing problems are addressed in this thesis. First, the problem of identification of 3-D objects in real time is explored. An algorithm to solve this problem and a hardware system for parallel implementation of this algorithm are proposed. The classification scheme is based on the "Invariant Numerical Shape Modeling" (INSM) algorithm originally developed for 2-D pattern recognition such as alphanumeric characters. This algorithm is then extended to 3-D and is used for general 3-D object identification. The hardware system is an SIMD parallel processor, designed in bit slice fashion for expandability. It consists of a library of images coded according to the 3-D INSM algorithm and the SIMD classifier which compares the code of the unknown image to the library codes in a single clock pulse to establish its identity. The output of this system consists of three signals: U, for unique identification; M, for multiple identification; and N, for non-identification of the object. Second, the problem of real time image compaction is addressed. The quad tree data structure is described. Based on this structure, a parallel processor with a tree architecture is developed which is independent of the data entry process, i.e., data may be entered pixel by pixel or all at once. The hardware consists of a tree processor containing a tree generator and three separate memory arrays, a data transfer processor, and a main memory unit. The tree generator generates the quad tree of the input image in tabular form, using the memory arrays in the tree processor for storage of the table. This table can hold one picture frame at a given time. Hence, for processing multiple picture frames the data transfer processor is used to transfer their respective quad trees from the tree processor memory to the main memory. An algorithm is developed to facilitate the determination of the connections in the circuit.
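To make the quad tree data structure concrete, here is a small software sketch that recursively splits a binary image into uniform quadrants; it only illustrates the structure that the proposed tree processor generates in hardware and is not the thesis design.

```python
import numpy as np

def build_quadtree(img, x=0, y=0, size=None):
    """Return ('leaf', value) for uniform blocks, else ('node', NW, NE, SW, SE)."""
    if size is None:
        size = img.shape[0]                    # assumes a square 2^n x 2^n image
    block = img[y:y + size, x:x + size]
    if size == 1 or block.min() == block.max():
        return ("leaf", int(block[0, 0]))
    h = size // 2
    return ("node",
            build_quadtree(img, x,     y,     h),   # NW
            build_quadtree(img, x + h, y,     h),   # NE
            build_quadtree(img, x,     y + h, h),   # SW
            build_quadtree(img, x + h, y + h, h))   # SE

def count_leaves(tree):
    if tree[0] == "leaf":
        return 1
    return sum(count_leaves(child) for child in tree[1:])

img = np.zeros((16, 16), dtype=int)
img[4:12, 4:12] = 1                            # a square object
tree = build_quadtree(img)
print(count_leaves(tree), "leaves versus", img.size, "pixels")   # compaction effect
```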
44

Irie, Kenji. "Noise-limited scene-change detection in images." Diss., Lincoln University, 2009. http://hdl.handle.net/10182/1351.

Abstract:
This thesis describes the theoretical, experimental, and practical aspects of a noise-limited method for scene-change detection in images. The research is divided into three sections: noise analysis and modelling, dual illumination scene-change modelling, and integration of noise into the scene-change model. The sources of noise within commercially available digital cameras are described, with a new model for image noise derived for charge-coupled device (CCD) cameras. The model is validated experimentally through the development of techniques that allow the individual noise components to be measured from the analysis of output images alone. A generic model for complementary metal-oxide-semiconductor (CMOS) cameras is also derived. Methods for the analysis of spatial (inter-pixel) and temporal (intra-pixel) noise are developed. These are used subsequently to investigate the effects of environmental temperature on camera noise. Based on the cameras tested, the results show that the CCD camera noise response to variation in environmental temperature is complex whereas the CMOS camera response simply increases monotonically. A new concept for scene-change detection is proposed based upon a dual illumination concept where both direct and ambient illumination sources are present in an environment, such as that which occurs in natural outdoor scenes with direct sunlight and ambient skylight. The transition of pixel colour from the combined direct and ambient illuminants to the ambient illuminant only is modelled. A method for shadow-free scene-change is then developed that predicts a pixel's colour when the area in the scene is subjected to ambient illumination only, allowing pixel change to be distinguished as either being due to a cast shadow or due to a genuine change in the scene. Experiments on images captured in controlled lighting demonstrate 91% of scene-change and 83% of cast shadows are correctly determined from analysis of pixel colour change alone. A statistical method for detecting shadow-free scene-change is developed. This is achieved by bounding the dual illumination model by the confidence interval associated with the pixel's noise. Three benefits arise from the integration of noise into the scene-change detection method: - The necessity for pre-filtering images for noise is removed; - All empirical thresholds are removed; and - Performance is improved. The noise-limited scene-change detection algorithm correctly classifies 93% of scene-change and 87% of cast shadows from pixel colour change alone. When simple post-analysis size-filtering is applied both these figures increase to 95%.
45

Saraiva, Aratã Andrade. "Detecção do complexo QRS através de morfologia matemática multiescalar." Universidade Tecnológica Federal do Paraná, 2012. http://repositorio.utfpr.edu.br/jspui/handle/1/437.

Abstract:
Este trabalho apresenta a morfologia matemática multiescalar com quatro escalas aplicada no sinal de eletrocardiografia para a detecção do complexo QRS. Para o desenvolvimento deste trabalho pluridisciplinar de Engenharia Biomédica foram utilizados conhecimentos de Cardiologia, Eletrocardiografia, Bioestatística, Processamento Digital de Sinais Biomédicos, Teoria de Detecção de Sinais, Análise ROC e Índices de Desempenho de Classificadores, interagindo com áreas da edicina, da Estatística, da Matemática, da Engenharia da Computação e da Engenharia Elétrica. Testes foram realizados com o banco de sinais de ECG do MIT/BIH. O desempenho do método foi avaliado através da curva ROC e do índice DER. Os resultados foram comparados com a morfologia multiescalar de uma, duas e de três escalas. Nos resultados foi observado que o método de morfologia multiescalar com quatro escalas, aplicado nas condições estabelecidas, apresentou índices melhores de detecção do complexo QRS, confirmando seu potencial no processamento de sinais biomédicos, dando suporte na manipulação do complexo QRS e oferecendo melhorias na detecção.
This work presents the multiscale mathematical morphology with four scales applied in the ECG signal for detection of the QRS complex. To develop this Biomedical Engineering multidisciplinary work were used knowledge of Cardiology, Electrocardiography, Biostatistics, Biomedical Digital Signal Processing, Signal Detection Theory, ROC Analysis and Performance Classifiers Indexes, interacting with areas of Medicine, Statistics, Mathematics, Computer Engineering and Electrical Engineering. The tests were realized with MIT/BIH ECG signals database. The performance of the method was evaluated using ROC curves and the DER index. The results were compared with the multiscale mathematical morphology of one, two and three scales. In the results was observed that the multiscale mathematical morphology with four scales presented better indexes detection of the QRS complex, confirming its potential in biomedical signal processing, supporting the handling of the QRS complex and offering improvements in detection.
46

Bechar, Hassane. "Comparaison d'images : Application à la surveillance et au suivi de trajectoire." Nancy 1, 1987. http://www.theses.fr/1987NAN10062.

Full text of the source
Abstract:
The study is based on the comparison of an image stored in memory with the image coming from the camera. It was carried out using a hard-wired device that produces a visual result and a numerical result in real time. A software implementation provides the same results, but in deferred time.
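A minimal sketch of the same comparison performed in software is shown below; the hard-wired device is not reproduced, and the function name and threshold are illustrative assumptions.

```python
import numpy as np

def compare_images(reference, current, threshold=25):
    """Compare a stored grayscale reference image with a camera frame.

    Returns a binary change mask (the visual result) and the number of
    changed pixels (the numerical result).
    """
    diff = np.abs(reference.astype(np.int16) - current.astype(np.int16))
    change_mask = diff > threshold
    return change_mask, int(change_mask.sum())

# Toy usage with two synthetic 8-bit frames
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
cur = ref.copy()
cur[40:60, 60:90] = 255          # simulate an intruding bright object
mask, count = compare_images(ref, cur)
print(count)                      # number of pixels flagged as changed
```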
47

Modahl, Ylva, and Caroline Skoglund. "Lokalisering av brunnar i ELISpot." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254257.

Full text of the source
Abstract:
Health is a fundamental human right. To improve global health, research in the medical sector is of great importance. Reducing the time consumed by biomedical testing could accelerate the research and development of new drugs and vaccines, and this could be achieved by automating biomedical analysis with computerized methods. To analyse pictures of biomedical tests, it is important to identify the area of interest (AOI) of the test. For example, cells and bacteria are commonly grown in petri dishes; in this case the AOI is the bottom area of the dish, since this is where the object of analysis is located. This study was performed with the aim of comparing a few computerized methods for identifying the AOI in pictures of biomedical tests. Biomedical images from a testing method called ELISpot were used. ELISpot uses plates with up to 96 circular wells, and pictures of the separate wells were used to find the AOI corresponding to the bottom area of each well. The focus has been on comparing the performance of three edge detection methods, more specifically their ability to accurately detect the edges of the well. Furthermore, a method for identifying a circle based on the detected edges was used to specify the AOI. The study shows that methods using second-order derivatives for edge detection give the best results with regard to robustness.
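A sketch of the general pipeline, second-order-derivative edge detection followed by circular Hough fitting, is given below. The radius range, smoothing scale and percentile threshold are illustrative assumptions, and a simple magnitude threshold stands in for a proper zero-crossing test; this is not the thesis code.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.transform import hough_circle, hough_circle_peaks

def find_well(image, radii=np.arange(80, 140, 2), sigma=3.0):
    """Locate the circular well bottom (the AOI) in a single-well image.

    image -- 2-D grayscale image of one well
    radii -- candidate well radii in pixels (illustrative range)
    sigma -- Gaussian scale for the second-derivative edge detector
    """
    # Second-order-derivative edge detection: Laplacian of Gaussian,
    # kept only where its magnitude is large.
    log = gaussian_laplace(image.astype(float), sigma=sigma)
    edges = np.abs(log) > np.percentile(np.abs(log), 95)

    # Fit a circle to the detected edges with the Hough transform.
    accumulator = hough_circle(edges, radii)
    _, cx, cy, fitted_radii = hough_circle_peaks(accumulator, radii,
                                                 total_num_peaks=1)
    return cx[0], cy[0], fitted_radii[0]
```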
48

Филатова, Анна Евгеньевна. "Метод структурной идентификации биомедицинских изображений с локально сосредоточенными признаками". Thesis, Политехпериодика, 2014. http://repository.kpi.kharkov.ua/handle/KhPI-Press/46308.

Full text of the source
Abstract:
The scientific and technical problem of synthesising intelligent decision-support systems for the processing of biomedical signals and images is considered. The work uses methods of digital signal and image processing, multi-criteria estimation, and methods for extracting new knowledge in order to improve the quality of decision making under prior uncertainty.
49

Beltrão, Gabriel Tedgue. "Rápida predição da direção do bloco para aplicação com transformadas direcionais." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260075.

Full text of the source
Abstract:
Advisors: Yuzo Iano, Rangel Arthur
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: DCT-based transforms are widely adopted for video compression. Recently, many authors have highlighted that prediction residuals usually contain directional structures that cannot be represented efficiently by the conventional DCT, and many directional transforms have been proposed to overcome this deficiency. Although directional transforms outperform the conventional DCT, their application to video compression requires evaluating the increase in coding time and implementation complexity. This work proposes a fast algorithm for estimating a block's directions before the directional transforms are applied: the encoder identifies the predominant direction in each block and applies only the transform corresponding to that direction. The algorithm can be used together with any directional-transform proposal that relies on rate-distortion optimisation (RDO) to select the direction to be explored, reducing the implementation complexity to levels similar to those obtained when only the conventional DCT is used.
Master's degree
Telecommunications and Telematics
Master in Electrical Engineering
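The sketch below illustrates one simple way to estimate a block's predominant direction from gradient orientations before choosing a directional transform, so that a full rate-distortion search over all directions can be avoided. The function name, the number of directions and the gradient-histogram approach are illustrative assumptions and are not taken from the dissertation.

```python
import numpy as np

def predict_block_direction(block, num_directions=8):
    """Estimate the dominant direction of a prediction-residual block.

    Returns an index in [0, num_directions) so the encoder can apply
    only the directional transform matching that orientation instead
    of testing every candidate with rate-distortion optimisation.
    """
    block = np.asarray(block, dtype=float)
    # Vertical and horizontal gradients via central differences.
    gy, gx = np.gradient(block)
    magnitude = np.hypot(gx, gy)
    # Edge orientation is perpendicular to the gradient direction.
    orientation = (np.arctan2(gy, gx) + np.pi / 2.0) % np.pi

    # Magnitude-weighted histogram of orientations over the block.
    bins = np.linspace(0.0, np.pi, num_directions + 1)
    hist, _ = np.histogram(orientation, bins=bins, weights=magnitude)
    return int(np.argmax(hist))
```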
50

Liu, Ming. "Analyse et optimisation du système asiatique de diffusion terrestre et mobile de la télévision numérique." Phd thesis, INSA de Rennes, 2011. http://tel.archives-ouvertes.fr/tel-00662247.

Full text of the source
Abstract:
This thesis aims to analyse the Chinese digital television system (DTMB) and to optimise its channel estimation function. First, an in-depth analysis of this system is carried out in comparison with the DVB-T system in terms of specifications, spectral efficiency and performance. Next, the channel estimation function based on the system's pseudo-random sequence is studied in the time and frequency domains, and several improvements are made to the typical methods, in particular to handle highly time-dispersive channels. Finally, new low-complexity, data-aided iterative procedures are proposed to refine the channel estimates. The channel decoding and interleaving functions are excluded from the loop, and time/frequency filtering functions are studied to make the estimates more reliable. These new algorithms demonstrate their efficiency compared with the usual methods in the literature.
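As a worked illustration of pseudo-random-sequence-based channel estimation in the frequency domain, the sketch below performs a least-squares estimate on a toy circular channel. The sequence length, channel taps and function names are illustrative assumptions and do not reproduce the algorithms developed in the thesis.

```python
import numpy as np

def ls_channel_estimate(received_pn, known_pn):
    """Least-squares frequency-domain channel estimate from a PN sequence.

    received_pn -- received samples corresponding to the transmitted
                   pseudo-noise training sequence (circular model assumed)
    known_pn    -- the transmitted pseudo-noise sequence itself
    """
    # Under a circular-convolution model, division in the frequency
    # domain yields the channel frequency response.
    H = np.fft.fft(received_pn) / np.fft.fft(known_pn)
    # Back to the time domain for a channel impulse response estimate.
    h = np.fft.ifft(H)
    return H, h

# Illustrative use with a toy two-tap channel
rng = np.random.default_rng(0)
pn = rng.choice([-1.0, 1.0], size=256)           # toy PN training sequence
channel = np.array([1.0, 0.5])                   # hypothetical channel taps
rx = np.real(np.fft.ifft(np.fft.fft(pn) * np.fft.fft(channel, 256)))
H_est, h_est = ls_channel_estimate(rx, pn)
print(np.round(np.real(h_est[:4]), 3))           # approx. [1.0, 0.5, 0.0, 0.0]
```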
