
Theses on the topic "Mellin and scale transform"


Consult the top 50 theses for your research on the topic "Mellin and scale transform".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Pratt, John Graham le Maistre. "Application of the Fourier-Mellin transform to translation-, rotation- and scale-invariant plant leaf identification". Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33440.

Full text
Abstract
The Fourier-Mellin transform was implemented on a digital computer and applied towards the recognition and differentiation of images of plant leaves regardless of translation, rotation or scale. Translated, rotated and scaled leaf images from seven species of plants were compared: avocado (Persea americana), trembling aspen (Populus tremuloides), lamb's-quarter (Chenopodium album), linden (Tilia americana), silver maple (Acer saccharinum), plantain (Plantago major) and sumac leaflets (Rhus typhina). The rate of recognition was high among translated and rotated leaf images for all plant species. The rates of recognition and differentiation were poor, however, among scaled leaf images and between leaves of different species. Improvements to increase the effectiveness of the algorithm are suggested.
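As a quick orientation for readers of this listing (not the author's code), a minimal sketch of the classical Fourier-Mellin recipe for translation-, rotation- and scale-invariant matching is shown below; grid sizes, interpolation order and function names are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def fourier_mellin_descriptor(img, n_angles=64, n_radii=64):
    """Generic translation-, rotation- and scale-insensitive map of an image.

    1. |FFT| of the image discards translation (the magnitude spectrum is shift-invariant).
    2. Log-polar resampling turns rotation and scaling of the image into circular
       shifts along the angle and log-radius axes of the resampled spectrum.
    3. A second |FFT| discards those shifts.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

    rows, cols = spectrum.shape
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)   # spectrum is symmetric
    radii = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_radii))
    A, R = np.meshgrid(angles, radii, indexing="ij")
    coords = np.stack([cy + R * np.sin(A), cx + R * np.cos(A)])  # (row, col) sample points
    logpolar = ndimage.map_coordinates(spectrum, coords, order=1, mode="nearest")

    return np.abs(np.fft.fft2(logpolar))

# Descriptors of an image and of a rotated/rescaled copy of it should be close.
leaf = np.random.rand(128, 128)   # stand-in for a leaf image
descriptor = fourier_mellin_descriptor(leaf)
```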
2

Hutchison, Luke Alexander Daysh. "Fast Registration of Tabular Document Images Using the Fourier-Mellin Transform". BYU ScholarsArchive, 2004. https://scholarsarchive.byu.edu/etd/4.

Full text
Abstract
Image registration, the process of finding the transformation that best maps one image to another, is an important tool in document image processing. Having properly-aligned microfilm images can help in manual and automated content extraction, zoning, and batch compression of images. An image registration algorithm is presented that quickly identifies the global affine transformation (rotation, scale, translation and/or shear) that maps one tabular document image to another, using the Fourier-Mellin Transform. Each component of the affine transform is recovered independently from the others, dramatically reducing the parameter space of the problem, and improving upon standard Fourier-Mellin Image Registration (FMIR), which only directly separates translation from the other components. FMIR is also extended to handle shear, as well as different scale factors for each document axis. This registration method deals with all transform components in a uniform way, by working in the frequency domain. Registration is limited to foreground pixels (the document form and printed text) through the introduction of a novel, locally adaptive foreground-background segmentation algorithm, based on the median filter. The background removal algorithm is also demonstrated as a useful tool to remove ambient signal noise during correlation. Common problems with FMIR are eliminated by background removal, meaning that apodization (tapering down to zero at the edge of the image) is not needed for accurate recovery of the rotation parameter, allowing the entire image to be used for registration. An effective new optimization to the median filter is presented. Rotation and scale parameter detection is less susceptible to problems arising from the non-commutativity of rotation and "tiling" (periodicity) than for standard FMIR, because only the regions of the frequency domain directly corresponding to tabular features are used in registration. An original method is also presented for automatically obtaining blank document templates from a set of registered document images, by computing the "pointwise median" of a set of registered documents. Finally, registration is demonstrated as an effective tool for predictive image compression. The presented registration algorithm is reliable and robust, and handles a wider range of transformation types than most document image registration systems (which typically only perform deskewing).
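The locally adaptive, median-filter-based foreground/background split described above is specific to the thesis; a rough, generic sketch of the idea (estimate the page background with a large median filter, keep pixels darker than it) is given below. Window size and offset are illustrative and assume intensities scaled to [0, 1].

```python
import numpy as np
from scipy import ndimage

def split_foreground(gray, window=31, offset=0.1):
    """Crude median-filter foreground/background separation for a document image."""
    background = ndimage.median_filter(gray, size=window)  # smooth estimate of the blank page
    foreground = gray < (background - offset)              # ink is darker than the local page
    return foreground, background
```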
3

Erturk, Alp. "Rotation, Scale And Translation Invariant Automatic Target Recognition Using Template Matching For Satellite Imagery". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611434/index.pdf.

Full text
Abstract
In this thesis, rotation, scale and translation (RST) invariant automatic target recognition (ATR) for satellite imagery is presented. Template matching is used to realize the target recognition. However, unlike most of the studies of template matching in the literature, RST invariance is required in our problem, since most of the time we will have only a small number of templates of each target, while the targets to be recognized in the scenes will have various orientations, scaling and translations. RST invariance is studied in detail and implemented with some of the competing methods in the literature, such as Fourier-Mellin transform and bispectrum combined with log-polar mapping. Phase correlation and normalized cross-correlation are used as similarity metrics. Encountered drawbacks were overcome with additional operations and modifications of the algorithms. ATR using reconstruction of the target image with respect to the template, based on bispectrum, log-polar mapping and phase correlation outperformed the other methods and successful recognition was realized for various target types, especially for targets on relatively simpler backgrounds, i.e. containing little or no other objects.
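Phase correlation, one of the similarity metrics mentioned above, is standard enough to sketch; the implementation below is a generic illustration, not the thesis code.

```python
import numpy as np

def phase_correlation(a, b, eps=1e-12):
    """Return the circular shift that maps image a onto image b (np.roll convention)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fb * np.conj(Fa)
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
    return shifts, corr.max()

# Example: a copy of `a` rolled by (5, -3) is recovered as shift (5, -3).
a = np.random.rand(64, 64)
b = np.roll(a, (5, -3), axis=(0, 1))
print(phase_correlation(a, b)[0])
```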
4

Cheung, Yun Kuen. "Analysis of weighted digital sums by Mellin transform". Thesis, Hong Kong University of Science and Technology, 2009. http://library.ust.hk/cgi/db/thesis.pl?MATH%202009%20CHEUNG.

Full text
5

Spratt, W. J. "A classical and distributed theory of Mellin multiplier transforms". Thesis, University of Strathclyde, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371980.

Full text
6

Hutchison, Luke A. D. "Fast registration of tabular document images using the Fourier-Mellin Transform". Diss., Brigham Young University, 2003. http://contentdm.lib.byu.edu/ETD/image/etd404.pdf.

Full text
7

Balčiūnas, Aidas. "Mellin transforms of Dirichlet L-functions". Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20141209_112534-52265.

Full text
Abstract
In the thesis, the meromorphic continuation of modified Mellin transforms of Dirichlet L-functions to the whole complex plane is obtained.
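For orientation only (the thesis's exact normalisation may differ): in this line of work the modified Mellin transform of a Dirichlet L-function usually refers to an integral of the type
$$\mathcal{Z}_k(s,\chi)=\int_{1}^{\infty}\left|L\left(\tfrac{1}{2}+ix,\chi\right)\right|^{2k}x^{-s}\,dx,\qquad \Re s>1,$$
initially defined in a half-plane; the result described above is its meromorphic continuation to the whole complex plane.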
8

Serretta, Hyram. "Investigation of ship target recognition using neural networks in conjunction with the Fourier Mellin transform". Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/22040.

Full text
Abstract
The purpose of this dissertation is to investigate the feasibility of using neural networks in conjunction with the Fourier Modified Direct Mellin Transform (FMDMT) for the recognition of ship targets. The FMDMT is a modification of the Direct Mellin Transform for digital implementations, and is applied to the magnitudes of the Discrete Fourier Transforms (DFT) of range profiles of ships. Necessity for the use of the FMDMT is corroborated by the fact that features can be extracted from the range profiles of targets, regardless of target aspect angle. Variation in aspect angle results in variation of the independent variable. Feature extraction is made possible by the scale invariant properties of the Mellin Transform. Substantial emphasis was placed on preprocessing techniques applied in the implementation of the FMDMT on simulated range profiles and in particular, real ship profiles. The FMDMT was thus examined extensively and utilised as it was developed and demonstrated in [20]. At the completion of this examination, the recognition procedures and methods were applied on simulated data with the aid of a radar simulator developed and adapted for this dissertation. Results of the recognition of simulated ship targets were scrutinized closely and recorded. Employment of this procedure afforded the ability to compare the recognition results for real ship data with those of simulated ship data at a later stage. Acquisition of a large database of ship profiles was made successful by a ship target data capture plan implemented at the Institute for Maritime Technology (IMT) in Simon's Town. The database included the radar range profile data for the SAS Protea and the Outeniqua, which carried out several successful full circular manoeuvres in the line of sight of the search radar utilised (Raytheon). The relevant ships performed these circular manoeuvres in order that the acquired data incorporate radar range profiles of the relevant ships at most aspect angles from 0 degrees to 360 degrees. Extensive and thorough testing of the performance of the FMDMT would thus be possible since every possible aspect angle would be scrutinized. Preprocessing of data and recognition of targets was implemented in exactly the same manner and order as was the case with the simulated ship data. Extensive examination of the FMDMT revealed that the MDMT should only be applied to one side of a real and even Fourier Transform of a ship target. Literature on the FMDMT had failed to elaborate on this point. Comparison of the recognition results for real and simulated data, indicates a great similarity in success, thus validating the methods and procedures described theoretically and adopted practically for preprocessing of the radar range profiles and recognition of the targets. In order to demonstrate the feasibility of ship target recognition using the procedures and methods incorporated in the dissertation, real ship data for an entire range of different ships should be acquired in the same manner as indicated above. Bibliography: pages 117-118.
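The FMDMT itself is specific to the thesis and to reference [20]; the sketch below only illustrates the generic Mellin idea it relies on, namely that resampling the DFT magnitude of a range profile on an exponential grid turns a change of scale into a shift, which a second DFT magnitude removes. Grid sizes and names are illustrative.

```python
import numpy as np

def scale_tolerant_signature(profile, n_log=128):
    """Generic scale-tolerant feature vector for a 1-D radar range profile."""
    mag = np.abs(np.fft.rfft(profile))               # magnitude spectrum of the profile
    k = np.arange(1, mag.size)                       # skip the DC bin
    log_grid = np.exp(np.linspace(0.0, np.log(k[-1]), n_log))
    warped = np.interp(log_grid, k, mag[1:])         # exponential (log-frequency) resampling
    return np.abs(np.fft.rfft(warped))               # a shift in log-frequency is discarded
```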
9

Nazaikinskii, Vladimir, Bert-Wolfgang Schulze, Boris Sternin and Victor Shatalov. "Quantization of symplectic transformations on manifolds with conical singularities". Universität Potsdam, 1997. http://opus.kobv.de/ubp/volltexte/2008/2508/.

Full text
Abstract
The structure of symplectic (canonical) transformations on manifolds with conical singularities is established. The operators associated with these transformations are defined in the weight spaces and their properties investigated.
10

Tunaitytė, Ingrida. "Rymano dzeta funkcijos Melino transformacija kritinėje tiesėje" [The Mellin transform of the Riemann zeta-function on the critical line]. Master's thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100903_125540-17250.

Full text
11

Ghaderi, Hazhar. "The Rare Decay of the Neutral Pion into a Dielectron". Thesis, Uppsala universitet, Kärnfysik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-211683.

Full text
Abstract
We give a rather self-contained introduction to the rare pion to dielectron decay which in nontrivial leading order is given by a QED triangle loop. We work within the dispersive framework where the imaginary part of the amplitude is obtained via the Cutkosky rules. We derive these rules in detail. Using the twofold Mellin-Barnes representation for the pion transition form factor, we derive a simple expression for the branching ratio B(π0 → e+e-) which we then test for various models. In particular, a more recent form factor derived from a Lagrangian for light pseudoscalars and vector mesons inspired by effective field theories. Comparison with the KTeV experiment at Fermilab is made and we find that we are more than 3σ below the KTeV experiment for some of the form factors. This is in agreement with other theoretical models, such as the Vector Meson Dominance model and the quark-loop model within the constituent-quark framework. But we also find that we can be in agreement with KTeV if we explore some freedom of the form factor not fixed by the low-energy Lagrangian.
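For orientation, the basic (one-fold) Mellin-Barnes representation on which such twofold representations are built is
$$\frac{1}{(1+X)^{\lambda}}=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{\Gamma(s)\,\Gamma(\lambda-s)}{\Gamma(\lambda)}\,X^{-s}\,ds,\qquad 0<c<\Re\lambda ;$$
closing the contour and summing residues turns the integral into a series, which is the mechanism exploited in the dispersive calculation described above. The specific form-factor representation used in the thesis is not reproduced here.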
12

Schulze, Bert-Wolfgang, Boris Sternin and Victor Shatalov. "On the index of differential operators on manifolds with conical singularities". Universität Potsdam, 1997. http://opus.kobv.de/ubp/volltexte/2008/2496/.

Full text
Abstract
The paper contains the proof of the index formula for manifolds with conical points. For operators subject to an additional condition of spectral symmetry, the index is expressed as the sum of multiplicities of spectral points of the conormal symbol (indicial family) and the integral from the Atiyah-Singer form over the smooth part of the manifold. The obtained formula is illustrated by the example of the Euler operator on a two-dimensional manifold with conical singular point.
13

Pando, Jesus 1956. "Statistical analysis of large scale structure by the discrete wavelet transform". Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/289034.

Full text
Abstract
The discrete wavelet transform (DWT) is developed as a general statistical tool for the study of large scale structures (LSS) in astrophysics. The DWT is used in all aspects of structure identification including cluster analysis, spectrum and two-point correlation studies, scale-scale correlation analysis and to measure deviations from Gaussian behavior. The techniques developed are demonstrated on "academic" signals, on simulated models of the Lyman-α (Ly-α) forests, and on observational data of the Ly-α forests. This technique can detect clustering in the Ly-α clouds where traditional techniques such as the two-point correlation function have failed. The position and strength of these clusters in both real and simulated data is determined and it is shown that clusters exist on scales as large as at least 20 h⁻¹ Mpc at significance levels of 2-4 σ. Furthermore, it is found that the strength distribution of the clusters can be used to distinguish between real data and simulated samples even where other traditional methods have failed to detect differences. Second, a method for measuring the power spectrum of a density field using the DWT is developed. All common features determined by the usual Fourier power spectrum can be calculated by the DWT. These features, such as the index of a power law or typical scales, can be detected even when the samples are geometrically complex, the samples are incomplete, or the mean density on larger scales is not known (the infrared uncertainty). Using this method the spectra of Ly-α forests in both simulated and real samples is calculated. Third, a method for measuring hierarchical clustering is introduced. Because hierarchical evolution is characterized by a set of rules of how larger dark matter halos are formed by the merging of smaller halos, scale-scale correlations of the density field should be one of the most sensitive quantities in determining the merging history. We show that these correlations can be completely determined by the correlations between discrete wavelet coefficients on adjacent scales and at nearly the same spatial position. Scale-scale correlations on two samples of the QSO Ly-α forest absorption spectra are computed. Lastly, higher order statistics are developed to detect deviations from Gaussian behavior. These higher order statistics are necessary to fully characterize the Ly-α forests because the usual 2nd order statistics, such as the two-point correlation function or power spectrum, give inconclusive results. It is shown how this technique takes advantage of the locality of the DWT to circumvent the central limit theorem. A non-Gaussian spectrum is defined and this spectrum reveals not only the magnitude, but the scales of non-Gaussianity. When applied to simulated and observational samples of the Ly-α clouds, it is found that different popular models of structure formation have different spectra while two independent observational data sets have the same spectra. Moreover, the non-Gaussian spectra of real data sets are significantly different from the spectra of various possible random samples. (Abstract shortened by UMI.)
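A minimal sketch of a DWT power-spectrum estimator of the kind described above is given below (Haar wavelet, hand-rolled so the example is self-contained); the band-averaged squared detail coefficient at each dyadic level plays the role of the Fourier power in the corresponding wavenumber band. Level count and normalisation are illustrative assumptions.

```python
import numpy as np

def haar_power_spectrum(signal, levels=8):
    """Mean squared Haar detail coefficient per dyadic scale, finest scale first."""
    x = np.asarray(signal, dtype=float)
    power = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth (father) coefficients
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # fluctuation (mother) coefficients
        power.append(np.mean(detail ** 2))
        x = approx
    return np.array(power)

# White noise should give a roughly flat spectrum across scales.
print(haar_power_spectrum(np.random.randn(4096), levels=8))
```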
14

Li, Junhua 1970. "Scale analysis in remote sensing based on wavelet transform and multifractal modeling". Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82916.

Full text
Abstract
With the development of Geographical Information System (GIS) and remote sensing techniques, a great deal of data has provided a set of continuous samples of the earth surface from local, regional to global scales. Several multi-scale, multi-resolution, pyramid or hierarchical methods and statistical methods have been developed and used to investigate the scaling property of remotely sensed data: local variance, texture method, scale variance, semivariogram, and fractal analysis. This research introduces the wavelet transform into the realm of scale study in remote sensing and answers three research questions. Three specific objectives corresponding to the three research questions are answered. They include: (1) exploration of wavelets for scale-dependent analysis of remotely sensed imagery; (2) examination of the relationships between wavelet coefficients and classification accuracy for different resolutions and their improvement of classification accuracy; and (3) multiscaling analysis and stochastic down-scaling of an image by using the wavelet transform and multifractals. The significant results obtained are: (1) Haar wavelets can be used to investigate the scale-dependent and spatial structure of an image and provides another method for selection of optimal sampling size; (2) there is a good relationship between classification accuracy and wavelet coefficients. High/low wavelet coefficient reflects low/high classification accuracy in each land cover type. (3) the maximum likelihood classifier with inclusion of wavelet coefficients can improve land cover classification accuracies. (4) the moment-scale analysis of wavelet coefficients can be used to investigate the multifractal properties of an image. Also the stochastic down-scaling model developed based on wavelet and multifractal generates good simulation results of the fine resolution image.
15

Erhart, Kevin. "Efficient Large Scale Transient Heat Conduction Analysis Using a Parallelized Boundary Element Method". Master's thesis, University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2973.

Full text
Abstract
A parallel domain decomposition Laplace transform Boundary Element Method, BEM, algorithm for the solution of large-scale transient heat conduction problems will be developed. This is accomplished by building on previous work by the author and including several new additions (most note-worthy is the extension to 3-D) aimed at extending the scope and improving the efficiency of this technique for large-scale problems. A Laplace transform method is utilized to avoid time marching and a Proper Orthogonal Decomposition, POD, interpolation scheme is used to improve the efficiency of the numerical Laplace inversion process. A detailed analysis of the Stehfest Transform (numerical Laplace inversion) is performed to help optimize the procedure for heat transfer problems. A domain decomposition process is described in detail and is used to significantly reduce the size of any single problem for the BEM, which greatly reduces the storage and computational burden of the BEM. The procedure is readily implemented in parallel and renders the BEM applicable to large-scale transient conduction problems on even modest computational platforms. A major benefit of the Laplace space approach described herein, is that it readily allows adaptation and integration of traditional BEM codes, as the resulting governing equations are time independent. This work includes the adaptation of two such traditional BEM codes for steady-state heat conduction, in both two and three dimensions. Verification and validation example problems are presented which show the accuracy and efficiency of the techniques. Additionally, comparisons to commercial Finite Volume Method results are shown to further prove the effectiveness.
M.S.M.E.
Department of Mechanical, Materials and Aerospace Engineering
Engineering and Computer Science
Mechanical Engineering
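The Stehfest (Gaver-Stehfest) numerical Laplace inversion analysed in the thesis is a standard algorithm and can be sketched compactly; the weights below follow the usual textbook formula, and N = 12 is just a common default, not the value optimised in the thesis.

```python
import math

def stehfest_weights(N=12):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    a = math.log(2.0) / t
    return a * sum(Vk * F(k * a) for k, Vk in enumerate(stehfest_weights(N), start=1))

# F(s) = 1/(s+1) is the transform of exp(-t); prints approximately 0.3679 at t = 1.
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0))
```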
16

Munsch, Marc. "Moments des fonctions thêta" [Moments of theta functions]. Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4093/document.

Full text
Abstract
In this thesis, we focus on the study of theta functions involved in the proof of the functional equation of Dirichlet L-functions. In particular, we adapt some results obtained for L-functions to the case of theta functions. S. Chowla conjectured that Dirichlet L-functions associated to primitive characters χ don't vanish at the central point of their functional equation. In a similar way to Chowla's conjecture, it is conjectured that theta functions don't vanish at the central point of their functional equation for each primitive character. With the aim of proving this conjecture for a lot of characters, we study moments of theta functions in various families. We concentrate on two important families. The first one which we consider is the family of all Dirichlet characters modulo p where p is a prime number. In this case, we prove asymptotic formulae for the second and fourth moment of theta functions using diophantine techniques. The second family which we consider is the set of primitive quadratic characters associated to a fundamental discriminant less than a fixed bound. We give an asymptotic formula for the first moment and an upper bound for the second moment using techniques of Mellin transforms and estimation of character sums. In both cases, we deduce some results of non-vanishing. We also give an algorithm which, in practice, works well for a lot of characters to prove the non-vanishing of theta functions on the positive real axis. In this case, this implies in particular that the associated L-functions don't vanish on the same axis.
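For orientation, the theta function attached to a primitive character χ modulo q is classically defined (the thesis's normalisation may differ slightly) by
$$\theta(x,\chi)=\sum_{n\ge 1}\chi(n)\,n^{a}\,e^{-\pi n^{2}x/q},\qquad a=\begin{cases}0, & \chi(-1)=1,\\ 1, & \chi(-1)=-1,\end{cases}$$
and the non-vanishing question discussed above concerns its value at the centre of symmetry of its functional equation (equivalently at x = 1).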
17

Locci-Lopez, Daniel Eduardo. "Permian Basin Reservoir Quantitative Interpretation Applying the Multi-Scale Boxcar Transform Spectral Decomposition". Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10816133.

Full text
Abstract
The Short Time Fourier transform and the S-transform are among the most used methods of spectral decomposition to localize spectra in time and frequency. The S-transform utilizes a frequency-dependent Gaussian analysis window that is normalized for energy conservation purposes. The STFT, on the other hand, has a selected fixed time window that does not depend on frequency. In previous literature, it has been demonstrated that the S-transform distorts the Fourier spectra, shifting frequency peaks, and could result in misleading frequency attributes. Therefore, one way of making the S-transform more appropriate for quantitative seismic signal analysis is to ignore the conservation of energy over time requirement. This suggests a hybrid approach between the Short Time Fourier transform and the S-transform for seismic interpretation purposes. In this work, we introduce the Multi-Scale Boxcar transform that has temporal resolution comparable to the S-transform while giving correct Fourier peak frequencies. The Multi-Scale Boxcar transform includes a special analysis window that focusses the analysis on the highest amplitude portion of the Gaussian window, giving a more accurate time-frequency representation of the spectra in comparison with the S-transform. Post-stack seismic data with strong well-log control was used to demonstrate the differences between the Multi-Scale Boxcar transform and the S-transform. The analysis area in this work is the Pennsylvanian and Lower Permian Horseshoe Atoll Carbonate play in the Midland Basin, a sub-basin in the larger Permian Basin. The Multi-Scale Boxcar transform spectral decomposition method improved the seismic interpretation of the study area, showing better temporal resolution for resolving the layered reservoirs' heterogeneity. The time and depth scale values on the figures are shifted according to the sponsor request, but the relative scale is correct.
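For orientation, the two transforms being compared have the standard definitions
$$\mathrm{STFT}(\tau,f)=\int_{-\infty}^{\infty}h(t)\,w(t-\tau)\,e^{-i2\pi f t}\,dt,\qquad
S(\tau,f)=\int_{-\infty}^{\infty}h(t)\,\frac{|f|}{\sqrt{2\pi}}\,e^{-\frac{(\tau-t)^{2}f^{2}}{2}}\,e^{-i2\pi f t}\,dt,$$
with a fixed window w in the first case and a frequency-dependent, energy-normalised Gaussian window in the second; the Multi-Scale Boxcar window itself is specific to the thesis and is not reproduced here.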
18

Febrer, Pedro Maria Ulisses dos Santos Jalhay. "Residue sum formula for pricing options under the variance Gamma Model". Master's thesis, Instituto Superior de Economia e Gestão, 2020. http://hdl.handle.net/10400.5/20802.

Full text
Abstract
Master's degree in Mathematical Finance
The main result of this dissertation is the proof of the triple sum series formula for the price of a European call option driven by the Variance Gamma process. With this intention, we present some notions and properties of Lévy processes and multidimensional complex analysis, with emphasis on the application of residue calculus to the Mellin-Barnes integral. Subsequently, we construct the Mellin-Barnes integral representation, in C^3, for the price of the option and, buttressed with the aforementioned residue calculus, we deduce the triple sum series representation for the price of the European option and its corresponding greeks. Finally, with the use of the new formula, some values for a particular case study are computed and discussed.
19

Balčiūnas, Aidas. "Dirichlė L funkcijų Melino transformacijos" [Mellin transforms of Dirichlet L-functions]. Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20141209_112521-43945.

Full text
Abstract
In the thesis, a meromorphic continuation of the modified Mellin transform of Dirichlet L-functions to the whole complex plane has been obtained.
20

Stockwell, Robert G. "S-transform analysis of gravity wave activity from a small scale network of airglow imagers". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0016/NQ42554.pdf.

Full text
21

Remeikaitė, Solveiga. "Ribinė teorema Rymano dzeta funkcijos Melino transformacijai" [A limit theorem for the Mellin transform of the Riemann zeta-function]. Master's thesis, Lithuanian Academic Libraries Network (LABT), 2011. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2011~D_20110802_165253-48345.

Full text
Abstract
The thesis gives an overview of the study of these functions and of the most important known results, and formulates the problem. The main limit theorem is proved by applying probabilistic methods, properties of analytic functions, and the principle of approximation by an absolutely convergent integral.
22

Erhart, Kevin. "A Laplace Transform Boundary Element Method Approach to the Solution of Large-Scale Transient Heat Conduction Problems". Honors in the Major Thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/705.

Full text
Abstract
This item is only available in print in the UCF Libraries.
Bachelors
Engineering and Computer Science
Mechanical Engineering
23

Lamm, Rosalee Alicia. "Fine scale imaging of structures at and near the mantle transition zone using a generalized Radon transform". Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37280.

Full text
Abstract
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 2006.
Includes bibliographical references (p. 34-37).
We apply a generalized Radon transform (GRT) to the study of the transition zone discontinuities. The transition zone discontinuities, e.g. the '410' and '660', result from mineral phase changes that occur at depths constrained by temperature and chemistry, and can provide information about the conditions of the mantle within the transition zone. Previous global studies of topography on the transition zone discontinuities use SS data and are limited by the low lateral resolution provided by SS waves, on the order of about 1000 kilometers. The GRT employs inverse scattering theory to image perturbations in mass density and elastic parameters of a medium, and can resolve structure on the order of 100 kilometers; the limit of this resolution has yet to be tested. This study maps discontinuity depths in the northwest Pacific Ocean with a lateral spatial sampling of 1° and a vertical sampling of five kilometers. We observe striking variations in the depth, strength, and continuity of the '410', '520', and '660' seismic discontinuities, as well as the presence of structure beyond the bounds of what is traditionally considered to be the transition zone. Topographies on the '410' and '660' are alternately positively and negatively correlated, suggesting that both composition and temperature contribute to the observed depths. Preliminary analysis of '410' depths, which assumes no variations in chemistry, yields an upper bound for temperature variations of ±280K. Future tests and data preprocessing should further improve the GRT results.
by Rosalee Alicia Lamm.
S.M.
24

Kato, Fernando Hideki. "Análise de carteiras em tempo discreto" [Portfolio analysis in discrete time]. Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/12/12139/tde-24022005-005812/.

Full text
Abstract
In this thesis, Markowitz's portfolio selection model will be extended by means of a discrete time analysis and more realistic hypotheses. A finite tensor product of Erlang densities will be used to approximate the multivariate probability density function of the single-period discrete returns of dependent assets. The Erlang is a particular case of the Gamma distribution. A finite mixture can generate multimodal asymmetric densities and the tensor product generalizes this concept to higher dimensions. Assuming that the multivariate density was independent and identically distributed (i.i.d.) in the past, the approximation can be calibrated with historical data using the maximum likelihood criterion. This is a large-scale optimization problem, but with a special structure. Assuming that this multivariate density will be i.i.d. in the future, then the density of the discrete returns of a portfolio of assets with nonnegative weights will be a finite mixture of Erlang densities. The risk will be calculated with the Downside Risk measure, which is convex for certain parameters, is not based on quantiles, does not cause risk underestimation and makes the single and multiperiod optimization problems convex. The discrete return is a multiplicative random variable over time. The multiperiod distribution of the discrete returns of a sequence of T portfolios will be a finite mixture of Meijer G distributions. After a change of the probability measure to the average compound measure, it is possible to calculate the risk and the return, which will lead to the multiperiod efficient frontier, where each point represents one or more ordered sequences of T portfolios. The portfolios of each sequence must be calculated from the future to the present, keeping the expected return at the desired level, which can be a function of time. A dynamic asset allocation strategy is to redo the calculations at each period, using new available information. If the time horizon tends to infinity, then the efficient frontier, in the average compound probability measure, will tend to only one point, given by Kelly's portfolio, whatever the risk measure is. To select one among several portfolio optimization models, it is necessary to compare their relative performances. The efficient frontier of each model must be plotted in its respective graph. As the weights of the assets of the portfolios on these curves are known, it is possible to plot all curves in the same graph. For a given expected return, the efficient portfolios of the models can be calculated, and the realized returns and their differences along a backtest can be compared.
25

Gao, Zhiyun. "Novel multi-scale topo-morphologic approaches to pulmonary medical image processing". Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/805.

Full text
Abstract
The overall aim of my PhD research work is to design, develop, and evaluate a new practical environment to generate separated representations of arterial and venous trees in non-contrast pulmonary CT imaging of human subjects and to extract quantitative measures at different tree levels. Artery/vein (A/V) separation is of substantial importance, contributing to our understanding of pulmonary structure and function, and immediate clinical applications exist, e.g., for assessment of pulmonary emboli. Separated A/V trees may also significantly boost performance of airway segmentation methods for higher tree generations. Although non-contrast pulmonary CT imaging successfully captures higher tree generations of vasculature, A/V are indistinguishable by their intensity values, and often, there is no trace of intensity variation at locations of fused arteries and veins. Patient-specific structural abnormalities of vascular trees further complicate the task. We developed a novel multi-scale topo-morphologic opening algorithm to separate A/V trees in non-contrast CT images. The algorithm combines fuzzy distance transform, a morphologic feature, with a topologic connectivity and a new morphological reconstruction step to iteratively open multi-scale fusions starting at large scales and progressing towards smaller scales. The algorithm has been successfully applied on fuzzy vessel segmentation results using interactive seed selection via an efficient graphical user interface developed as a part of my PhD project. Accuracy, reproducibility and efficiency of the system are quantitatively evaluated using computer-generated and physical phantoms along with in vivo animal and human data sets, and the experimental results are quite encouraging. Also, we developed an arc-skeleton based volumetric tree generation algorithm to generate a multi-level volumetric tree representation of an isolated arterial/venous tree and to extract vascular measurements at different tree levels. The method has been applied on several computer-generated phantoms, CT images of a pulmonary vessel cast, and in vivo pulmonary CT images of a pig at different airway pressures. Experimental results have shown that the method is quite accurate and reproducible. Finally, we developed a new pulmonary vessel segmentation algorithm, i.e., a new anisotropic constrained region growing method that encourages axial region growing while arresting cross-structure leaking. The region growing is locally controlled by tensor scale and structure scale and anisotropy. The method has been successfully applied on several non-contrast pulmonary CT images of human subjects. The accuracy of the new method has been evaluated using manual selection of vascular and non-vascular voxels and the results found are very promising.
26

Betah, Mohamed Haye. "Un théorème de Gallagher pour la fonction de Möbius" [A Gallagher theorem for the Möbius function]. Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0461/document.

Full text
Abstract
The Möbius function is defined by $$\mu(n)= \begin{cases} 1 & \textit{if } n=1,\\ (-1)^k & \textit{if } n \textit{ is a product of } k \textit{ distinct prime numbers,}\\ 0 & \textit{if } n \textit{ contains a square factor.} \end{cases}$$ We demonstrate that for $x \ge \exp(10^9)$ and $h=x^{1-\frac{1}{16000}}$, there exist in each interval $[x-h,x]$ integers $n_1$ with $\mu(n_1)=1$ and integers $n_2$ with $\mu(n_2)=-1$. This result is a consequence of a more general result. For $x \ge \exp(4\times 10^6)$, $\frac{1}{\sqrt{\log x}} \le \theta \le \frac{1}{2000}$, $h=x^{1-\theta}$ and $Q=(x/h)^{\frac{1}{20}}$, we have $$\sum_{q \leq Q} \log(Q/q)\sum_{\chi \bmod q}^*\left| \sum_{x-h \le n \le x} \mu(n) \chi(n) \right| \leq 10^{20}\, h\, \theta \log(x) \exp\!\left( \frac{-1}{300 \theta}\right),$$ where the sum $\sum^*$ runs over primitive characters, the possible exceptional character excluded. In particular, for $x \ge \exp(10^9)$, $$\left| \sum_{x-x^{1-\frac{1}{16000}}\le n \le x} \mu(n) \right| \le \frac{1}{100}\, x^{1-\frac{1}{16000}}.$$
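The three-case definition above translates directly into a sieve; the short sketch below is a generic illustration (not taken from the thesis) of how μ(n) and its partial sums can be tabulated.

```python
import numpy as np

def mobius_sieve(n):
    """Tabulate mu(0..n) by sieving over primes (mu[0] is unused)."""
    mu = np.ones(n + 1, dtype=np.int64)
    is_prime = np.ones(n + 1, dtype=bool)
    is_prime[:2] = False
    for p in range(2, n + 1):
        if is_prime[p]:
            is_prime[2 * p::p] = False
            mu[p::p] *= -1            # one factor (-1) per distinct prime divisor p
            mu[p * p::p * p] = 0      # numbers with a square factor get 0
    return mu

mu = mobius_sieve(10**6)
print(list(mu[1:11]))   # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
print(mu[1:].sum())     # a partial sum of mu(n): cancellation of the kind bounded above
```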
27

Upadhyay, Manas Vijay. "On the role of defect incompatibilities on mechanical properties of polycrystalline aggregates: a multi-scale study". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53041.

Full text
Abstract
The main objective of this thesis is to obtain critical insight on the role of crystalline incompatibilities in strain and curvature, induced in presence of line defects i.e. dislocations and disclinations, on the energy and geometry of specific features of the local microstructure, and on the bulk mechanical response of nanocrystalline/ultra-fine grained materials. To that end, studies are performed at the (1) inter-atomic and fine scale, and (2) at the mesoscale. The modelling approach is based on the field dislocation and disclination mechanics theory of continuously represented dislocations and disclinations. New, thermodynamically rigorous, multi-scale elastic constitutive laws based on the couple stress theory are developed to capture the effect of strain and curvature incompatibilities on the Cauchy and couple stresses. A new meso-scale elasto-viscoplastic constitutive model of defect incompatibilities based on a fast Fourier transform technique is developed. The desired scale transitioning is achieved via novel phenomenological defect density transport equations and the newly developed elastic constitutive laws. At the fine scale, the model is applied to study energetic interactions between strain and curvature incompatibilities associated with grain boundaries and their influence on triple line energies. Results reveal that incompatible lattice strains have the most significant contribution to the energy. Incompatible lattice curvatures have negligible energetic contributions but are necessary to characterize the geometry of grain boundaries. Finally, both incompatible lattice strains and curvatures are necessary to capture the structure sensitive mechanical behavior of grain boundaries. At the mesoscale, deformation of nanocrystalline aggregates characterized by residual curvatures is studied to identify the impact of the latter's presence on the local and bulk mechanical response of the aggregate. Relaxation of local stresses generated from residual curvatures reproduces the effect of GB dislocation emission. Uniaxial tensile loading of nanocrystalline microstructures containing residual curvatures reveals a softening in the yield stress which could explain the breakdown in Hall-Petch law in the nanocrystalline regime. Next, the possibility of characterizing incompatibilities using X-ray or neutron diffraction techniques is tested. Results reveal that only strains and their gradients contribute to the broadening of diffraction peaks; curvatures and their gradients have no contribution. This study leads to the development of a new multi-scale averaged strain based Fourier technique for generating virtual diffraction peaks.
28

Mavridou, Evanthia. "Robust image description with laplacian profile and radial Fourier transform". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM065/document.

Full text
Abstract
In this thesis we explore a new image description method composed of a multi-scale vector of Laplacians of Gaussians, the Laplacian Profile, and a Radial Fourier Transform. This method captures shape information with different proportions around a point in the image. A Gaussian pyramid of scaled images is used for the extraction of the descriptor vectors. The aim of this new method is to provide image description that can be suitable for diverse applications. Adjustability as well as low computational and memory needs are as important as robustness and discrimination power. We created a method with the ability to capture the image signal efficiently with descriptor vectors of particularly small length compared to the state of the art. Experiments show that despite its small vector length, the new descriptor shows reasonable robustness and discrimination power that are competitive with state-of-the-art performance. We test our proposed image description method on three different visual tasks. The first task is keypoint matching for images that have undergone image transformations like rotation, scaling, blurring, JPEG compression, changes in viewpoint and changes in light. We show that against other methods from the state of the art, the proposed descriptor performs equivalently with a very small vector length. The second task is on pattern detection. We use the proposed descriptor to create two different Adaboost based detectors for people detection in images. Compared to a similar detector using Histograms of Oriented Gradients (HOG), the detectors with the proposed method show competitive performance using significantly smaller descriptor vectors. The last task is on reflection symmetry detection in real world images. We introduce a technique that exploits the proposed descriptor for detecting possible symmetry axes for the two reflecting parts of a mirror symmetric pattern. This technique introduces constraints and ideas of how to collect more efficiently the information that is important to identify reflection symmetry in images. With this task we show that the proposed descriptor can be generalized for rather complicated applications. The set of experiments confirms that the proposed method is easily adjustable and has relatively low computational and storage requirements while remaining robust and discriminative.
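A rough sketch of a descriptor in this spirit is shown below: the Laplacian of Gaussian sampled at the point over several scales (the Laplacian profile), concatenated with the magnitudes of a few Fourier coefficients of the intensities on a ring around it. Scales, radius and coefficient counts are illustrative assumptions, not the parameters used in the thesis, which also builds the multi-scale part from a Gaussian pyramid.

```python
import numpy as np
from scipy import ndimage

def laplacian_profile_descriptor(img, y, x, sigmas=(1, 2, 4, 8),
                                 ring_radius=6, n_ring=16, n_freq=4):
    """Toy multi-scale Laplacian + radial-Fourier descriptor at pixel (y, x)."""
    profile = [ndimage.gaussian_laplace(img, sigma=s)[y, x] for s in sigmas]

    theta = np.linspace(0.0, 2.0 * np.pi, n_ring, endpoint=False)
    ring = ndimage.map_coordinates(
        img, [y + ring_radius * np.sin(theta), x + ring_radius * np.cos(theta)], order=1
    )
    radial = np.abs(np.fft.rfft(ring))[:n_freq]   # magnitudes tolerate in-plane rotation

    return np.concatenate([profile, radial])

img = np.random.rand(64, 64)
print(laplacian_profile_descriptor(img, 32, 32).shape)   # (8,)
```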
29

Horiguchi, Mitsuaki. "Observational Studies of Large-Scale Turbulence Structures in the Near-Neutral Atmospheric Boundary Layer". 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/198930.

Full text
30

Al-Gindy, Ahmed M. N. "Design and analysis of Discrete Cosine Transform-based watermarking algorithms for digital images. Development and evaluation of blind Discrete Cosine Transform-based watermarking algorithms for copyright protection of digital images using handwritten signatures and mobile phone numbers". Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5450.

Full text
Abstract
This thesis deals with the development and evaluation of blind discrete cosine transform-based watermarking algorithms for copyright protection of digital still images using handwritten signatures and mobile phone numbers. The new algorithms take into account the perceptual capacity of each low-frequency coefficient inside the Discrete Cosine Transform (DCT) blocks before embedding the watermark information. They are suitable for grey-scale and colour images. Handwritten signatures are used instead of pseudo random numbers. The watermark is inserted in the green channel of the RGB colour images and the luminance channel of the YCrCb images. Mobile phone numbers are used as watermarks for images captured by mobile phone cameras. The information is embedded multiple times and a shuffling scheme is applied to ensure that no spatial correlation exists between the original host image and the multiple watermark copies. Multiple embedding will increase the robustness of the watermark against attacks since each watermark will be individually reconstructed and verified before applying an averaging process. The averaging process has managed to reduce the amount of errors of the extracted information. The developed watermarking methods are shown to be robust against JPEG compression, removal attack, additive noise, cropping, scaling, small degrees of rotation, affine transformations, contrast enhancements, low-pass and median filtering, and Stirmark attacks. The algorithms have been examined using a library of approximately 40 colour images of size 512 × 512 with 24 bits per pixel and their grey-scale versions. Several evaluation techniques were used in the experiment with different watermarking strengths and different signature sizes. These include the peak signal to noise ratio, normalized correlation and structural similarity index measurements. The performance of the proposed algorithms has been compared to other algorithms and better invisibility qualities with stronger robustness have been achieved.
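As an illustration of the general family of blind DCT-domain schemes discussed above (not the algorithm proposed in the thesis, which additionally weights coefficients by perceptual capacity, shuffles, and embeds the signature multiple times), the sketch below hides one bit per 8x8 block by quantising a single low-frequency coefficient and reads it back from its parity.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bits(gray, bits, block=8, coeff=(2, 1), delta=12.0):
    """Quantisation-index-modulation style embedding, one bit per block."""
    out = gray.astype(float).copy()
    h, w = out.shape
    positions = [(r, c) for r in range(0, h - block + 1, block)
                 for c in range(0, w - block + 1, block)]
    for (r, c), bit in zip(positions, bits):
        B = dctn(out[r:r + block, c:c + block], norm="ortho")
        q = int(np.round(B[coeff] / delta))
        if q % 2 != int(bit):               # force the parity of the quantised coefficient
            q += 1
        B[coeff] = q * delta
        out[r:r + block, c:c + block] = idctn(B, norm="ortho")
    return out

def extract_bits(marked, n_bits, block=8, coeff=(2, 1), delta=12.0):
    """Blind extraction: read back the parity of the same coefficient."""
    h, w = marked.shape
    positions = [(r, c) for r in range(0, h - block + 1, block)
                 for c in range(0, w - block + 1, block)]
    return [int(np.round(dctn(marked[r:r + block, c:c + block].astype(float),
                              norm="ortho")[coeff] / delta)) % 2
            for r, c in positions[:n_bits]]
```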
31

Barusseau, Benoit. "Propriétés spectrales des opérateurs de Toeplitz" [Spectral properties of Toeplitz operators]. Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14027/document.

Full text
Abstract
This thesis deals with the spectral properties of Toeplitz operators in relation to their associated symbol. In the first part, we give some classical results about the Hardy space, model spaces and the Bergman space. Afterwards, we present some results about Toeplitz operators on the Hardy space. In particular, we discuss their spectrum and essential spectrum. Our work is inspired by two facts which have been proved on the Hardy space. First, considering a Toeplitz operator T, the norm, essential norm, spectral radius of T and the supremum of its symbol are equal. Secondly, on the Hardy space, spectrum, essential spectrum and essential range are strongly related. We answer the question of the equality between the norms, the spectral radius and the supremum of the symbol, and between spectrum and essential range, on the Bergman space. We look at these two properties on the Bergman space when the symbol is radial or quasihomogeneous. We answer these questions using the Berezin transform, the Mellin coefficients and the mean value of the symbol. The last part deals with the classical Szegö theorem, which underlines a link between the eigenvalues of a Toeplitz matrix sequence and its symbol. We give a result of the same type on the Bergman space considering harmonic symbols which have a continuous extension. We give a generalization, considering the sequence of the compressions of a Toeplitz operator on a sequence of finite-dimensional model spaces.
32

Raghuram, Anchit. "Robust Localization of Research Concept Vehicle (RCV) in Large Scale Environment". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-221846.

Texto completo
Resumen
Autonomous vehicles in the recent era are robust vehicles that can drive themselves without human involvement, using sensors and Simultaneous Localization and Mapping algorithms. These help the vehicle understand its environment while driving, with laser scanners (Velodyne), an IMU and GPS collecting the data that underpins localization in an unknown environment. Various methods have been studied and tested over the years for increasing the efficiency of registration and optimization, but the implementation of the NDT library for mapping and localization has been found to be faster and more accurate than conventional methods. The objective of this thesis is to establish a robust method of pose estimation for the vehicle by combining data from the laser sensor with data from the IMU and the GPS receiver on the vehicle. The initial position estimate is obtained by generating a 3D map using the Normal Distributions Transform, estimating the position with the NDT localization algorithm, and using the GPS data collected by driving the vehicle in an external environment. The results presented explain and verify the stated hypothesis and show a comparison of the implemented localization algorithm with the GPS receiver data available on the vehicle while driving.
Autonoma fordon har på senare tid utvecklats till robusta fordon som kan köra sig själva utan hjälp av en människa, detta har möjliggjorts genom användandet av sensorer och algoritmer som utför lokalisering och kartläggning samtidigt (SLAM). Dessa sensorer och algoritmer hjälper fordonet att förstå dess omgivning medan det kör och tillsammans med laser skanners (Velodyne), IMU'er och GPS läggs grunden för att kunna utföra lokalisering i en okänd miljö. Ett flertal metoder har studerats och testats för att förbättra effektiviteten av registrering och optimering under åren men implementationen av NDT biblioteket för kartläggning och lokalisering har visat sig att vara snabbt och mer exakt jämfört med konventionella metoder. Målet med detta examensarbete är att hitta en robust metod för uppskatta pose genom att kombinera data från laser sensorn, en uppskattning av den ursprungliga positionen som fås genom att generera en 3D karta med hjälp av normalfördelningstransformen och GPS data insamlad från körningar i en extern miljö. Resultaten som presenteras beskriver och verifierar den hypotes som läggs fram och visar jämförelsen av den implementerade lokaliseringsalgoritmen med GPS data tillgänglig på fordonet under körning.
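The scoring step at the heart of the NDT localization described above — fitting one Gaussian per map voxel and evaluating transformed scan points against it — can be sketched in 2D as follows; the cell size and pose parametrization are illustrative assumptions, not the NDT library implementation.

import numpy as np

def build_ndt_cells(points, cell=1.0):
    # Group 2D map points into grid cells and fit a Gaussian (mean, inverse covariance) per cell.
    buckets = {}
    for p, key in zip(points, map(tuple, np.floor(points / cell).astype(int))):
        buckets.setdefault(key, []).append(p)
    model = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) >= 3:                                  # need a few points for a stable covariance
            cov = np.cov(pts.T) + 1e-3 * np.eye(2)         # small regularization
            model[key] = (pts.mean(axis=0), np.linalg.inv(cov))
    return model

def ndt_score(scan, pose, model, cell=1.0):
    # Likelihood-style score of a scan transformed by pose = (x, y, yaw) against the NDT map.
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
    pts = scan @ R.T + np.array([x, y])
    score = 0.0
    for p in pts:
        key = tuple(np.floor(p / cell).astype(int))
        if key in model:
            mu, icov = model[key]
            d = p - mu
            score += np.exp(-0.5 * d @ icov @ d)
    return score

A localizer in this spirit would maximize ndt_score over candidate poses seeded by the GPS/IMU prediction, for example by gradient-based or grid refinement.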
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Sun, Wenjuan. "Quantification of Morphological Characteristics of Aggregates at Multiple Scales". Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/71787.

Texto completo
Resumen
Properties of aggregates are affected by their morphological characteristics, including shape factors, angularity and texture. These morphological characteristics influence the aggregates' mutual interactions and the strength of the bonds between the aggregates and the binder. These interactions and bond strengths are in turn vital to the rheological properties of mixtures, related to their workability and friction resistance. As a consequence, quantification of aggregate morphological characteristics is essential for better quality control and performance improvement of aggregates. With advances in hardware and software, computational capability has reached the stage where morphological characteristics can be quantified rapidly at multiple scales using digital imaging techniques. Various computational algorithms have been developed, including the Hough transform, the Fourier transform and wavelet analysis. Among these, the Fourier transform has been applied in various areas by representing the original image/signal in the spatial domain as a summation of basis functions of varying magnitudes, frequencies and phases in the frequency domain. This dissertation is dedicated to developing a two-dimensional Fourier transform (FFT2) method, used with the Fourier Transform Interferometry (FTI) system, that is capable of quantifying aggregate morphological characteristics at different scales. In this dissertation, the FFT2 method is adopted to quantify angularity and texture of aggregates based on surface coordinates acquired from digital images in the FTI system. This is followed by a comprehensive review of prevalent aggregate imaging techniques for the quantification of aggregate morphological characteristics, including the second generation of the Aggregate Image Measurement System (AIMS II), the University of Illinois Aggregate Image Analyzer (UIAIA), and the FTI system, with recommendations on the use of each system for measuring the morphological parameters of interest. After that, the influence of parent rock, crushing, and abrasion/polishing on aggregate morphological characteristics is evaluated. Atomic-scale roughness is calculated for the crystal structures of five representative minerals belonging to four mineral types (i.e., α-quartz for quartzite/granite/gravel/aplite, dolomite for dolomite, calcite for limestone, haematite and magnetite for iron ore); the roughness ranking at the atomic scale is further compared with the surface texture ranking at the macroscale, based on measurement results from the FTI system and AIMS II. Morphological characteristics of aggregates before and after the crushing test and the micro-Deval test are measured to quantitatively evaluate the influence of the crushing and abrasion/polishing processes on aggregate morphology.
Ph. D.
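As an illustration of how a two-dimensional Fourier transform separates morphological scales, here is a minimal sketch of a radially averaged power spectrum of a surface height map; the band limits splitting "form/angularity" from "texture" are illustrative assumptions, not the thresholds used with the FTI system.

import numpy as np

def radial_spectrum(height_map):
    # 2D FFT of the surface heights, folded into a 1D radially averaged power spectrum.
    F = np.fft.fftshift(np.fft.fft2(height_map - height_map.mean()))
    power = np.abs(F) ** 2
    h, w = height_map.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    return np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())

def band_energy(spectrum, lo, hi):
    # Energy in a radial-frequency band: low bands relate to overall form/angularity,
    # high bands to fine surface texture.
    return spectrum[lo:hi].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    surface = rng.normal(size=(128, 128))          # stand-in for FTI surface coordinates
    spec = radial_spectrum(surface)
    print("low-band (form/angularity):", band_energy(spec, 1, 8))
    print("high-band (texture):", band_energy(spec, 32, 64))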
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Gabriel, Eric [Verfasser]. "Automatic Multi-Scale and Multi-Object Pedestrian and Car Detection in Digital Images Based on the Discriminative Generalized Hough Transform and Deep Convolutional Neural Networks / Eric Gabriel". Kiel : Universitätsbibliothek Kiel, 2019. http://d-nb.info/1187732826/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Kiyohara, Takayuki. "Generation of heralded multi-photon parallel state for realizing a large-scale photonic quantum circuit". Kyoto University, 2020. http://hdl.handle.net/2433/253284.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Decombas, Marc. "Compression vidéo très bas débit par analyse du contenu". Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0067/document.

Texto completo
Resumen
L’objectif de cette thèse est de trouver de nouvelles méthodes de compression sémantique compatibles avec un encodeur classique tel que H.264/AVC. L’objectif principal est de maintenir la sémantique et non pas la qualité globale. Un débit cible de 300 kb/s a été fixé pour des applications de sécurité et de défense. Pour cela, une chaîne complète de compression a dû être réalisée. Une étude et des contributions sur les modèles de saillance spatio-temporelle ont été réalisées avec pour objectif d’extraire l’information pertinente. Pour réduire le débit, une méthode de redimensionnement dénommée « seam carving » a été combinée à un encodeur H.264/AVC. En outre, une métrique combinant les points SIFT et le SSIM a été réalisée afin de mesurer la qualité des objets sans être perturbée par les zones de moindre importance contenant la majorité des artefacts. Une base de données pouvant être utilisée pour des modèles de saillance mais aussi pour de la compression est proposée avec des masques binaires. Les différentes approches ont été validées par divers tests. Une extension de ces travaux pour des applications de résumé vidéo est proposée.
The objective of this thesis is to find new methods for semantic video compression compatible with a traditional encoder like H.264/AVC. The main objective is to maintain the semantics and not the global quality. A target bitrate of 300 kb/s has been fixed for defence and security applications. To this end, a complete compression chain has been proposed. A study of, and new contributions to, spatio-temporal saliency models have been carried out to extract the important information in the scene. To reduce the bitrate, a resizing method named seam carving has been combined with the H.264/AVC encoder. Also, a metric combining SIFT points and SSIM has been created to measure the quality of objects without being disturbed by less important areas containing mostly artifacts. A database that can be used for testing saliency models but also for video compression has been proposed, containing sequences with their manually extracted binary masks. All the different approaches have been thoroughly validated by different tests. An extension of this work to video summarization applications has also been proposed.
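The seam-carving step can be illustrated with a minimal vertical-seam removal based on dynamic programming; a plain gradient energy stands in here for the spatio-temporal saliency map developed in the thesis.

import numpy as np

def vertical_seam(energy):
    # Dynamic programming: cumulative minimal energy, then backtrack one seam
    # (one column index per row).
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        up = cost[y - 1]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, up), right)
    seam = np.zeros(h, dtype=int)
    seam[-1] = np.argmin(cost[-1])
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + np.argmin(cost[y, lo:hi])
    return seam

def remove_seam(img, seam):
    # Delete one pixel per row along the seam.
    h, _ = img.shape
    return np.array([np.delete(img[y], seam[y]) for y in range(h)])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    gray = rng.random((32, 48))
    gy, gx = np.gradient(gray)
    energy = np.abs(gx) + np.abs(gy)          # a saliency map would be added to this energy
    smaller = remove_seam(gray, vertical_seam(energy))
    print(gray.shape, "->", smaller.shape)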
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Auvray, Alexis. "Contributions à l'amélioration de la performance des conditions aux limites approchées pour des problèmes de couche mince en domaines non réguliers". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEC018/document.

Texto completo
Resumen
Les problèmes de transmission avec couche mince sont délicats à approcher numériquement, en raison de la nécessité de construire des maillages à l’échelle de la couche mince. Il est courant d’éviter ces difficultés en usant de problèmes avec conditions aux limites approchées — dites d’impédance. Si l’approximation des problèmes de transmission par des problèmes d’impédance s’avère performante dans le cas de domaines réguliers, elle l’est beaucoup moins lorsque ceux-ci comportent des coins ou arêtes. L’objet de cette thèse est de proposer de nouvelles conditions d’impédance, plus performantes, afin de corriger cette perte de performance. Pour cela, les développements asymptotiques des différents problèmes-modèles sont construits et étudiés afin de localiser avec précision l’origine de la perte, en lien avec les profils singuliers associés aux coins et arêtes. De nouvelles conditions d’impédance sont construites, de type Robin multi-échelle ou Venctel. D’abord étudiées en dimension 2, elles sont ensuite généralisées à certaines situations en dimension 3. Des simulations viennent confirmer l’efficience des méthodes théoriques
Transmission problems with a thin layer are delicate to approximate numerically, because of the necessity to build meshes at the scale of the thin layer. It is common to avoid these difficulties by using problems with approximate boundary conditions, also called impedance conditions. Whereas the approximation of transmission problems by impedance problems turns out to be successful in the case of smooth domains, the situation is less satisfactory in the presence of corners and edges. The goal of this thesis is to propose new, more efficient impedance conditions to correct this lack of performance. For that purpose, the asymptotic expansions of the various model problems are built and studied to locate exactly the origin of the loss, in connection with the singular profiles associated with corners and edges. New impedance conditions are built, of multi-scale Robin or Venctel type. At first studied in dimension 2, they are then generalized to certain situations in dimension 3. Simulations have been carried out to confirm the efficiency of the theoretical methods.
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Ciftci, Gunce. "Shape Analysis Using Contour-based And Region-based Approaches". Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1092121/index.pdf.

Texto completo
Resumen
The user of an image database often wishes to retrieve all images similar to the one (s)he already has. In this thesis, shape analysis methods for shape retrieval are investigated. Shape analysis methods can be classified into two groups, contour-based and region-based, according to the shape information used. In such a classification, the curvature scale space (CSS) representation and the angular radial transform (ART) are promising contour-based and region-based methods for shape similarity retrieval, respectively. The CSS representation operates by decomposing the shape contour into convex and concave sections; the CSS descriptor is extracted from the curvature zero-crossing behaviour of the shape boundary while the boundary is smoothed with a Gaussian filter. The ART decomposes the shape region onto a number of orthogonal 2-D basis functions defined on the unit disk, and the ART descriptor is extracted from the magnitudes of the ART coefficients. These methods are implemented for similarity comparison of binary images, and the retrieval performance of the descriptors is investigated for varying numbers of boundary sampling points and orders of ART coefficients. The experiments use 1000 images from the MPEG-7 Core Experiments Shape-1 data set. Results show that for different classes of shape, different descriptors are more successful; when the choice of approach is made according to the properties of the query shape, similarity retrieval performance increases.
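A discrete approximation of the ART decomposition on the unit disk can be sketched as follows; the grid resolution, number of coefficients and normalization by the first coefficient follow the usual MPEG-7 description, and details may differ from the thesis implementation.

import numpy as np

def art_descriptor(binary_shape, n_max=3, m_max=12):
    # Discrete Angular Radial Transform: F_nm = sum over disk pixels of conj(V_nm) * f,
    # with V_nm(rho, theta) = R_n(rho) * exp(j*m*theta) / (2*pi),
    # R_0(rho) = 1 and R_n(rho) = 2*cos(pi*n*rho) for n > 0.
    h, w = binary_shape.shape
    yy, xx = np.indices((h, w))
    cx, cy, scale = (w - 1) / 2.0, (h - 1) / 2.0, max(h, w) / 2.0
    x, y = (xx - cx) / scale, (yy - cy) / scale
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    f = binary_shape.astype(float) * (rho <= 1.0)          # restrict support to the unit disk
    coeffs = np.zeros((n_max, m_max))
    for n in range(n_max):
        Rn = np.ones_like(rho) if n == 0 else 2.0 * np.cos(np.pi * n * rho)
        for m in range(m_max):
            Vnm = Rn * np.exp(1j * m * theta) / (2.0 * np.pi)
            coeffs[n, m] = np.abs(np.sum(np.conj(Vnm) * f))
    return (coeffs / coeffs[0, 0]).ravel()                 # magnitudes normalized by F_00

if __name__ == "__main__":
    yy, xx = np.indices((101, 101))
    disk = ((xx - 50) ** 2 + (yy - 50) ** 2 <= 30 ** 2).astype(float)
    print(art_descriptor(disk)[:5])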
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Gérossier, Franck. "Localisation et cartographie simultanées en environnement extérieur à partir de données issues d'un radar panoramique hyperfréquence". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00864181.

Texto completo
Resumen
Le SLAM, " Simultaneous Localisation And Mapping ", représente à l'heure actuelle l'une des principales thématiques investiguées dans le domaine des robots mobiles autonomes. Il permet, à l'aide de capteurs extéroceptifs (laser, caméra, radar, etc.) et proprioceptifs (odomètre, gyromètre, etc.), de trouver l'orientation et la localisation d'un robot dans un environnement extérieur vaste, inconnu ou modifié, avec la possibilité de créer une carte au fur et à mesure des déplacements du véhicule. Les travaux de thèse décrits dans ce manuscrit s'intègrent dans ce courant de recherche. Ils visent à développer un SLAM innovant qui utilise un radar à modulation de fréquence continue " FMCW " comme capteur extéroceptif. Ce capteur est insensible aux conditions climatiques et possède une portée de détection importante. Néanmoins, c'est un capteur tournant qui, dans une utilisation mobile, va fournir des données corrompues par le déplacement du véhicule. Pour mener à bien ces travaux, nous avons proposés différentes contributions : une correction de la distorsion par l'utilisation de capteurs proprioceptifs ; le développement d'une technique de localisation et cartographie simultanées nommée RS-SLAM-FMT qui effectue un scan matching sur les observations et utilise un algorithme estimatif de type EKF-SLAM ; l'utilisation, pour la première fois en SLAM, de la mise en correspondance par Transformée de Fourier-Mellin pour réaliser l'opération de scan matching ; la création d'un outil expérimental pour déterminer la matrice de covariance associée aux observations ; des tests de robustesse de l'algorithme dans des conditions d'utilisation réelles : dans des zones avec un faible nombre de points d'intérêts, sur des parcours effectués à vitesse élevée, dans des environnements péri-urbains avec une forte densité d'objets mobiles ; la réalisation d'une application temps réel pour le test du procédé sur un véhicule d'exploration qui se déplace dans un environnement extérieur vaste.
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Dardas, Nasser Hasan Abdel-Qader. "Real-time Hand Gesture Detection and Recognition for Human Computer Interaction". Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23499.

Texto completo
Resumen
This thesis focuses on bare hand gesture recognition by proposing a new architecture to solve the problem of real-time vision-based hand detection, tracking, and gesture recognition for interaction with an application via hand gestures. The first stage of our system detects and tracks a bare hand in a cluttered background using face subtraction, skin detection and contour comparison. The second stage recognizes hand gestures using bag-of-features and multi-class Support Vector Machine (SVM) algorithms. Finally, a grammar has been developed to generate gesture commands for application control. Our hand gesture recognition system consists of two steps: offline training and online testing. In the training stage, after extracting the keypoints of every training image using the Scale Invariant Feature Transform (SIFT), a vector quantization technique maps the keypoints of every training image into a unified dimensional histogram vector (bag-of-words) after K-means clustering. This histogram is treated as an input vector for a multi-class SVM to build the classifier. In the testing stage, for every frame captured from a webcam, the hand is detected using our algorithm; the keypoints are then extracted from the small image that contains the detected hand posture and fed into the cluster model to map them into a bag-of-words vector, which is fed into the multi-class SVM classifier to recognize the hand gesture. Another hand gesture recognition system was proposed using Principal Component Analysis (PCA). The most significant eigenvectors and the weights of the training images are determined. In the testing stage, the hand posture is detected for every frame using our algorithm; the small image that contains the detected hand is then projected onto the most significant eigenvectors of the training images to form its test weights. Finally, the minimum Euclidean distance between the test weights and the training weights of each training image is used to recognize the hand gesture. Two applications of gesture-based interaction with a 3D gaming virtual environment were implemented. The exertion videogame makes use of a stationary bicycle as one of the main inputs for game playing: the user can control and direct left-right movement and shooting actions in the game by a set of hand gesture commands, while in the second game the user can control and direct a helicopter over the city by a set of hand gesture commands.
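The offline training path (SIFT keypoints → K-means vocabulary → bag-of-words histograms → multi-class SVM) can be sketched as follows; the vocabulary size and SVM settings are placeholders, not the values used in the thesis.

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(image_gray):
    # Keypoint descriptors for one grayscale image (empty array if none are found).
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(image_gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bag_of_words(desc, kmeans):
    # Map an image's descriptors to a normalized visual-word histogram.
    hist = np.zeros(kmeans.n_clusters)
    if len(desc):
        for w in kmeans.predict(desc):
            hist[w] += 1
        hist /= hist.sum()
    return hist

def train_gesture_classifier(train_images, labels, vocab_size=100):
    # Build the visual vocabulary on all training descriptors, then train a multi-class SVM.
    all_desc = np.vstack([sift_descriptors(img) for img in train_images])
    kmeans = KMeans(n_clusters=vocab_size, n_init=4, random_state=0).fit(all_desc)
    X = np.array([bag_of_words(sift_descriptors(img), kmeans) for img in train_images])
    clf = SVC(kernel='rbf', C=10.0).fit(X, labels)     # multi-class SVM (one-vs-one)
    return kmeans, clf

At test time, each detected hand window would go through the same sift_descriptors and bag_of_words steps before being classified.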
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Walser, Stefan Franz [Verfasser], Rafael [Akademischer Betreuer] [Gutachter] Macián-Juan y Rafael [Gutachter] Miró-Herrero. "Development and Application of a Fourier Transform based Methodology for the Identification of Instability in Boiling Water Reactors at a local Scale / Stefan Franz Walser ; Gutachter: Rafael Miró-Herrero, Rafael Macián-Juan ; Betreuer: Rafael Macián-Juan". München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1136078088/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Ramos, Lucas Alexandre [UNESP]. "Análise multiescala de formas planas baseada em estatísticas da transformada de Hough". Universidade Estadual Paulista (UNESP), 2016. http://hdl.handle.net/11449/143872.

Texto completo
Resumen
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Atualmente, dada a difusão dos computadores, a tarefa de se reconhecer padrões visuais está sendo cada vez mais automatizada, em especial para tratar a vasta e crescente quantidade de imagens digitais existentes. Aplicações de diversas áreas como biometria, recuperação de imagens baseada em conteúdo e diagnóstico médico, se valem do processamento de imagens, bem como de técnicas de extração e análise de características das mesmas, a fim de identificar pessoas, objetos, gestos, textos, etc. As características básicas que são utilizadas para a análise de imagens são: cor, textura e forma. Recentemente, foi proposto um novo descritor de formas denominado HTS (Hough Transform Statistics), o qual se baseia no espaço de Hough para representar e reconhecer objetos em imagens por suas formas. Os resultados obtidos pelo HTS sobre bases de imagens públicas têm mostrado que este novo descritor, além de apresentar altas taxas de acurácia, melhores do que muitos descritores tradicionais propostos na literatura, é rápido, pois tem um algoritmo de complexidade linear. O objetivo deste trabalho foi explorar as possibilidades de representação multiescala do HTS e, assim, propor novos descritores de formas. Escala é um parâmetro essencial em Visão Computacional e a teoria de espaço-escala refere-se ao espaço formado quando se observa, simultaneamente, os aspectos espaciais de uma imagem em várias escalas, sendo a escala a terceira dimensão. Os novos métodos multiescala propostos foram avaliados sobre várias bases de dados e seus desempenhos foram comparados com o desempenho do HTS e com os principais descritores de formas encontrados na literatura. Resultados experimentais mostraram que os novos descritores propostos neste trabalho são mais rápidos e em alguns casos também mais precisos.
Currently, given the widespread use of computers throughout society, the task of recognizing visual patterns is being more and more automated, in particular to handle the large and growing amount of digital images available. Applications in many areas, such as biometrics, content-based image retrieval, and medical diagnosis, make use of image processing, as well as techniques for the extraction and analysis of image characteristics, in order to identify persons, objects, gestures, texts, etc. The basic features used for image analysis are color, texture and shape. Recently, a new shape descriptor called HTS (Hough Transform Statistics) was proposed, which is based on the Hough space to represent and recognize objects in images by their shapes. The results obtained by HTS on public image databases have shown that this shape descriptor, besides reaching high accuracy levels, better than many traditional shape descriptors proposed in the literature, is fast, since it has an algorithm of linear complexity. In this dissertation we explore the possibilities of multiscale and scale-space representations of this new shape descriptor. Scale is a key parameter in Computer Vision, and scale-space theory refers to the space formed when the spatial aspects of an image are observed simultaneously at several scales, with scale as the third dimension. The multiscale HTS methods were evaluated on the same databases and their performance was compared with the main shape descriptors found in the literature and with the monoscale HTS. Experimental results showed that these new descriptors are faster and can also be more accurate in some cases.
FAPESP: 2014/10611-0
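To give a flavour of what statistics over a Hough space look like, here is a loose sketch of a (rho, theta) accumulator built from a shape contour, followed by simple per-angle statistics; the actual HTS feature set and its multiscale variants are defined in the thesis and its references, so the choices below are purely illustrative.

import numpy as np

def hough_accumulator(contour_points, n_theta=180, n_rho=128):
    # Standard (rho, theta) Hough transform of a set of 2D contour points,
    # normalized so that the resulting statistics are comparable across shapes.
    pts = contour_points - contour_points.mean(axis=0)        # translation invariance
    pts = pts / (np.abs(pts).max() + 1e-9)                     # rough scale normalization
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rho = pts[:, 0:1] * np.cos(thetas) + pts[:, 1:2] * np.sin(thetas)   # (n_points, n_theta)
    bins = np.clip(((rho + np.sqrt(2)) / (2 * np.sqrt(2)) * n_rho).astype(int), 0, n_rho - 1)
    acc = np.zeros((n_rho, n_theta))
    for j in range(n_theta):
        np.add.at(acc[:, j], bins[:, j], 1.0)
    return acc / acc.sum()

def hough_statistics(acc):
    # One illustrative choice of per-angle statistics over the accumulator columns.
    return np.concatenate([acc.mean(axis=0), acc.std(axis=0)])

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    astroid = np.c_[np.cos(t) ** 3, np.sin(t) ** 3]            # synthetic contour
    print(hough_statistics(hough_accumulator(astroid))[:5])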
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Sahin, Yavuz. "A Programming Framework To Implement Rule-based Target Detection In Images". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12610213/index.pdf.

Texto completo
Resumen
An expert system is useful when conventional programming techniques fall short of capturing human expert knowledge and making decisions using this information. In this study, we describe a framework for capturing expert knowledge in the form of a decision tree, which can then be used for making decisions based on the captured knowledge. The framework proposed in this study is generic and can be used to create domain-specific expert systems for different problems. Features are created or processed by the nodes of the decision tree and a final conclusion is reached for each feature. The framework supplies three types of nodes to construct a decision tree. The first type is the decision node, which guides the search path with its answers. The second type is the operator node, which creates new features using its inputs. The last type of node is the end node, which corresponds to a conclusion about a feature. Once the nodes of the tree are developed, the user can interactively create the decision tree and run the supplied inference engine to collect the result on a specific problem. The proposed framework is demonstrated with two case studies, "Airport Runway Detection in High Resolution Satellite Images" and "Urban Area Detection in High Resolution Satellite Images". In these studies, linear features are used for structural decisions and Scale Invariant Feature Transform (SIFT) features are used for testing the existence of man-made structures. A toy rendering of the three node types is sketched below.
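The sketch below illustrates the three node types and a minimal inference engine; the class names, the tuple convention for operator nodes and the example tree are assumptions for illustration, not the framework's actual API.

from abc import ABC, abstractmethod

class Node(ABC):
    @abstractmethod
    def evaluate(self, feature):
        """Return the next node to visit, or a final conclusion."""

class DecisionNode(Node):
    # Guides the search path with a yes/no test on the feature.
    def __init__(self, test, if_true, if_false):
        self.test, self.if_true, self.if_false = test, if_true, if_false
    def evaluate(self, feature):
        return self.if_true if self.test(feature) else self.if_false

class OperatorNode(Node):
    # Creates a new (derived) feature from its input and passes it on.
    def __init__(self, op, child):
        self.op, self.child = op, child
    def evaluate(self, feature):
        return self.child, self.op(feature)

class EndNode(Node):
    # A conclusion about the feature.
    def __init__(self, label):
        self.label = label
    def evaluate(self, feature):
        return self.label

def infer(root, feature):
    # Walk the tree until an EndNode conclusion is reached.
    node = root
    while not isinstance(node, EndNode):
        result = node.evaluate(feature)
        if isinstance(result, tuple):          # OperatorNode returns (next_node, new_feature)
            node, feature = result
        else:
            node = result
    return node.evaluate(feature)

if __name__ == "__main__":
    # Toy tree: derive a feature's length, then decide.
    leaf_yes, leaf_no = EndNode("runway candidate"), EndNode("not a runway")
    root = OperatorNode(lambda f: len(f), DecisionNode(lambda n: n > 100, leaf_yes, leaf_no))
    print(infer(root, list(range(250))))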
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Van, Gaalen Joseph Frank. "Alternative Statistical Methods for Analyzing Geological Phenomena: Bridging the Gap Between Scientific Disciplines". Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3424.

Texto completo
Resumen
When we consider the nature of the scientific community in conjunction with typical economic circumstances, we find that there are two distinct paths for development. One path involves hypothesis testing and the evolution of strategies linked with iterations in equipment advances. A second, more complicated scenario can involve external influences, whether economic, political, or otherwise, such as the government closure of NASA's space program in 2011, which will no doubt influence research in associated fields. The following chapters are an account of two statistical techniques and of their importance to the two relatively unrelated geological fields of coastal geomorphology and groundwater hydrology. The first technique applies a multi-dimensional approach to defining water table response to precipitation in areas where precipitation can reasonably be assumed to be the only recharge. The second technique applies a high-resolution, multi-scalar approach to a geologic setting most often restricted to either high resolution locally or low resolution regionally. This technique uses time-frequency analysis to characterize cuspate patterns in LIDAR data and is introduced using examples from the Atlantic coast of Florida, United States. These techniques permit the efficient study of beachface landforms over many kilometers of coastline at multiple spatial scales. From a LIDAR image, a beach-parallel spatial series is generated; here, this series is the shore-normal position of a specific elevation (contour line). Well-established time-frequency analysis techniques, wavelet transforms and S-Transforms, are then applied to the spatial series. These methods yield results compatible with traditional methods and show that the approach is useful for capturing transitions in cuspate shapes. To apply this new method, a land-based LIDAR study allowing rapid high-resolution surveying was conducted on Melbourne Beach, Florida and Tairua Beach, New Zealand. Two different terrestrial scanning stations were compared and tested during the course of the field investigation. Significant cusp activity was observed at Melbourne Beach, and morphological observations and sediment analysis were used to study beach cusp morphodynamics at the site. Surveys at Melbourne were run ~500 m alongshore and sediment samples were collected intertidally over a five-day period. Beach cusp location within the larger-scale beach morphology is shown to directly influence cusp growth as either predominantly erosional or accretional. Sediment characteristics within the beach cusp morphology are reported coincident with cusp evolution. Variations in particle size distribution kurtosis are exhibited as the cusps evolve; however, no significant correlation is seen between grain size and position between horn and embayment. Toward the end of the study, a storm resulted in beach cusp destruction and increased sediment sorting. For the former technique, using the multi-dimensional approach, a test of a new method for improving forecasting of surficial aquifer system water level changes with rainfall is conducted. The results provide a more rigorous analysis of common predictive techniques and compare them with the results of the tested model. These results show that linear interpretations of response-to-rainfall data require a clarification of how large events distort prediction and how the binning of data can change the interpretation. Analyses show that binning groundwater recharge data in daily format, as is typically done, may be useful for quick interpretation but only describes how fast the system responds to an event, not the frequency of return of such a response. Without a secure grasp of the nonlinear nature of water table and rainfall data alike, any binning or isolation of specific data carries a potential for aliasing that must be accounted for in an interpretation. The new model is proven capable of supplanting current linear regression analyses as a more accurate means of prediction through the application of a multivariate technique. Furthermore, results show that in the Florida surficial aquifer system, response-to-rainfall ratios exhibit maxima most often linked with modal stage.
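The idea of applying a continuous wavelet transform to a beach-parallel spatial series can be sketched as follows; the Ricker wavelet, the synthetic 35 m cusp spacing and the 0.5 m alongshore sampling are assumptions for illustration, and the S-Transform used in the study is not shown.

import numpy as np

def ricker(points, a):
    # Ricker ("Mexican hat") wavelet of width parameter a, sampled on `points` samples.
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2) * (2 / (np.sqrt(3 * a) * np.pi ** 0.25))

def cwt_spatial_series(series, scales):
    # Continuous wavelet transform of a beach-parallel spatial series
    # (e.g. the shore-normal position of a contour line), one row per spatial scale.
    out = np.zeros((len(scales), len(series)))
    for i, a in enumerate(scales):
        w = ricker(min(10 * int(a), len(series)), a)
        out[i] = np.convolve(series, w, mode='same')
    return out

if __name__ == "__main__":
    x = np.arange(2048) * 0.5                       # alongshore distance (m), 0.5 m spacing assumed
    series = 2.0 * np.sin(2 * np.pi * x / 35.0)     # synthetic cuspate signal, ~35 m spacing
    power = cwt_spatial_series(series, scales=np.arange(4, 200, 4)) ** 2
    print("dominant scale row:", power.sum(axis=1).argmax())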
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Murtin, Chloé Isabelle. "Traitement d’images de microscopie confocale 3D haute résolution du cerveau de la mouche Drosophile". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI081/document.

Texto completo
Resumen
La profondeur possible d’imagerie en laser-scanning microscopie est limitée non seulement par la distance de travail des lentilles de objectifs mais également par la dégradation de l’image causée par une atténuation et une diffraction de la lumière passant à travers l’échantillon. Afin d’étendre cette limite, il est possible, soit de retourner le spécimen pour enregistrer les images depuis chaque côté, or couper progressivement la partie supérieure de l’échantillon au fur et à mesure de l‘acquisition. Les différentes images prises de l’une de ces manières doivent ensuite être combinées pour générer un volume unique. Cependant, des mouvements de l’échantillon durant les procédures d’acquisition engendrent un décalage non seulement sur en translation selon les axes x, y et z mais également en rotation autour de ces même axes, rendant la fusion entres ces multiples images difficile. Nous avons développé une nouvelle approche appelée 2D-SIFT-in-3D-Space utilisant les SIFT (scale Invariant Feature Transform) pour atteindre un recalage robuste en trois dimensions de deux images. Notre méthode recale les images en corrigeant séparément les translations et rotations sur les trois axes grâce à l’extraction et l’association de caractéristiques stables de leurs coupes transversales bidimensionnelles. Pour évaluer la qualité du recalage, nous avons également développé un simulateur d’images de laser-scanning microscopie qui génère une paire d’images 3D virtuelle dans laquelle le niveau de bruit et les angles de rotations entre les angles de rotation sont contrôlés avec des paramètres connus. Pour une concaténation précise et naturelle de deux images, nous avons également développé un module permettant une compensation progressive de la luminosité et du contraste en fonction de la distance à la surface de l’échantillon. Ces outils ont été utilisés avec succès pour l’obtention d’images tridimensionnelles de haute résolution du cerveau de la mouche Drosophila melanogaster, particulièrement des neurones dopaminergiques, octopaminergiques et de leurs synapses. Ces neurones monoamines sont particulièrement important pour le fonctionnement du cerveau et une étude de leur réseau et connectivité est nécessaire pour comprendre leurs interactions. Si une évolution de leur connectivité au cours du temps n’a pas pu être démontrée via l’analyse de la répartition des sites synaptiques, l’étude suggère cependant que l’inactivation de l’un de ces types de neurones entraine des changements drastiques dans le réseau neuronal
Although laser scanning microscopy is a powerful tool for obtaining thin optical sections, the possible depth of imaging is limited by the working distance of the microscope objective, but also by the image degradation caused by the attenuation of both the excitation laser beam and the light emitted from the fluorescence-labelled objects. Several workaround techniques have been employed to overcome this problem, such as recording the images from both sides of the sample, or progressively cutting off the sample surface. The different views must then be combined into a unique volume. However, a straightforward concatenation is often not possible, because of the small movements that occur during the acquisition procedure, not only in translation along the x, y and z axes but also in rotation around those axes, which makes the fusion difficult. To address this problem we implemented a new algorithm called 2D-SIFT-in-3D-Space, using SIFT (Scale Invariant Feature Transform) to achieve a robust registration of big image stacks. Our method registers the images by correcting rotations and translations around the three axes separately, through the extraction and matching of stable features in 2D cross-sections. In order to evaluate the registration quality, we created a simulator that generates artificial images mimicking laser scanning image stacks: it produces a mock pair of image stacks, one of which is derived from the other by a rotation with known angles and the addition of known noise. For a precise and natural-looking concatenation of the two images, we also developed a module that progressively corrects the sample brightness and contrast depending on the distance to the sample surface. These tools were successfully used to generate three-dimensional high-resolution images of the brain of the fly Drosophila melanogaster, in particular its octopaminergic and dopaminergic neurons and their synapses. These monoamine neurons appear to be determinant in the correct operation of the central nervous system, and a precise and systematic analysis of their evolution and interactions is necessary to understand its mechanisms. While an evolution of their connectivity over time could not be demonstrated through the analysis of pre-synaptic sites, our study suggests that the inactivation of one of these neuron types triggers drastic changes in the neural network.
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

May, Michael. "Data analytics and methods for improved feature selection and matching". Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/data-analytics-and-methods-for-improved-feature-selection-and-matching(965ded10-e3a0-4ed5-8145-2af7a8b5e35d).html.

Texto completo
Resumen
This work focuses on analysing and improving feature detection and matching. After creating an initial framework of study, four main areas of work are researched. These areas make up the main chapters within this thesis and focus on the Scale Invariant Feature Transform (SIFT). The preliminary analysis of the SIFT investigates how this algorithm functions, including an analysis of the SIFT feature descriptor space and an investigation into the noise properties of the SIFT. It introduces a novel use of the a contrario methodology and shows the success of this method as a way of discriminating between images which are likely to contain corresponding regions and images which do not. The parameter analysis of the SIFT uses both parameter sweeps and genetic algorithms as an intelligent means of setting the SIFT parameters for different image types, utilising a GPGPU implementation of SIFT. The results demonstrate which parameters are more important when optimising the algorithm and which areas within the parameter space to focus on when tuning the values. A multi-exposure, High Dynamic Range (HDR) feature-fusion process has been developed in which SIFT features are matched within high-contrast scenes. Bracketed exposure images are analysed, and features are extracted and combined from different images to create a set of features which describe a larger dynamic range. These features are shown to reduce the effects of noise and artefacts that are introduced when extracting features from HDR images directly, and to have superior image matching performance. The final area is the development of a novel, 3D-based SIFT weighting technique which utilises the 3D data from a pair of stereo images to cluster and classify matched SIFT features. Weightings are applied to the matches based on the 3D properties of the features and how they cluster, in order to discriminate between correct and incorrect matches using the a contrario methodology. The results show that the technique provides a method for discriminating between correct and incorrect matches and that the a contrario methodology has potential for future investigation as a method for correct feature match prediction.
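A parameter sweep of the kind described above can be sketched with OpenCV's SIFT implementation; the parameter ranges, the ratio-test threshold and the match-count score are illustrative assumptions, and the genetic-algorithm and GPGPU aspects are not shown.

import cv2
import itertools
import numpy as np

def sweep_sift_parameters(img_a, img_b):
    # Grid sweep over a few SIFT parameters, scored by the number of ratio-test matches
    # between a pair of grayscale uint8 images known to overlap.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    results = []
    for contrast, edge, sigma in itertools.product([0.02, 0.04, 0.08], [5, 10, 20], [1.2, 1.6, 2.0]):
        sift = cv2.SIFT_create(contrastThreshold=contrast, edgeThreshold=edge, sigma=sigma)
        _, d1 = sift.detectAndCompute(img_a, None)
        _, d2 = sift.detectAndCompute(img_b, None)
        good = 0
        if d1 is not None and d2 is not None and len(d1) >= 2 and len(d2) >= 2:
            pairs = matcher.knnMatch(d1, d2, k=2)
            good = sum(1 for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance)
        results.append(((contrast, edge, sigma), good))
    return sorted(results, key=lambda r: -r[1])        # best-scoring parameter triple first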
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Dolník, Matej. "Lerchova věta v teorii časových škál a její důsledky pro zlomkový kalkulus". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318152.

Texto completo
Resumen
The main focus of this master's thesis is the study of the generalized nabla Laplace transform on time scales and its uniqueness, including a proof of uniqueness and an application of this uniqueness result in fractional calculus on time scales.
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Nguyen, Thi trang. "Contribution to peroidic homogenization of a spectral problem and of the wave equation". Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2027/document.

Texto completo
Resumen
Dans cette thèse, nous présentons des résultats d’homogénéisation périodique d’un problème spectral et de l’équation d’ondes de Bloch. Il permet de modéliser les ondes à basse et haute fréquences. La partie modèle à basse fréquence est bien connu et n’est pas donc abordée dans ce travail. A contrario ; la partie à haute fréquence du modèle, qui représente des oscillations aux échelles microscopiques et macroscopiques, est un problème laissé ouvert. En particulier, les conditions aux limites de l’équation macroscopique à hautes fréquences établies dans [36] n’étaient pas connues avant le début de la thèse. Ce dernier travail apporte trois contributions principales. Les deux premières contributions, portent sur le comportement asymptotique du problème d’homogénéisation périodique du problème spectral et de l’équation des ondes en une dimension. La troisième contribution consiste en une extension du modèle du problème spectral posé dans une bande bi dimensionnelle et bornée. Le résultat d’homogénéisation comprend des effets de couche limite qui se produisent dans les conditions aux limites de l’équation macroscopique à haute fréquence
In this dissertation, we present the periodic homogenization of a spectral problem and of the wave equation with periodic, rapidly varying coefficients in a bounded domain. The asymptotic behavior is addressed using a method of Bloch wave homogenization, which allows modeling of both the low- and high-frequency waves. The low-frequency part is well known and is not a new point here. In contrast, the high-frequency part of the model, which represents oscillations occurring at the microscopic and macroscopic scales, was not well understood. In particular, the boundary conditions of the high-frequency macroscopic equation established in [36] were not known prior to the commencement of this thesis. The latter brings three main contributions. The first two contributions are about the asymptotic behavior of the periodic homogenization of the spectral problem and of the wave equation in one dimension. The third contribution consists of an extension of the model for the spectral problem to a thin, bounded two-dimensional strip Ω ⊂ ℝ² of thickness ε. The homogenization result includes boundary layer effects occurring in the boundary conditions of the high-frequency macroscopic equation.
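For context, the Bloch waves referred to here come from a family of shifted cell eigenvalue problems; a standard formulation (with generic notation, not quoted from the thesis) reads:

\[
-(\nabla_y + i\theta)\cdot\big(a(y)\,(\nabla_y + i\theta)\,\psi_n(y;\theta)\big) = \lambda_n(\theta)\,\psi_n(y;\theta),
\qquad y \in Y, \quad \psi_n(\cdot\,;\theta)\ Y\text{-periodic},
\]

where $Y$ is the periodicity cell, $a(y)$ the periodic coefficient and $\theta$ the Bloch parameter. The Bloch waves $\psi_n(y;\theta)\,e^{i\theta\cdot y}$ then act as generalized eigenfunctions of the periodic operator, with the low-frequency macroscopic model governed by the behaviour of the first eigenvalue $\lambda_1(\theta)$ near $\theta = 0$ and the high-frequency part involving the higher eigenvalues.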
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Leoputra, Wilson Suryajaya. "Video foreground extraction for mobile camera platforms". Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/1384.

Texto completo
Resumen
Foreground object detection is a fundamental task in computer vision with many applications in areas such as object tracking, event identification, and behaviour analysis. Most conventional foreground object detection methods work only in stable illumination environments using fixed cameras. In real-world applications, however, it is often the case that the algorithm needs to operate under the following challenging conditions: drastic lighting changes, object shape complexity, moving cameras, low frame capture rates, and low resolution images. This thesis presents four novel approaches for foreground object detection on real-world datasets using cameras deployed on moving vehicles. The first problem addresses passenger detection and tracking for public transport buses, investigating the problems of changing illumination conditions and low frame capture rates. Our approach integrates a stable SIFT (Scale Invariant Feature Transform) background seat modelling method with a human shape model in a weighted Bayesian framework to detect passengers. To deal with the problem of tracking multiple targets, we employ the Reversible Jump Markov Chain Monte Carlo tracking algorithm. Using an SVM classifier, appearance transformation models capture changes in the appearance of the foreground objects across two consecutive frames under low frame rate conditions. In the second problem, we present a system for pedestrian detection in scenes captured by a mobile bus surveillance system. It integrates scene localization, foreground-background separation, and pedestrian detection modules into a unified detection framework. The scene localization module performs a two-stage clustering of the video data. In the first stage, SIFT homography is applied to cluster frames in terms of their structural similarity, and the second stage further clusters these aligned frames according to consistency in illumination. This produces clusters of images that differ from one another in viewpoint and lighting. A kernel density estimation (KDE) technique for colour and gradient is then used to construct background models for each image cluster, which are further used to detect candidate foreground pixels; finally, pedestrians are detected using a hierarchical template matching approach. In addition to the second problem, we present three direct pedestrian detection methods that extend the HOG (Histogram of Oriented Gradient) techniques (Dalal and Triggs, 2005) and provide a comparative evaluation of these approaches. The three approaches are: a) a new histogram feature, formed by the weighted sum of both the gradient magnitude and the filter responses from a set of elongated Gaussian filters (Leung and Malik, 2001) corresponding to the quantised orientation, which we refer to as the Histogram of Oriented Gradient Banks (HOGB) approach; b) the codebook-based HOG feature with a branch-and-bound (efficient subwindow search) algorithm (Lampert et al., 2008); and c) the codebook-based HOGB approach. In the third problem, a unified framework that combines 3D and 2D background modelling is proposed to detect scene changes using a camera mounted on a moving vehicle. The 3D scene is first reconstructed from a set of videos taken at different times; the 3D background modelling identifies inconsistent scene structures as foreground objects. For the 2D approach, foreground objects are detected using a spatio-temporal MRF algorithm. Finally, the 3D and 2D results are combined using morphological operations. The significance of this research is that it provides basic frameworks for automatic large-scale mobile surveillance applications and facilitates many higher-level applications such as object tracking and behaviour analysis.
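For reference, the baseline HOG descriptor that the HOGB variants extend can be computed as follows (scikit-image implementation; the window and cell sizes are the usual Dalal–Triggs defaults, assumed here, and the elongated-Gaussian filter bank of HOGB is not shown).

import numpy as np
from skimage.feature import hog

def hog_window_features(gray_window):
    # Standard HOG descriptor for one detection window; the HOGB variant described above
    # would additionally accumulate elongated-Gaussian filter-bank responses into the
    # same quantised-orientation bins.
    return hog(gray_window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys', feature_vector=True)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    window = rng.random((128, 64))              # typical pedestrian window size
    print("descriptor length:", hog_window_features(window).shape[0])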
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Dellinger, Flora. "Descripteurs locaux pour l'imagerie radar et applications". Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0037/document.

Texto completo
Resumen
Nous étudions ici l’intérêt des descripteurs locaux pour les images satellites optiques et radar. Ces descripteurs, par leurs invariances et leur représentation compacte, offrent un intérêt pour la comparaison d’images acquises dans des conditions différentes. Facilement applicables aux images optiques, ils offrent des performances limitées sur les images radar, en raison de leur fort bruit multiplicatif. Nous proposons ici un descripteur original pour la comparaison d’images radar. Cet algorithme, appelé SAR-SIFT, repose sur la même structure que l’algorithme SIFT (détection de points-clés et extraction de descripteurs) et offre des performances supérieures pour les images radar. Pour adapter ces étapes au bruit multiplicatif, nous avons développé un opérateur différentiel, le Gradient par Ratio, permettant de calculer une norme et une orientation du gradient robustes à ce type de bruit. Cet opérateur nous a permis de modifier les étapes de l’algorithme SIFT. Nous présentons aussi deux applications pour la télédétection basées sur les descripteurs. En premier, nous estimons une transformation globale entre deux images radar à l’aide de SAR-SIFT. L’estimation est réalisée à l’aide d’un algorithme RANSAC et en utilisant comme points homologues les points-clés mis en correspondance. Enfin nous avons mené une étude prospective sur l’utilisation des descripteurs pour la détection de changements en télédétection. La méthode proposée compare les densités de points-clés mis en correspondance aux densités de points-clés détectés pour mettre en évidence les zones de changement
We study here the interest of local features for optical and SAR images. These features, because of their invariances and their compact representation, offer a real interest for the comparison of satellite images acquired under different conditions. While it is easy to apply them to optical images, they offer limited performance on SAR images because of the strong multiplicative noise of the latter. We propose here an original feature for the comparison of SAR images. This algorithm, called SAR-SIFT, relies on the same structure as the SIFT algorithm (detection of keypoints and extraction of features) and offers better performance on SAR images. To adapt these steps to multiplicative noise, we have developed a differential operator, the Gradient by Ratio, which yields a gradient magnitude and orientation robust to this type of noise. This operator allows us to modify the steps of the SIFT algorithm. We also present two remote sensing applications based on local features. First, we estimate a global transformation between two SAR images with the help of SAR-SIFT; the estimation is carried out with a RANSAC algorithm, using the matched keypoints as tie points. Finally, we have conducted a prospective study on the use of local features for change detection in remote sensing. The proposed method compares the densities of matched keypoints to the densities of detected keypoints in order to point out changed areas.
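The Gradient by Ratio idea can be sketched with a simplified ratio-of-local-means operator; SAR-SIFT uses exponentially weighted windows, so the box means, window radius and wrap-around border handling below are simplifying assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def gradient_by_ratio(sar_image, radius=2, eps=1e-6):
    # Ratio-of-local-means gradient: robust to multiplicative speckle because a constant
    # multiplicative factor cancels in the ratio of means taken on either side of a pixel.
    img = sar_image.astype(float) + eps
    size = 2 * radius + 1
    m = uniform_filter(img, size=size)                  # local box means
    right = np.roll(m, -radius - 1, axis=1)             # mean of the window to the right
    left = np.roll(m, radius + 1, axis=1)               # mean of the window to the left
    down = np.roll(m, -radius - 1, axis=0)
    up = np.roll(m, radius + 1, axis=0)
    gx = np.log(right / left)                           # log-ratio horizontal "gradient"
    gy = np.log(down / up)                              # log-ratio vertical "gradient"
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)
    return magnitude, orientation

The magnitude and orientation returned here would replace the difference-based gradient in the keypoint-description stages of a SIFT-like pipeline.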
Los estilos APA, Harvard, Vancouver, ISO, etc.