
Dissertations / Theses on the topic 'Fourier decomposition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Fourier decomposition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Zhu Lin. "Speech synthesis via adaptive Fourier decomposition." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2493215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Zhi Xiong. "A revision of adaptive Fourier decomposition." Thesis, University of Macau, 2012. http://umaclib3.umac.mo/record=b2590642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shi, Rong. "Applications of adaptive Fourier decomposition to financial data." Thesis, University of Macau, 2012. http://umaclib3.umac.mo/record=b2592936.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mai, Wei Xiong. "Time frequency distribution associated with adaptive Fourier decomposition and its variation." Thesis, University of Macau, 2012. http://umaclib3.umac.mo/record=b2590643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Avetisyan, Zhirayr. "Mode decomposition and Fourier analysis of physical fields in homogeneous cosmology." Doctoral thesis, Universitätsbibliothek Leipzig, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-107907.

Full text
Abstract:
In this work the methods of mode decomposition and Fourier analysis of quantum fields on curved spacetimes, previously available mainly for scalar fields on Friedmann-Robertson-Walker spacetimes, are extended to arbitrary vector fields on general spatially homogeneous spacetimes. This is done by developing a rigorous unified framework which incorporates mode decomposition, harmonic analysis and Fourier analysis. Explicit constructions are performed for a variety of situations arising in homogeneous cosmology. A number of results concerning classical and quantum fields known for very restricted situations are generalized to cover almost all cosmological models.
APA, Harvard, Vancouver, ISO, and other styles
6

Blackburn, John William Teasdale. "High-resolution Fourier transform ion cyclotron resonance mass spectrometry and nuclear magnetic resonance spectroscopy of humic substances." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31149.

Full text
Abstract:
Humic substances (HS) are described as a complex mixture of organic molecules formed by incomplete decomposition of plant, animal and microbial matter. They are found in soil, water and air and have many environmental roles, e.g. water retention and metal ion binding in soil. Despite their importance, the molecular composition of HS is poorly understood. This is mostly because of an inability to separate individual molecules from these complex mixtures and then characterise them by standard analytical methods such as NMR and MS. In order to improve the understanding of these important mixtures I have studied them using a high-resolution analytical method, Fourier transform ion cyclotron resonance mass spectrometry (FTICR MS). Initial efforts focussed on testing the fast, automated data analysis of the large data sets produced. Two pieces of software were compared and the reliability of the formulae assigned by these was critically evaluated. This confident formula assignment was then applied to study the consequences of different ionisation and instrumental parameters on the mass spectra obtained. The use of laser desorption/ionisation (LDI) without the matrix required in matrix-assisted laser desorption/ionisation (MALDI) was explored. A comparison of LDI and electrospray ionisation (ESI) FTICR MS of natural organic matter samples showed that these methods ionise complementary sets of compounds. The LDI-ionised compounds were characterised as aromatics or condensed aromatics and compounds belonging to lower oxygen classes (maximum number at O8), while ESI ionised higher oxygen classes (maximum number at O16), with a vast majority of compounds classified as aliphatic based on their modified aromaticity index. MALDI and LDI spectra produced very similar data, with over 90% matching formulae, implying that fragmentation is not caused by LDI, as previously thought.
My work showed that to maximize the coverage by FTICR MS of the molecular space occupied by these complex mixtures, multiple ionization methods must be used. As a particularly convenient and readily deployable ionization technique, LDI should be included in standard analytical protocols for FTICR MS analysis of NOM. I have explored different parameters and experimental settings to obtain a fuller coverage of the molecular space of NOM; this showed that different experimental conditions enhance peak intensities in different m/z regions of the FTICR MS spectra and that information can be obtained outside of the narrow 200-700 m/z window. To gain chemical and structural information about humic substances beyond what is currently known, experiments aiming to label HS with different isotopes and at specific sites were developed and tested. Two methylation reactions were of particular interest. The first selectively targeted carboxylic acid groups and incorporated deuterium in the form of CD3 groups. An international standard, Suwannee River fulvic acid, was methylated and analysed by high-resolution mass spectrometry in order to gain information on the number and distribution of carboxylic acid groups. This proved challenging because the reactivity of the unknown molecules is difficult to determine in advance. Additionally, the peak separation being reduced to as low as 1.5 mDa pushed the instrument resolution and assignment confidence to their limits. The second methylation method used 13CH3I, a nonselective agent reacting with any labile proton, particularly attaching 13CH3 groups to carboxylic, phenolic and alcoholic OH groups. I prepared a methylated sample of fulvic acid from a Red Moss raised bog (Balerno, near Edinburgh) ready for analysis by high-field NMR. This investigation yielded structures of a number of phenolic compounds for the first time by NMR.
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Xiaohua. "Anion-Peptide Adduct Formation and Decomposition As Studied by Fourier Transform Ion Cyclotron Resonance (FT-ICR) Mass Spectrometry." ScholarWorks@UNO, 2013. http://scholarworks.uno.edu/td/1748.

Full text
Abstract:
A new “best match” model has been developed to account for adduct formation on multiply charged peptides observed in negative ion electrospray mass spectrometry. To obtain a stable adduct, the model necessitates an approximate matching of the apparent gas-phase basicity (GBapp) of a given proton-bearing site on the peptide with the gas-phase basicity (GB) of the anion attaching at that site. Evidence supporting the model is derived from the fact that singly charged adducts were only observed for lower-GB anions: HSO4-, I-, CF3COO-. Ions that have medium GBs (NO3-, Br-, H2PO4-) only form adducts having -2 charge states, whereas Cl- (higher GB) can form adducts having -3 charge states. Hydrogen bonds are the main interactions pertinent to the “best match” model; however, ion-ion interactions formed between peptides ([Glu]Fibrinopeptide B, Angiotensin I or [Asn1,Val5]-Angiotensin II) and low-GB anions (ClO4- or HSO4-) have been established by CID-MS/MS. Evidence for ion-ion interactions comes especially from product ions formed during the first dissociation step, where, in addition to the expected loss of the anion or neutral acid, other product ions that require covalent bond cleavage (i.e., H2O or NH3 loss) are also observed. In this study, the “best match” model is further supported by the decomposition behavior of adducts formed when Na+/H+ exchange has occurred on peptides. Na+/H+ exchanges were found to occur preferentially at higher-acidity sites. Without any Na+/H+ exchange, F- and CH3COO- can hardly form observable adducts with [Glu]Fibrinopeptide B. However, after multiple Na+/H+ exchanges, F- and CH3COO- do form stable adducts. This phenomenon can be rationalized by considering that Na+ cations serve to “block” the highly acidic sites, thereby forcing them to remain overall neutral. This leaves the less acidic protons available to match with higher-GB anions.
According to the "best match" model, high GB anions will match with high GBapp sites on the peptide, whereas low GB anions will match with low GBapp peptide sites. High charge states readily augment GBapp of the peptide (through-space effect). Na+/H+ exchanges substantially decrease GBapp by neutralizing charged sites, while slightly increasing intrinsic GBs by the inductive effect.
APA, Harvard, Vancouver, ISO, and other styles
8

Workalemahu, Tsegaselassie. "Singular Value Decomposition in Image Noise Filtering and Reconstruction." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/math_theses/52.

Full text
Abstract:
The Singular Value Decomposition (SVD) has many applications in image processing. The SVD can be used to restore a corrupted image by separating significant information from the noise in the image data set. This thesis outlines broad applications that address current problems in digital image processing. In conjunction with SVD filtering, image compression using the SVD is discussed, including the process of reconstructing or estimating a rank reduced matrix representing the compressed image. Numerical plots and error measurement calculations are used to compare results of the two SVD image restoration techniques, as well as SVD image compression. The filtering methods assume that the images have been degraded by the application of a blurring function and the addition of noise. Finally, we present numerical experiments for the SVD restoration and compression to evaluate our computation.
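The rank-truncation idea behind both the SVD filtering and the SVD compression described above can be sketched in a few lines. This is a generic illustration, not code from the thesis; the synthetic rank-1 "image", the noise level, and the truncation rank are assumptions chosen for the demo.

```python
import numpy as np

def svd_truncate(image, rank):
    """Approximate an image by truncating its SVD to the given rank.

    Keeping only the largest singular values retains the significant
    structure while discarding much of the noise, and also compresses
    the image: storage drops from m*n to rank*(m + n + 1) numbers.
    """
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    return U[:, :rank] * s[:rank] @ Vt[:rank, :]

# Synthetic low-rank "image" corrupted by additive Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 64)
clean = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))  # rank 1
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

restored = svd_truncate(noisy, rank=1)
err_noisy = np.linalg.norm(noisy - clean)
err_restored = np.linalg.norm(restored - clean)
print(err_restored, err_noisy)  # truncation removes most of the noise energy
```

In practice the rank is chosen by inspecting the decay of the singular values; noise spreads roughly evenly over all of them, while the image concentrates in the first few.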
APA, Harvard, Vancouver, ISO, and other styles
9

Eliasson, Bengt. "Numerical Vlasov–Maxwell Modelling of Space Plasma." Doctoral thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-2929.

Full text
Abstract:
The Vlasov equation describes the evolution of the distribution function of particles in phase space (x,v), where the particles interact with long-range forces but short-range "collisional" forces are neglected. A space plasma consists of low-mass electrically charged particles, and therefore the most important long-range forces acting in the plasma are the Lorentz forces created by electromagnetic fields. What makes the numerical solution of the Vlasov equation a challenging task is that the fully three-dimensional problem leads to a partial differential equation in the six-dimensional phase space, plus time, making it hard even to store a discretised solution in a computer's memory. Solutions to the Vlasov equation also have a tendency to become oscillatory in velocity space, due to free-streaming terms (ballistic particles), in which steep gradients are created, and the problem of calculating the v (velocity) derivative of the function accurately grows with time. In the present thesis, the numerical treatment is limited to one- and two-dimensional systems, leading to solutions in two- and four-dimensional phase space, respectively, plus time. The numerical method developed is based on the technique of Fourier transforming the Vlasov equation in velocity space and then solving the resulting equation, in which the small-scale information in velocity space is removed through outgoing wave boundary conditions in the Fourier-transformed velocity space. The Maxwell equations are rewritten in a form which conserves the divergences of the electric and magnetic fields, by means of the Lorentz potentials. The resulting equations are solved numerically by high-order methods, reducing the need for numerical over-sampling of the problem.
The algorithm has been implemented in Fortran 90, and the code for solving the one-dimensional Vlasov equation has been parallelised by the method of domain decomposition, and has been implemented using the Message Passing Interface (MPI) method. The code has been used to investigate linear and non-linear interaction between electromagnetic fields, plasma waves, and particles.
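The velocity-space oscillations that motivate the Fourier-transform approach are easy to reproduce. The sketch below is illustrative only, not the thesis code: it uses the exact free-streaming solution of a 1D Vlasov equation and shows that the dominant velocity-space frequency grows linearly with time. This is the filamentation that the outgoing-wave boundary condition in the transformed velocity space is designed to absorb.

```python
import numpy as np

# Free streaming f_t + v f_x = 0 has the exact solution f(x, v, t) = f0(x - v*t, v).
# At a fixed x, a perturbation with spatial wavenumber k oscillates in v with
# frequency k*t/(2*pi): structure drifts to ever finer velocity scales.
v = np.linspace(-6, 6, 512, endpoint=False)
k = 2 * np.pi          # spatial wavenumber of the perturbation (assumed)
x0 = 0.3               # fixed spatial position (assumed)

def dominant_vfreq(t):
    """Peak velocity-space frequency of the perturbation at time t."""
    g = np.exp(-v**2 / 2) * np.cos(k * (x0 - v * t))
    spec = np.abs(np.fft.rfft(g))
    freqs = np.fft.rfftfreq(v.size, d=v[1] - v[0])
    return freqs[np.argmax(spec)]

early, late = dominant_vfreq(1.0), dominant_vfreq(5.0)
print(early, late)  # the dominant velocity-space frequency grows linearly with t
```

On a fixed velocity grid this growth eventually exceeds the resolvable frequency, which is why the thesis removes the outgoing small-scale information in the Fourier-transformed velocity variable instead of letting it alias back.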
APA, Harvard, Vancouver, ISO, and other styles
10

Colom i Bernadich, Miquel. "Measuring the Characteristic Sizes of Convection Structures in AGB Stars with Fourier Decomposition Analyses : the Stellar Intensity Analyzer (SIA) Pipeline." Thesis, Uppsala universitet, Teoretisk astrofysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-414041.

Full text
Abstract:
Context. Theoretical studies predict that the length scale of convection in stellar atmospheres is proportional to the pressure scale height, which implies that giant and supergiant stars should have convection granules of sizes comparable to their radii. Numerical simulations and the observation of anisotropies on stellar discs agree well with this prediction. Aims. To measure the characteristic sizes of convection structures of models simulated with the CO5BOLD code, to look at how they vary between models and to study their limitations due to numerical resolution. Methods. Fourier analyses are performed on frames from the models to obtain spatial spectral power distributions which are averaged over time. The position of the main peak and the average value of the wavevector are taken as indicators of these sizes. The general shape of the intensity map of the disc in the frame is fitted and subtracted so that it does not contaminate the Fourier analysis. Results. A general relationship of the convection granule size being roughly ten times larger than the pressure scale height is found. The expected wavevector value of the time-averaged spectral power distributions is higher than the position of the main peak. Loose increasing trends of the characteristic sizes scaled by the pressure scale height are found against stellar mass, radius, luminosity, temperature and gravity, while decreasing trends are found with the radius and model resolution. Poor resolution subtracts signal on the slope at the side of the main peak towards larger wavevector values, and in extreme cases it creates spurious signal towards the end of the spectrum due to artifacts appearing on the frames. Conclusions. The wavevector position of the absolute maximum in the time-averaged spectral power distribution is the best measure of the most prominent sizes on the stellar surfaces.
The proportionality constant between granule size and pressure scale height is of the same order of magnitude as the one in the literature; however, the models present sizes larger than expected, likely because the most prominent features do not correspond to convection granules but to larger features hovering above them. Further studies on models with higher resolution will help in drawing more conclusive results. Appendix. The SIA pipeline takes a set of time-dependent pictures of stellar discs and uses a Fourier analysis to measure the characteristic sizes of their features and other useful quantities, such as standard deviations or the spatial power distributions of features. The core of the pipeline consists of identifying the stellar disc in the frames and subtracting its signal from the spatial power distributions through a general fit of the disc intensity. To analyze a time sequence, the SIA pipeline requires at least two commands from the user. The first command orders the SIA pipeline to read the .sav IDL data structure file where the frame sequence is stored and to produce another .sav file with information on the spectral power distributions; the second command orders the reading of that file to produce two more .sav files, one containing time-averaged size measurements and their deviations, the other breaking down time-dependent information and other arrays used for the calculations. The SIA pipeline has been entirely written in Interactive Data Language (IDL). Most of the procedures used here are original to the SIA pipeline, but a small handful, like ima3_distancetransform.pro, power2d1d.pro, extremum.pro and smooth2d.pro from Bernd Freytag and peaks.pro and compile opt.pro amongst others, are external.

The report consists of two parts:

1. The main project, where we apply our pipeline and obtain scientific results.

2. The appendix, where a technical description of the pipeline is given.
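The pipeline's core measurement, locating the main peak of a direction-averaged spatial power spectrum, can be sketched as follows. This is a hypothetical minimal reimplementation in Python rather than the IDL pipeline itself; the synthetic frame and its granulation scale are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_k = 128, 8                           # frame size; pattern scale in cycles/frame
y, x = np.mgrid[0:n, 0:n]
frame = np.sin(2 * np.pi * true_k * x / n) + np.sin(2 * np.pi * true_k * y / n)
frame += 0.2 * rng.standard_normal((n, n))   # observational noise

power = np.abs(np.fft.fft2(frame)) ** 2
kfreq = np.fft.fftfreq(n) * n                # integer wavevector components
KX, KY = np.meshgrid(kfreq, kfreq)
kr = np.rint(np.hypot(KX, KY)).astype(int)   # radial wavevector bin per pixel

# Average the power over annuli of constant |k|, then locate the main peak,
# skipping the k = 0 bin (the mean intensity that the pipeline fits and subtracts).
radial = np.bincount(kr.ravel(), weights=power.ravel()) / np.bincount(kr.ravel())
k_peak = int(np.argmax(radial[1:])) + 1
print(k_peak)  # recovers the injected characteristic scale
```

The characteristic feature size then follows as (frame width) / k_peak; averaging such spectra over a time sequence is what suppresses the frame-to-frame noise.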

APA, Harvard, Vancouver, ISO, and other styles
11

Avetisyan, Zhirayr [Verfasser], Rainer [Akademischer Betreuer] Verch, Rainer [Gutachter] Verch, and Claudio [Gutachter] Dappiaggi. "Mode decomposition and Fourier analysis of physical fields in homogeneous cosmology / Zhirayr Avetisyan ; Gutachter: Rainer Verch, Claudio Dappiaggi ; Betreuer: Rainer Verch." Leipzig : Universitätsbibliothek Leipzig, 2013. http://d-nb.info/1238366201/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Zhang, Yirui S. M. Massachusetts Institute of Technology. "Understanding the pathway and mechanism of electrolyte decomposition on metal oxide surfaces in Li-ion batteries by in situ Fourier Transform Infrared Spectroscopy." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122227.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019
Includes bibliographical references (pages 69-75).
Understanding (electro)chemical reactions at the electrode-electrolyte interface (EEI) is crucial to promote the cycle life of lithium-ion batteries. In situ studies of the EEI can provide new insights into reaction intermediates and soluble species not accessible by ex situ characterization of electrode surfaces. In this study, we developed an in situ Fourier transform infrared spectroscopy (FTIR) method to investigate the (electro)chemical reactions at the interface between the electrolyte and composite positive electrode surface during charging. While ethyl methyl carbonate (EMC) and ethylene carbonate (EC) were stable against (electro)chemical oxidation on Pt up to 4.8 V, dehydrogenation of both carbonates on the surface of LiNi0.8Co0.1Mn0.1O2 (NMC811) electrodes was revealed by in situ FTIR spectra and density functional theory (DFT). Both solvents can dehydrogenate and form de-H EC and de-H EMC, respectively, with the carbon atom binding to lattice oxygen and remaining on the surface. De-H EC can further lose another hydrogen atom to form vinylene carbonate (VC) or bind together to form oligomers, both of which are soluble and difficult to access through ex situ methods. The in situ FTIR method successfully tracked detailed pathways of solvent decomposition on the oxide surface, and electrochemical impedance spectroscopy (EIS) further confirmed the formation of a passivating layer from solvent decomposition on the surface. The impedance growth is oxide- and solvation-structure-dependent and accounts for battery degradation. We finally proposed and verified multiple strategies to further improve the cycling stability of high-energy-density positive electrodes in Li-ion batteries.
APA, Harvard, Vancouver, ISO, and other styles
13

Ditch, Benjamin D. "Thermal decomposition products testing with 1,1,1,2,2,4,5,5,5 nonafluoro-4-trifluoromethyl pentan-3-one (C6 F-ketone) during fire extinguishing." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0106103-152708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Johar, Jasmeet Singh. "An experimental investigation of the urea-water decomposition and selective catalytic reduction (SCR) of nitric oxides with urea using V2O5-WO3-TiO2 catalyst." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2595.

Full text
Abstract:
Two flow reactor studies, using an electrically heated laminar flow reactor over a vanadia-based (V2O5-WO3/TiO2) honeycomb catalyst, were performed at 1 atm pressure and various temperatures. The experiments were conducted using simulated exhaust gas compositions for different exhaust gases. A quartz tube was used in order to establish inert conditions inside the reactor. The experiments utilized a Fourier transform infrared (FTIR) spectrometer in order to perform both qualitative and quantitative analysis of the reaction products. Urea-water solution decomposition was investigated over the V2O5-WO3/TiO2 catalyst over the entire SCR temperature range using the temperature-controlled flow reactor. The solution was preheated and then injected into a pure nitrogen (N2) stream. The decomposition experiments were conducted with a number of oxygen (O2) compositions (0, 1, 10, and 15%) over the temperature range of 227°C to 477°C. The study showed ammonia (NH3), carbon dioxide (CO2) and nitric oxide (NO) as the major products of decomposition, along with other products such as nitrous oxide (N2O) and nitrogen dioxide (NO2). The selective catalytic reduction (SCR) of nitric oxide (NO) with urea-water solution over the V2O5-WO3/TiO2 catalyst using a laboratory laminar-flow reactor was investigated. Urea-water solution was injected at a temperature higher than the vaporization temperature of water, and the flow reactor temperature was varied from 127°C to 477°C. A FTIR spectrometer was used to determine the concentrations of the product species. The major products of SCR reduction were NH3, NO and CO2, along with the presence of the minor products NO2 and N2O. NO removal of up to 87% was observed. The aim of the urea-water decomposition experiments was to study the decomposition process as close to the SCR configuration as possible.
The aim of the SCR experiments was to delineate the effect of various parameters, including reaction temperature and O2 concentration, on the reduction process. The SCR investigation showed that changing parameter values significantly affected the NO removal, the residual NH3 concentration, the temperature of maximum NO reduction, and the temperature of complete NH3 conversion. In the presence of O2, the reaction temperature for maximum NO reduction was 377°C for a ratio of 1.0.
APA, Harvard, Vancouver, ISO, and other styles
15

Koski, Antti E. "Rapid frequency estimation." Worcester, Mass. : Worcester Polytechnic Institute, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-032806-165036/.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: DSS; ECM; SVD; Singular Value Decomposition; rapid frequency estimation; frequency estimation. Includes bibliographical references (leaves 174-177).
APA, Harvard, Vancouver, ISO, and other styles
16

COSTES, DENIS. "Etude de la transmission d'une onde ultrasonore bornee en espace et en temps a travers des interfaces planes par la methode de decomposition de fourier." Paris 6, 1996. http://www.theses.fr/1996PA066099.

Full text
Abstract:
This work concerns a method for computing ultrasonic fields bounded in space and time through one or more refracting interfaces. The theory used consists of a decomposition of the acoustic beam by the Fourier integral method over the spatial and temporal variables. A theoretical and bibliographical study is first presented, reviewing the various developments that lead to the integral expression of the acoustic field. Several methods for describing the acoustic distribution are identified. Following this, a numerical method, validated experimentally, was developed. The resulting software can describe the acoustic fields generated by broadband ultrasonic transducers through a liquid/solid interface. Examples of applications of the simulation to practical cases encountered in ultrasonic non-destructive testing are also presented.
APA, Harvard, Vancouver, ISO, and other styles
17

Xiao, Ying. "New tools for unsupervised learning." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52995.

Full text
Abstract:
In an unsupervised learning problem, one is given an unlabelled dataset and hopes to find some hidden structure; the prototypical example is clustering similar data. Such problems often arise in machine learning and statistics, but also in signal processing, theoretical computer science, and any number of quantitative scientific fields. The distinguishing feature of unsupervised learning is that there are no privileged variables or labels which are particularly informative, and thus the greatest challenge is often to differentiate between what is relevant or irrelevant in any particular dataset or problem. In the course of this thesis, we study a number of problems which span the breadth of unsupervised learning. We make progress in Gaussian mixtures, independent component analysis (where we solve the open problem of underdetermined ICA), and we formulate and solve a feature selection/dimension reduction model. Throughout, our goal is to give finite sample complexity bounds for our algorithms -- these are essentially the strongest type of quantitative bound that one can prove for such algorithms. Some of our algorithmic techniques turn out to be very efficient in practice as well. Our major technical tool is tensor spectral decomposition: tensors are generalisations of matrices, and often allow access to the "fine structure" of data. Thus, they are often the right tools for unravelling the hidden structure in an unsupervised learning setting. However, naive generalisations of matrix algorithms to tensors run into NP-hardness results almost immediately, and thus to solve our problems, we are obliged to develop two new tensor decompositions (with robust analyses) from scratch. Both of these decompositions are polynomial time, and can be viewed as efficient generalisations of PCA extended to tensors.
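The slice-based idea behind tensor spectral decomposition can be illustrated with the classical Jennrich simultaneous-diagonalization algorithm, a precursor of the robust decompositions developed in the thesis rather than the thesis algorithms themselves; the dimensions and random factors below are assumptions for the sketch, and it works only under genericity conditions (full-rank factors, distinct eigenvalue ratios).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 6, 3
A, B, C = (rng.standard_normal((d, r)) for _ in range(3))

# Third-order tensor T = sum_k A[:,k] (x) B[:,k] (x) C[:,k].
T = np.einsum("ik,jk,lk->ijl", A, B, C)

# Jennrich's algorithm: two random contractions share the eigenvectors A.
w1, w2 = rng.standard_normal(d), rng.standard_normal(d)
S1 = np.einsum("ijl,l->ij", T, w1)   # S1 = A diag(C^T w1) B^T
S2 = np.einsum("ijl,l->ij", T, w2)
M = S1 @ np.linalg.pinv(S2)          # generically = A diag(c1/c2) A^+
eigvals, eigvecs = np.linalg.eig(M)
# Keep the r eigenvectors belonging to the largest-magnitude eigenvalues.
A_hat = np.real(eigvecs[:, np.argsort(-np.abs(eigvals))[:r]])

# Each true column of A should align (up to sign/scale) with a recovered column.
cos = np.abs((A / np.linalg.norm(A, axis=0)).T @ (A_hat / np.linalg.norm(A_hat, axis=0)))
print(cos.max(axis=1))  # each entry close to 1
```

Naive generalizations beyond this setting hit the NP-hardness wall mentioned in the abstract, which is why robust polynomial-time decompositions have to be engineered with care.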
APA, Harvard, Vancouver, ISO, and other styles
18

Hauser, Michael A., and Wolfgang Hörmann. "The Generation of Stationary Gaussian Time Series." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 1997. http://epub.wu.ac.at/1630/1/document.pdf.

Full text
Abstract:
Three different algorithms for the generation of stationary Gaussian time series with a given autocorrelation function are presented in this paper. The algorithms have already been suggested in the literature but are not well known and have never been compared before. Interrelations between the different methods, their advantages and disadvantages with respect to speed and memory requirements, and the range of autocorrelation functions for which the different methods are stable are discussed. The time-complexity of the algorithms and the comparisons of their implementations show that the method that uses the Fourier transform twice is by far the most efficient if time series of moderate or large length are generated. Tested C code for the latter algorithm is included, as this method is tricky to implement and very difficult to find in the literature. (We know of only one reference that gives a correct algorithm, but there the description is very short and no proof is included.) (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
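The FFT-based approach the paper recommends is closely related to circulant embedding, in which the Fourier transform is indeed used twice: once to diagonalise the embedded covariance and once to synthesise the series. The sketch below (in Python rather than the paper's C) is a common variant of that idea and is an assumption, not the paper's exact algorithm; the AR(1)-type autocovariance is chosen for the demo.

```python
import numpy as np

def gaussian_ts(acov, n_series, rng):
    """Stationary Gaussian series with autocovariance acov[0..n-1],
    generated by circulant embedding (two FFTs)."""
    n = len(acov)
    # Embed the covariance in a symmetric circulant of size 2(n-1).
    circ = np.concatenate([acov, acov[-2:0:-1]])
    lam = np.fft.fft(circ).real                 # circulant eigenvalues
    if np.any(lam < -1e-10 * lam.max()):
        raise ValueError("embedding not nonnegative definite; pad further")
    lam = np.clip(lam, 0.0, None)
    m = len(circ)
    # Complex Gaussian weights in the spectral domain, then transform back;
    # the real part of each row has exactly the target autocovariance.
    z = rng.standard_normal((n_series, m)) + 1j * rng.standard_normal((n_series, m))
    x = np.fft.fft(np.sqrt(lam / m) * z, axis=1)
    return x.real[:, :n]

rng = np.random.default_rng(42)
r = 0.5 ** np.arange(64)        # autocovariance r(k) = 0.5**k, unit variance
xs = gaussian_ts(r, 4000, rng)
lag1 = np.mean(xs[:, :-1] * xs[:, 1:])
print(lag1)                     # empirical lag-1 autocovariance, close to 0.5
```

The nonnegativity check matters: not every truncated autocovariance embeds into a nonnegative definite circulant, which is one of the implementation subtleties the paper alludes to.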
APA, Harvard, Vancouver, ISO, and other styles
19

Corteville, Dominique Maurice Richard [Verfasser], and Lothar [Akademischer Betreuer] Schad. "Optimization of the measurement sequence used for functional Fourier Decomposition Magnetic Resonance Imaging of the human lung at 1.5 Tesla / Dominique Maurice Richard Corteville ; Betreuer: Lothar Schad." Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180500709/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Corteville, Dominique [Verfasser], and Lothar [Akademischer Betreuer] Schad. "Optimization of the measurement sequence used for functional Fourier Decomposition Magnetic Resonance Imaging of the human lung at 1.5 Tesla / Dominique Maurice Richard Corteville ; Betreuer: Lothar Schad." Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:16-heidok-191692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Elias, Rana. "Pour l'évaluation des modifications des caractéristiques d'un système dynamique." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1107/document.

Full text
Abstract:
Modification of modal parameters is considered the main tool for evaluating characteristic changes of a non-stationary dynamic system. Therefore, our first interest is to obtain these modal parameters from vibration measurements using identification methods. Three methods are discussed here: Proper Orthogonal Decomposition (POD), Singular Value Decomposition (SVD) and Smooth Orthogonal Decomposition (SOD). Then, in order to evaluate mass changes in non-stationary systems, three steps are proposed: localizing the instant of the mass change (step 1), detecting the geometrical location of the change (step 2) and quantifying the change (step 3). The wavelet transform (WT), a time-frequency analysis, is applied in step 1. In step 2, three methods for detecting the position of the mass change are developed. Finally, the relative variation of the natural frequencies of the system is used to quantify the relative variation of the mass in step 3. The efficiency of these methods is verified by numerical tests. Moreover, a simplified experimental building model, instrumented with accelerometers, was tested under shock excitations. These experimental tests validate the methods proposed in this thesis.
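The identification step shared by these methods can be illustrated with a POD computed via the SVD of a snapshot matrix; the 3-sensor system, the mode shapes, and the noise level below are assumptions for the sketch, not the thesis's test structure.

```python
import numpy as np

# Snapshot responses of a 3-sensor structure dominated by two vibration modes.
t = np.linspace(0, 10, 2000)
phi1 = np.array([1.0, 2.0, 1.0])    # assumed mode shapes (orthogonal)
phi2 = np.array([1.0, 0.0, -1.0])
resp = (np.outer(phi1, np.sin(2 * np.pi * 1.5 * t))
        + 0.5 * np.outer(phi2, np.sin(2 * np.pi * 4.0 * t)))
rng = np.random.default_rng(3)
resp += 0.01 * rng.standard_normal(resp.shape)   # measurement noise

# POD: the left singular vectors of the snapshot matrix are the proper
# orthogonal modes; the singular values rank their energy content.
U, s, _ = np.linalg.svd(resp, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(energy)  # the first two modes capture almost all the response energy
```

Tracking how these modes and their associated frequencies drift over successive time windows is what lets the thesis detect when and where the mass of the system has changed.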
APA, Harvard, Vancouver, ISO, and other styles
22

Michael, Simon. "A Comparison of Data Transformations in Image Denoising." Thesis, Uppsala universitet, Statistiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-375715.

Full text
Abstract:
The study of signal processing has wide applications, such as hi-fi audio, television, voice recognition and many other areas. Signals are rarely observed without noise, which obstructs our analysis of them. Hence, it is of great interest to study the detection, approximation and removal of noise. In this thesis we compare two methods for image denoising, each based on a data transformation. Specifically, the Fourier transform and the singular value decomposition are utilized in the respective methods and compared on grayscale images. The comparison is based on the visual quality of the resulting image, the maximum peak signal-to-noise ratio attainable by each method, and its computational time. We find that the methods are fairly equal in visual quality. However, the method based on the Fourier transform scores higher in peak signal-to-noise ratio and demands considerably less computational time.
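The two transformations compared in the thesis can be sketched in a few lines of NumPy (an illustrative toy, not the author's code: a synthetic low-frequency "image" with Gaussian noise, an ideal low-pass mask for the Fourier route, rank truncation for the SVD route; the cutoff k and rank r are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image": a smooth low-frequency pattern plus Gaussian noise.
n = 64
x, y = np.meshgrid(np.arange(n), np.arange(n))
clean = np.sin(2 * np.pi * x / 32) + np.cos(2 * np.pi * y / 16)
noisy = clean + 0.5 * rng.standard_normal((n, n))

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(np.ptp(ref) ** 2 / mse)

# Fourier route: keep only the lowest spatial frequencies (ideal low-pass).
k = 8
F = np.fft.fft2(noisy)
mask = np.zeros((n, n), dtype=bool)
mask[:k, :k] = mask[:k, -k:] = mask[-k:, :k] = mask[-k:, -k:] = True
fft_denoised = np.real(np.fft.ifft2(F * mask))

# SVD route: truncate to the r largest singular values.
r = 4
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
svd_denoised = (U[:, :r] * s[:r]) @ Vt[:r]

print(psnr(clean, noisy), psnr(clean, fft_denoised), psnr(clean, svd_denoised))
```

Both routes should raise the PSNR above the noisy baseline; which wins, and at what computational cost, is exactly what the thesis measures on real grayscale images.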
APA, Harvard, Vancouver, ISO, and other styles
23

Dohnal, Matěj. "Paralelizace ultrazvukových simulací s využitím lokální Fourierovy dekompozice." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234949.

Full text
Abstract:
This thesis introduces a new method of 1D, 2D and 3D decomposition using local Fourier bases, its implementation, and a comparison with the currently used global 1D domain decomposition. The new method was designed, implemented and tested primarily for future use in the simulation software called the k-Wave toolbox, but it can be applied in many other spectral methods. Compared to the global 1D domain decomposition, the local Fourier decomposition is up to 3 times faster and more efficient thanks to lower inter-process communication, at the cost of a small loss of accuracy. The final part of the thesis discusses the limitations of the new method and also introduces best practices for using 3D local Fourier decomposition to achieve both more speed and accuracy.
APA, Harvard, Vancouver, ISO, and other styles
24

Van, Huyssteen Rudolph Hendrik. "Comparative evaluation of video watermarking techniques in the uncompressed domain." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71842.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: Electronic watermarking is a method whereby information can be imperceptibly embedded into electronic media, while ideally being robust against common signal manipulations and intentional attacks to remove the embedded watermark. This study evaluates the characteristics of uncompressed video watermarking techniques in terms of visual characteristics, computational complexity and robustness against attacks and signal manipulations. The foundations of video watermarking are reviewed, followed by a survey of existing video watermarking techniques. Representative techniques from different watermarking categories are identified, implemented and evaluated. Existing image quality metrics are reviewed and extended to improve their performance when comparing these video watermarking techniques. A new metric for the evaluation of inter-frame flicker in video sequences is then developed. A technique for possibly improving the robustness of the implemented discrete Fourier transform technique against rotation is then proposed. It is also shown that it is possible to reduce the computational complexity of watermarking techniques without affecting the quality of the original content, through a modified watermark embedding method. Possible future studies are then recommended with regard to further improving watermarking techniques against rotation.
AFRIKAANSE OPSOMMING: ’n Elektroniese watermerk is ’n metode waardeur inligting onmerkbaar in elektroniese media vasgelê kan word, met die doel dat dit bestand is teen algemene manipulasies en doelbewuste pogings om die watermerk te verwyder. In hierdie navorsing word die eienskappe van onsaamgeperste video watermerktegnieke ondersoek in terme van visuele eienskappe, berekeningskompleksiteit en weerstandigheid teen aanslae en seinmanipulasies. Die onderbou van video watermerktegnieke word bestudeer, gevolg deur ’n oorsig van reedsbestaande watermerktegnieke. Verteenwoordigende tegnieke vanuit verskillende watermerkkategorieë word geïdentifiseer, geïmplementeer en geëvalueer. Bestaande metodes vir die evaluering van beeldkwaliteite word bestudeer en uitgebrei om die werkverrigting van die tegnieke te verbeter, spesifiek vir die vergelyking van watermerktegnieke. ’n Nuwe stelsel vir die evaluering van tussenraampie flikkering in video’s word ook ontwikkel. ’n Tegniek vir die moontlike verbetering van die geïmplementeerde diskrete Fourier transform tegniek word voorgestel om die tegniek se bestandheid teen rotasie te verbeter. Daar word ook aangetoon dat dit moontlik is om die berekeningskompleksiteit van watermerktegnieke te verminder, sonder om die kwaliteit van die oorspronklike inhoud te beïnvloed, deur die gebruik van ’n verbeterde watermerkvasleggingsmetode. Laastens word aanbevelings vir verdere navorsing aangaande die verbetering van watermerktegnieke teen rotasie gemaak.
APA, Harvard, Vancouver, ISO, and other styles
25

Voskrebenzev, Andreas [Verfasser], Jens [Akademischer Betreuer] Vogel-Claussen, and Lilli [Akademischer Betreuer] Geworski. "Magnetic resonance imaging of the regional lung function with Fourier decomposition : method development, reproducibility assessment and application in patients after double lung transplant / Andreas Voskrebenzev ; Akademische Betreuer: Jens Vogel-Claussen, Lilli Geworski ; Diagnostische und Interventionelle Radiologie." Hannover : Bibliothek der Medizinischen Hochschule Hannover, 2018. http://d-nb.info/1159903506/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Chiu, Jiawei. "Matrix probing, skeleton decompositions, and sparse Fourier transform." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/83691.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Department of Mathematics, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 163-168).
In this thesis, we present three different randomized algorithms that help to solve matrices, compute low-rank approximations and perform the fast Fourier transform. Matrix probing and its conditioning: when a matrix A with n columns is known to be well approximated by a linear combination of basis matrices B1, ..., Bp, we can apply A to a random vector and solve a linear system to recover this linear combination. The same technique can be used to obtain an approximation to A⁻¹. A basic question is whether this linear system is well-conditioned. This is important for two reasons: a well-conditioned system means (1) we can invert it and (2) the error in the reconstruction can be controlled. We show that if the Gram matrix of the Bj's is sufficiently well-conditioned and each Bj has a high numerical rank, then n ≳ p log²n will ensure that the linear system is well-conditioned with high probability. Our main application is probing linear operators with smooth pseudodifferential symbols, such as the wave equation Hessian in seismic imaging. We also demonstrate numerically that matrix probing can produce good preconditioners for inverting elliptic operators in variable media. Skeleton decompositions in sublinear time: a skeleton decomposition of a matrix A is any factorization of the form A ≈ A:C Z AR:, where A:C comprises columns of A and AR: comprises rows of A. We investigate the conditions under which random sampling of C and R results in accurate skeleton decompositions. When the singular vectors (or, more generally, the generating vectors) are incoherent, we show that a simple algorithm returns an accurate skeleton in sublinear O(l³) time from l ∼ k log n rows and columns drawn uniformly at random, with an approximation error of the form O((n/l)σk), where σk is the k-th singular value of A.
We discuss the crucial role that regularization plays in forming the middle matrix Z as a pseudo-inverse of the restriction of A to rows in R and columns in C. The proof methods enable the analysis of two alternative sublinear-time algorithms, based on the rank-revealing QR decomposition, which allow us to tighten the number of rows and/or columns sampled to k, with an error bound proportional to σk. Sparse Fourier transform using the matrix pencil method: one of the major applications of the FFT is to compress frequency-sparse signals, yet FFT algorithms do not exploit this sparsity. Say we want to perform the Fourier transform on x ∈ C^N to obtain some x̂, which is known to be S-sparse with some additive noise. Even when S is small, the FFT still takes O(N log N) time. In contrast, SFT (sparse Fourier transform) algorithms aim to run in Õ(S) time, ignoring log factors. Unfortunately, SFT algorithms are not widely used because they are faster than the FFT only when S << N. We hope to address this deficiency. In this work, we present the fastest known robust Õ(S)-time algorithm, which can run up to 20 times faster than the current state-of-the-art algorithm AAFFT. The major new ingredient is a mode collision detector using the matrix pencil method. This enables us to do away with a time-consuming coefficient estimation loop, use a cheaper filter and take fewer samples of x. We also speed up a crucial basic operation of many SFT algorithms by halving the number of trigonometric computations. Our theory is, however, not complete. First, we prove that the collision detector works for a few classes of random signals. Second, we idealize the behavior of the collision detector and show that, with good probability, our algorithm runs in O(S log S log N) time and outputs an O(S)-sparse x̂' whose error is controlled relative to the best exact S-sparse approximation of x̂.
by Jiawei Chiu.
Ph.D.
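The randomized skeleton sampling summarized above can be illustrated in the easy regime of an exactly rank-k matrix (a NumPy sketch under that simplifying assumption, not the thesis algorithm: rows and columns drawn uniformly at random, middle matrix taken as a regularized pseudo-inverse of the sampled restriction):

```python
import numpy as np

rng = np.random.default_rng(1)

# A 200 x 200 matrix of exact rank k = 5 (the easy case).
n, k = 200, 5
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))

# Draw l >= k row and column indices uniformly at random.
l = 20
R = rng.choice(n, size=l, replace=False)
C = rng.choice(n, size=l, replace=False)

# Skeleton A ~ A[:, C] @ Z @ A[R, :], with the middle matrix Z formed as a
# regularized pseudo-inverse of the l x l restriction of A to (R, C).
Z = np.linalg.pinv(A[np.ix_(R, C)], rcond=1e-10)
A_skel = A[:, C] @ Z @ A[R, :]

err = np.linalg.norm(A - A_skel) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")
```

For an exact-rank matrix the reconstruction is essentially exact whenever the sampled restriction has full rank k; the noisy, approximately low-rank case — and the O((n/l)σk) error bound — is where the analysis in the thesis does its work.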
APA, Harvard, Vancouver, ISO, and other styles
27

Oygur, Ozgur Sinan. "Semi Analytical Study Of Stress And Deformation Analysis Of Anisotropic Shells Of Revolution Including First Order Transverse Shear Deformation." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12609870/index.pdf.

Full text
Abstract:
In this study, anisotropic shells of revolution subject to symmetric and unsymmetrical static loads are analysed. In the derivation of the governing equations used in the solution, first-order transverse shear effects are included in the formulation. The governing equations can be listed as kinematic equations, constitutive equations, and equations of motion. The equations of motion are derived from Hamilton's principle, the constitutive equations are developed under the assumptions of the classical lamination theory, and the kinematic equations are based on the Reissner-Naghdi linear shell theory. In the solution method, these governing equations are manipulated and written as a set called the fundamental set of equations. In order to handle anisotropy and first-order transverse shear deformations, the fundamental set of equations is transformed into 20 first-order ordinary differential equations using finite exponential Fourier decomposition and then solved with the multisegment method of integration, after reduction of the two-point boundary value problem to a series of initial value problems. The results are compared with finite element analysis results for a number of sample cases and good agreement is found. Case studies are performed for circular cylindrical shell and truncated spherical shell geometries. While reviewing the results, the effects of temperature and pressure loads, both constant and variable throughout the shell, are discussed. Some drawbacks of the first-order transverse shear deformation theory are exhibited.
APA, Harvard, Vancouver, ISO, and other styles
28

Yin, Zhi. "Espaces de Hardy en probabilités et analyse harmonique quantiques." Phd thesis, Université de Franche-Comté, 2012. http://tel.archives-ouvertes.fr/tel-00838496.

Full text
Abstract:
This thesis presents some results in quantum probability theory and operator-valued harmonic analysis. It consists of three parts. In the first part, we prove the atomic decomposition of Hardy spaces of noncommutative martingales. We also identify the complex and real interpolation spaces between the conditional versions of the Hardy and BMO spaces of noncommutative martingales. The second part is devoted to the study of operator-valued Hardy spaces via the wavelet method. This approach is similar to that used in the noncommutative martingale case. We prove that these Hardy spaces are equivalent to those studied by Tao Mei. Consequently, we give an explicit completely unconditional basis for the Hardy space H1(R), equipped with its natural operator space structure. The third part concerns harmonic analysis on quantum tori. We establish maximal inequalities for various summation means of Fourier series defined on quantum tori and obtain the corresponding pointwise convergence theorems. In particular, we obtain a noncommutative analogue of Stein's classical theorem on Bochner-Riesz means. We then show that completely bounded Fourier multipliers on quantum tori coincide with those defined on the classical torus. Finally, we present the theory of Hardy spaces on quantum tori and show that these spaces possess the properties of the usual Hardy spaces. In particular, we establish the duality between H1 and BMO.
APA, Harvard, Vancouver, ISO, and other styles
29

Lund, Darren Scott. "Fourier Decompositions of Graphs with Symmetries and Equitable Partitions." BYU ScholarsArchive, 2021. https://scholarsarchive.byu.edu/etd/8925.

Full text
Abstract:
We show that equitable partitions, which are generalizations of graph symmetries, and Fourier transforms are fundamentally related. For a partition of a graph's vertices we define a Fourier similarity transform of the graph's adjacency matrix, built from the matrices used to carry out discrete Fourier transformations. We show that the matrix (graph) decomposes into a number of smaller matrices (graphs) under this transformation if and only if the partition is an equitable partition. To extend this result to directed graphs we define two new types of equitable partitions, equitable receiving and equitable transmitting partitions, and show that if a partition of a directed graph is both, then the graph's adjacency matrix similarly decomposes under this transformation. Since the transformation we use is a similarity transform, the collective eigenvalues of the resulting matrices (graphs) are the same as the eigenvalues of the original untransformed matrix (graph).
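A small NumPy illustration of the phenomenon (not the thesis code): for a cycle graph, whose rotational symmetry induces a (trivially) equitable partition, conjugating the adjacency matrix by the unitary DFT matrix decomposes it — here all the way to diagonal form — while the collective eigenvalues are unchanged:

```python
import numpy as np

n = 8
# Adjacency matrix of the cycle graph C_8: a circulant matrix, i.e. a graph
# with a rotational symmetry.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

# Similarity transform built from the unitary discrete Fourier matrix.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
B = F @ A @ F.conj().T

# The adjacency matrix decomposes (here: becomes diagonal) and the
# collective eigenvalues are preserved by the similarity transform.
off_diagonal = np.abs(B - np.diag(np.diag(B))).max()
eig_A = np.sort(np.linalg.eigvalsh(A))
eig_B = np.sort(np.real(np.diag(B)))
print(off_diagonal, eig_B)   # eigenvalues are 2*cos(2*pi*j/8)
```

The fully diagonal outcome is special to circulant graphs; for a general equitable partition the transform yields a block decomposition into smaller matrices, which is the thesis result.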
APA, Harvard, Vancouver, ISO, and other styles
30

Dufoyer, Adeline. "Signification physique et hydrologique de l'information spectrale contenue dans le signal hydrodynamique à l'exutoire des systèmes karstiques. Links between karst hydrogeological properties and statistical characteristics of spring discharge time series : a theoretical study." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR147.

Full text
Abstract:
À l’échelle mondiale, des enjeux socio-économiques majeurs reposent sur les systèmes karstiques, tant pour l’alimentation en eau potable des populations que pour l’exploitation des gisements auxquels ils sont associés. Ces systèmes sont définis, par De Marsily (1984), comme des milieux dans lesquels « l’hétérogénéité atteint son paroxysme » : les modelés physiques caractérisant le karst sont particulièrement variés et les réponses hydrodynamiques sont indéniablement complexes et non-linéaires. Aujourd’hui, il n’est toujours pas possible de définir précisément l’organisation des hétérogénéités de ces aquifères bien que de nombreuses études aient été menées sur leur fonctionnement global. Ces dernières s’appuient sur les seules informations généralement disponibles sur ces systèmes : les précipitations sur le bassin versant étudié, les débits relevés à l’exutoire, des suivis ponctuels de niveau d’eau, de concentration ou de turbidité, et des estimations locales de propriétés hydrodynamiques telles que la conductivité hydraulique de l’aquifère ou le coefficient d’emmagasinement. Dans ce contexte, les méthodes d’analyse de séries temporelles hydrologiques enregistrées à des exutoires karstiques ont été largement exploitées dans le but d’interpréter le fonctionnement hydro(géo)logique de ces hydrosystèmes. Ce travail de thèse a exploré les potentialités de ces méthodes d’analyse du signal hydrologique comme aide à l’interprétation et à l’inférence des caractéristiques physiques et hydrogéologiques (géométries de réseaux karstiques, échanges entre réseaux et matrice poreuse environnante, modes de recharge diffus ou concentré). L’étude a été basée sur le couplage entre la modélisation directe d’écoulements sur des systèmes karstiques synthétiques, et le traitement statistique et spectral des signaux simulés. 
Les domaines modélisés ont été construits avec différents degrés de complexité : depuis des cas arbitraires très simples à des cas complexes correspondant à des systèmes réalistes. Les écoulements dans ces réseaux synthétiques ont été simulés avec le modèle hydrogéologique à base physique développé par le BRGM, MARTHE (Thiéry, 2015), et plus particulièrement grâce au module « Drains-Conduits » permettant de coupler matrice, conduits karstiques et échanges entre ces deux entités. Les méthodes de traitement du signal utilisées ont permis de comparer les propriétés statistiques et spectrales d’un signal climatique (précipitations) avec celles des débits simulés, et de comprendre comment celles-ci sont dépendantes des propriétés hydrauliques et physiques imposées dans les domaines. Même si les fonctions d’auto- ou d’inter-corrélation peuvent présenter des comportements globalement similaires (e.g. « effets-mémoire » ou temps de décorrélation similaires pour des géométries karstiques ou propriétés d’échanges matrice-conduits différentes), des différences en apparence subtiles et statistiquement significatives permettent de distinguer les comportements plus ou moins karstiques des domaines modélisés. En domaine spectral, tous les modèles se différencient par leur comportement scalaire et notamment par différentes gammes d’invariance d’échelle temporelle. Celles-ci traduisent les différentes modalités de filtrage du signal d’entrée par l’hydrosystème, et caractérisent donc différentes modalités d’écoulement. L’utilisation de méthodes de décomposition par ondelettes discrètes permet finalement de reconstruire la variabilité hydrodynamique associée à ces modalités d’écoulement, achevant ainsi une décomposition statistique de l’hydrogramme à l’exutoire
Globally, major socio-economic challenges lie on karst systems, both for drinking water supply to populations and for deposits exploitation. Karst systems are defined, by De Marsily (1984), as environments in which "heterogeneity reaches its paroxysm": the physical patterns characterizing karst domains are particularly varied and the hydrodynamic responses are undeniably complex and non-linear. Today, it is still not possible to precisely define the heterogeneities organization of these aquifers although many studies have been carried out on their overall functioning. These studies are based on the only information generally available on these systems: precipitation on the studied watershed, flow rates recorded at the outlet, well-located monitoring of water levels, concentrations or turbidity, and local estimated hydrodynamic properties such as the hydraulic conductivity of the aquifer or the storage coefficient. In recent years, several works have shown the interest of applying correlative and spectral analyzes on time series recorded on karst systems in order to interpret them hydro(geo)logical functioning. This thesis work explored the potential of this hydrological signal analysis methods as an aid to the interpretation and inference of physical and hydrogeological characteristics (karst networks geometry, exchanges between conduit networks and the surrounding porous matrix, recharge mode including diffuse and point-source one). This study was based on a coupling approach of direct modeling flows through synthetic karst systems, and the statistical and spectral processing of these simulated signals. The modeled domains were built according to different complexity degrees: from very simple arbitrary cases to complex cases corresponding to realistic systems. 
The flows in these synthetic networks were simulated with the physics-based hydrogeological model developed by the BRGM, MARTHE (Thiéry, 2015), and more particularly thanks to the “Drains-Conduits” package allowing to couple matrix, karst conduits and exchanges between these two entities. The employed signal processing methods compared the statistical and spectral properties of a climatic signal (precipitations) with those of the simulated discharges. They also give an understanding of how these properties vary according to changes in the domains hydraulic and physical characteristics. Even if the auto- and cross-correlation functions may exhibit almost similar behaviors (i.e. similar “memory effects” or decorrelation times, for different karst networks or matrix/conduit exchange properties), subtle but statistically significant differences allow the distinction between the karstification degree of the modeled domains. The scalar behaviors, and more particularly the different ranges of time scale invariance, can be used to distinguish the models in the spectral domain. These spectral properties reflect the possible filtering of the input signal by the considered hydrosystem, and therefore express different flow kinetics. Using discrete wavelet decomposition methods ultimately allows to reconstruct the hydrodynamic variability associated with these flow kinetics, completing thus a statistical decomposition of the hydrograph at the outlet
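The last step described above — reconstructing the hydrodynamic variability at a chosen range of time scales by discrete wavelet decomposition — can be sketched with a hand-rolled Haar transform (an invented toy "discharge" series with one slow and one fast component; not the model output used in the thesis):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: approximation (smooth) and
    detail (fast-variability) coefficients."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_idwt(a, d):
    """Inverse of one Haar level (perfect reconstruction)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Invented "spring discharge": a slow seasonal component plus a fast pulse.
t = np.arange(1024)
slow = 10 + 3 * np.sin(2 * np.pi * t / 512)
fast = np.sin(2 * np.pi * t / 4)
q = slow + fast

# Decompose over several levels, then rebuild from the coarse approximation
# alone (details zeroed): this isolates the slow variability.
a, details = q, []
for _ in range(6):
    a, d = haar_dwt(a)
    details.append(d)
smooth = a
for d in reversed(details):
    smooth = haar_idwt(smooth, np.zeros_like(d))

recon_err = np.max(np.abs(q - haar_idwt(*haar_dwt(q))))  # perfect reconstruction
print(recon_err, np.max(np.abs(smooth - slow)))
```

Summing the reconstructions of the coarse approximation and of each detail level recovers the original series exactly, which is what makes the statistical decomposition of the hydrograph complete.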
APA, Harvard, Vancouver, ISO, and other styles
31

Nikl, Vojtěch. "Paralelizace ultrazvukových simulací pomocí 2D dekompozice." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-413351.

Full text
Abstract:
This thesis is a part of the k-Wave project, a toolbox for the simulation and reconstruction of acoustic wave fields whose main contributions include the planning of focused ultrasound surgeries (HIFU). One simulation can take tens of hours, and about 60% of the simulation time is taken by the calculation of 3D fast Fourier transforms. Up until now the 3D FFT has been calculated purely by the FFTW library and its 1D decomposition, whose major limitation is the maximum number of employable cores. We therefore introduce a new approach, called the 2D hybrid decomposition of the 3D FFT (HybridFFT), where we combine MPI processes and OpenMP threads to reach the best possible performance. On a low number of cores, on the order of a few hundred, it is about as fast as or slightly faster than FFTW and the pure-MPI 2D decomposition libraries (PFFT and P3DFFT). One of the best results was achieved on a 512³ FFT using 512 cores, where our hybrid version ran in 31 ms, FFTW in 39 ms and PFFT in 44 ms. The most significant performance advantage should be seen when employing around 8-16 thousand cores; however, we have not had access to a machine with such resources. Almost linear scalability has been demonstrated for up to 2048 cores.
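All of these decompositions rest on the separability of the multidimensional FFT: a 3D transform is three passes of 1D transforms along the axes, with data transposes between passes in the distributed (slab or pencil) case. A single-process NumPy sketch of the identity:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal((16, 16, 16))

# A 3D FFT is separable into 1D FFTs along each axis. Distributed
# implementations (slab = 1D, pencil = 2D decomposition) exploit this:
# each process transforms the axes it holds locally, and the data are
# transposed (all-to-all) between the stages.
stage1 = np.fft.fft(a, axis=0)        # transform along x
stage2 = np.fft.fft(stage1, axis=1)   # after a transpose: along y
stage3 = np.fft.fft(stage2, axis=2)   # after a transpose: along z

print(np.allclose(stage3, np.fft.fftn(a)))  # True
```

The communication pattern of those transposes is what the 1D vs 2D decompositions trade off, and what limits the number of employable cores in the slab case.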
APA, Harvard, Vancouver, ISO, and other styles
32

KOMATSU, LUIZ G. H. "Estudo comparativo de nanocompósitos de polipropileno modificado sob condições de envelhecimento ambiental e acelerado." reponame:Repositório Institucional do IPEN, 2016. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26380.

Full text
Abstract:
Dissertation (Master's in Nuclear Technology)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
33

Jízdný, Martin. "Vlastní tvary vírového proudění." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-229641.

Full text
Abstract:
This thesis deals with the dynamics of swirling flow. Swirling flow occurs frequently in hydraulic machinery (e.g., the vortex rope in the draft tube of a hydraulic turbine) and often influences the operation of these machines. For this reason, sufficient knowledge of this characteristic flow is necessary for the subsequent improvement of hydraulic machines. The theoretical part of this thesis describes flow instabilities and their manifestations, notably the Kármán vortex street and the vortex rope. In the next part, two methods are applied to these two transient flows in order to identify their specific dynamic properties. The first method, the Fourier transform, makes it possible to find the frequencies of the transient flow. The second method, proper orthogonal decomposition (POD), makes it possible to identify planar or spatial eigenmodes of a specific swirling flow. Proper orthogonal decomposition is used in this thesis to identify planar eigenmodes of the Kármán vortex street and spatial eigenmodes of the vortex rope.
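The snapshot form of POD can be sketched as an SVD of a snapshot matrix (synthetic data standing in for the simulated flow field; the two spatial structures and their time dynamics below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic snapshots: two spatial structures oscillating at different
# frequencies plus weak noise (an invented stand-in for CFD velocity data).
x = np.linspace(0.0, 1.0, 200)          # spatial points
t = np.linspace(0.0, 10.0, 80)          # snapshot times
mode1 = np.sin(np.pi * x)
mode2 = np.sin(3 * np.pi * x)
snapshots = (np.outer(mode1, np.cos(2 * np.pi * t))
             + 0.3 * np.outer(mode2, np.sin(6 * np.pi * t))
             + 0.01 * rng.standard_normal((x.size, t.size)))

# POD = SVD of the snapshot matrix: columns of U are the spatial eigenmodes,
# and the squared singular values rank them by captured energy.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)

# The leading POD mode matches the dominant structure (up to sign).
corr = abs(mode1 @ U[:, 0]) / np.linalg.norm(mode1)
print(energy[:3], corr)
```

A few modes capture almost all of the energy here, which is the property that makes POD useful for identifying the dominant eigenmodes of a vortex street or vortex rope.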
APA, Harvard, Vancouver, ISO, and other styles
34

Gannac, Yves. "Alliages Fe-6,5%Si élaborés par solidification rapide sous atmosphère controlée : microstructure, propriétés magnétiques et comparaison avec des alliages Fe-Si industriels." Toulouse, INSA, 1992. http://www.theses.fr/1992ISAT0015.

Full text
Abstract:
The construction of a rapid-solidification device operating under a controlled atmosphere allows the production of iron-silicon ribbons with excess silicon (6.5 wt%) whose surface finish is improved compared with that of ribbons quenched in air. Good control of the casting parameters leads to ribbons of satisfactory, reproducible quality and geometry. The magnetic properties at low frequency (0-50 Hz) place these alloys at a level slightly below that of industrial grain-oriented alloys, but clearly above that of non-oriented alloys, whose texture is close to that of the ribbons. The drop in properties with increasing frequency is not observed in the ribbons, giving them better properties at 400 Hz than industrial alloys, grain-oriented or not. Over a wide range of induction, simple parametric equations can represent the experimental magnetization curves. The study of the Fourier series decomposition highlights various physical phenomena that influence the shape of these curves (magnetization mechanisms, induced currents).
APA, Harvard, Vancouver, ISO, and other styles
35

Tuleja, Peter. "Využití Hilbert Huangovy transformace pro analýzu nestacionárních signálů z fyzikálních experimentů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220614.

Full text
Abstract:
This thesis discusses the possible use of the Hilbert-Huang transform to analyze data obtained from physical experiments, specifically for the analysis of acoustic emission in the form of acoustic shocks. The introductory section explains the concept of acoustic emission and its detection process. Subsequently, methods for signal analysis in the time-frequency domain are discussed: the short-time Fourier transform, the wavelet transform, the Hilbert transform and the Hilbert-Huang transform. The final part contains the proposed method for measuring the performance and accuracy of the different approaches.
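The Hilbert-transform step shared by the last two methods can be sketched directly with the FFT (the standard one-sided-spectrum construction of the analytic signal; the 50 Hz test tone is an invented stand-in for an acoustic-emission record):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies and
    double the positive ones (discrete Hilbert transform)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

# Invented test tone standing in for an acoustic-emission record.
fs = 1000.0                              # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 50.0 * t)

z = analytic_signal(x)
envelope = np.abs(z)                     # instantaneous amplitude
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz

print(envelope.mean(), inst_freq.mean())  # ~1.0 and ~50 Hz
```

In the Hilbert-Huang transform this construction is applied to each intrinsic mode function obtained by empirical mode decomposition, yielding instantaneous amplitude and frequency per mode.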
APA, Harvard, Vancouver, ISO, and other styles
36

Hargis, Brent H. "Analysis of Long-Term Utah Temperature Trends Using Hilbert-Haung Transforms." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/5490.

Full text
Abstract:
We analyzed long-term temperature trends in Utah using a relatively new signal processing method called Empirical Mode Decomposition (EMD). We evaluated the available weather records in Utah and selected 52 stations with records longer than 60 years for analysis. We analyzed daily temperature data, both minimums and maximums, using the EMD method, which decomposes non-stationary data (data with a trend) into periodic components and the underlying trend. Most decomposition algorithms require stationary data (no trend) with constant periods, and temperature data do not meet these constraints. In addition to identifying the long-term trend, we also identified other periodic processes in the data. While the immediate goal of this research is to characterize long-term temperature trends and identify periodic processes and anomalies, these techniques can be applied to any time series data to characterize trends and identify anomalies. For example, this approach could be used to evaluate flow data in a river to separate the effects of dams or other regulatory structures from natural flow, or to look at other water quality data over time to characterize the underlying trends, identify anomalies, and identify periodic fluctuations in the data. If these periodic fluctuations can be associated with physical processes, the causes or drivers might be discovered, helping to better understand the system. We used EMD to separate and analyze long-term temperature trends, providing support to better evaluate the extremes of climate change. These methods make it possible to characterize many new aspects of nonlinear and non-stationary data. This research was successful and identified several areas in which it could be extended, including data reconstruction for time periods with missing data. This analysis tool can be applied to various other time series records.
APA, Harvard, Vancouver, ISO, and other styles
37

MARINI, FRANCESCO. "Wave induced hydro and morpho-dynamics around a single slender pile: experimental and numerical analysis." Doctoral thesis, Università Politecnica delle Marche, 2020. http://hdl.handle.net/11566/274622.

Full text
Abstract:
In recent years, the presence of piled structures, such as offshore wind farms and oil platforms, in the marine environment has become increasingly important. For this reason, the analysis of possible critical issues and uncertainties in the design process is gaining relevance. The aim of this thesis is the study of both the hydrodynamics and the morpho-dynamics induced by regular and random waves on a single slender pile by means of laboratory and numerical modelling. The thesis therefore aims to contribute to a better understanding of these physical processes. In particular, it provides tools that can be useful in the design process, helping to avoid incorrect estimation of the most important parameters and thus to ensure a proper design of the piled marine structure and its facilities. Particular attention is paid to the evaluation of the total force due to wave action, the scour at the base of the pile, and the run-up over its surface. A rigid-bed model has been realised to evaluate the best approach for estimating the total force on the pile due to nonlinear waves. An alternative to the classical Morison (1950) approach is proposed for computing the total force from the velocity and acceleration of the water particles derived from a measured water surface elevation time series. This method consists of a Fourier spectral decomposition of the input free-surface signal; the contribution of each component, computed according to Airy theory, is summed to obtain the total horizontal and vertical velocity along the vertical profile. The results show that this approach gives a very good estimate of the total force, both in terms of the maximum value and of the phase in which it occurs.
On the contrary, applying linear theory to compute the velocity used in the force calculation (the classical Morison approach adopted in design practice) leads to an underestimation of the peak value and to a wrong representation of the shape of the force. Another primary parameter in the stability of a structure in the marine environment is the scour at the base of the monopile. This experimental campaign was performed by means of a mobile-bed model in which a vertical cylinder placed on a sandy seabed is attacked by both regular and random nonlinear waves. The results for regular waves are broadly in agreement with the earlier work of Sumer et al. (1992). However, the results of this study reveal that the scour process due to nonlinear regular waves starts for hydrodynamic conditions of lower intensity (Keulegan-Carpenter parameter KC = 4) than those reported in the literature for linear waves (KC = 6). Different formulas are proposed for its estimation depending on dimensionless parameters (KC, Ur). For random waves, the approach proposed by Ong et al. (2013) for scour evaluation has been modified. This method depends on the cumulative distribution function of the dimensionless crest heights but, because of its complexity, it has been simplified in order to become suitable for design purposes without affecting the quality of the results. To better understand the process of formation, growth and detachment of vortices, PTV measurements were carried out and the results were integrated with those from pressure sensors and from the mobile-bed model to characterise the whole process of vortex formation and scour generation. The results show that the wave phases and positions in which the maximum pressure gradients occur correspond to those in which the detachment of vortices is observed.
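The comparison with Sumer et al. (1992) mentioned above can be made concrete: their widely cited equilibrium scour-depth relation for a vertical pile under live-bed wave conditions is commonly written as S/D = 1.3[1 − exp(−0.03(KC − KC0))] with KC0 = 6. A minimal sketch, with the onset parameter KC0 exposed so the KC = 4 onset observed in this thesis can be substituted; the function name and parameterisation are illustrative, not taken from the thesis:

```python
import math

def sumer_scour_depth(KC, D, kc_threshold=6.0):
    """Equilibrium scour depth S around a vertical pile of diameter D
    under waves, after the relation commonly attributed to
    Sumer et al. (1992):
        S/D = 1.3 * (1 - exp(-0.03 * (KC - KC0)))   for KC >= KC0
    The onset value KC0 is exposed as a parameter, since the thesis
    reports scour onset already at KC = 4 for nonlinear regular waves."""
    if KC <= kc_threshold:
        return 0.0
    return 1.3 * D * (1.0 - math.exp(-0.03 * (KC - kc_threshold)))
```

With the literature value KC0 = 6 the predicted scour depth vanishes at KC = 6 and saturates at S = 1.3 D for large KC.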
From the interpretation of the contour maps of the vorticity and of the OW parameter it is possible to explain the scour patterns obtained in the mobile-bed campaign. Furthermore, a numerical model was built with OpenFOAM, contributing to the comprehension of the complex 3D physical phenomenon induced by the wave-structure interaction. A new boundary condition for generating waves from a measured water surface elevation time series was added to the numerical code. The corresponding velocity field in this boundary condition is computed with the Fourier decomposition method mentioned above. The nonlinear waves of the rigid-bed experimental campaign are simulated to better understand the vortex formation process and to estimate the wave run-up, which is very important for the design of the access facilities of marine structures. Excellent results are obtained in comparison with experimental data of force, pressure, velocity and water elevation. The maximum values of run-up are analysed and the results are compared with formulas available in the literature. An adaptation of the equation of Hallermeier (1976), which computes the run-up as a function of the crest height, is proposed, and the results, obtained with a significant number of samples, are very good. Finally, the three-dimensional vortical structures are visualised by means of the Q-criterion. The numerical results show that, in some instances, a vortex generated during the first half of the period can remain in the area of influence of the pile. After the flow reversal, it changes direction with the stream until it is finally removed on the side of the pile opposite to where it was generated. The results are in agreement with those of the PTV campaign: the phase, size and position of the vortices are all well represented.
The achievements obtained with the numerical model improve the understanding of the complex physical processes by linking results of different kinds, such as the pressure gradients, the vortical structures and the associated scour patterns.
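The force-estimation procedure summarised above (Fourier decomposition of a measured free-surface signal, a linear Airy transfer function applied to each component, and summation of the contributions) can be sketched as follows. This is an illustrative reconstruction under linear wave theory, not the thesis code; the function names and the deep-water guard are assumptions of this sketch.

```python
import numpy as np

def solve_dispersion(omega, h, g=9.81, iters=50):
    """Fixed-point solution of the linear dispersion relation
    omega^2 = g * k * tanh(k * h)."""
    k = omega**2 / g                      # deep-water first guess
    for _ in range(iters):
        k = omega**2 / (g * np.tanh(k * h))
    return k

def velocity_from_elevation(eta, dt, h, z, g=9.81):
    """Horizontal orbital velocity at elevation z (z = 0 at still water
    level, negative downward) reconstructed from a measured free-surface
    time series eta (water depth h, sampling step dt): each Fourier
    component of eta is propagated with the Airy transfer function and
    the contributions are summed."""
    n = len(eta)
    amp = np.fft.rfft(eta) / n            # one-sided spectrum
    freqs = np.fft.rfftfreq(n, dt)
    t = np.arange(n) * dt
    u = np.zeros(n)
    for a, f in zip(amp[1:], freqs[1:]):  # skip the mean component
        omega = 2 * np.pi * f
        k = solve_dispersion(omega, h, g)
        if k * h > 20:                    # deep-water limit, avoids cosh overflow
            transfer = omega * np.exp(k * z)
        else:
            transfer = omega * np.cosh(k * (h + z)) / np.sinh(k * h)
        u += 2 * np.abs(a) * transfer * np.cos(omega * t + np.angle(a))
    return u
```

For a single deep-water component of amplitude a and angular frequency omega, the routine recovers the textbook surface value u = a * omega * cos(omega * t).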
APA, Harvard, Vancouver, ISO, and other styles
38

Issa, Nader. "Modes and propagation in microstructured optical fibres." University of Sydney. Physics and Optical Fibre Technology Centre, 2005. http://hdl.handle.net/2123/613.

Full text
Abstract:
Microstructured optical fibres (MOFs), also commonly called photonic crystal fibres or holey fibres, describe a type of optical fibre in which continuous channels of (typically) air run their entire length. These 'holes' serve both to confine electromagnetic waves within the core of the fibre and to tailor its transmission properties. In order to understand and quantify both of these functions, a new computational algorithm was developed and implemented. It solves for the eigenvalues of Maxwell's wave equations in the two-dimensional waveguide cross-section, with radiating boundary conditions imposed outside the microstructure. This yields the leaky modes supported by the fibre. The boundary conditions are achieved exactly using a novel refinement scheme called the Adjustable Boundary Condition (ABC) method. Two implementations are programmed and their computational efficiencies are compared. Both use an azimuthal Fourier decomposition, but radially, a finite difference scheme is shown to be more efficient than a basis function expansion. The properties of the ABC method are then predicted theoretically using an original approach. It shows that the method is highly efficient, robust, automated and generally applicable to any implementation or to other radiating problems. A theoretical framework for the properties of modes in MOFs is also presented. It includes the use of the Bloch-Floquet theorem to provide a simpler and more efficient way to exploit microstructure symmetry. A new, brief study of the modal birefringence properties in straight and spun fibres is also included. The theoretical and numerical tools are then applied to the study of polymer MOFs. Three types of fibres are numerically studied, fabricated and characterised. Each is of contemporary interest. Firstly, fabrication of the first MOFs with uniformly oriented elliptical holes is presented.
A high degree of hole ellipticity is achieved using a simple technique relying on hole deformation during fibre draw. Both form and stress-optic birefringence are characterized over a broad scaled-wavelength range, which shows excellent agreement with numerical modelling. Secondly, an analysis of leaky modes in real air core MOFs, fabricated specifically for photonic band gap guidance, is then used to identify alternative guiding mechanisms. The supported leaky modes exhibit properties closely matching a simple hollow waveguide, weakly influenced by the surrounding microstructure. The analysis gives a quantitative determination of the wavelength dependent confinement loss of these modes and illustrates a mechanism, not photonic band gap in origin, by which colouration can be observed in such fibres. Finally, highly multimode MOFs (also called 'air-clad' fibres) that have much wider light acceptance angles than conventional fibres are studied. An original and accurate method is presented for determining the numerical aperture of such fibres using leaky modes. The dependence on length, wavelength and various microstructure dimensions is evaluated for the first time for a class of fibres. These results show excellent agreement with published measurements on similar fibres and verify that bridge thicknesses much smaller than the wavelength are required for exceptionally high numerical apertures. The influence of multiple layers of holes on the numerical aperture and capture efficiency is then presented. It shows that a substantial increase in both these parameters can be achieved for some bridge thicknesses. Simple heuristic expressions for these quantities are given, which are based on the physical insight provided by the full numerical models. The work is then supported by the first fabrication attempts of large-core polymer MOFs with thin supporting bridges.
These fibres exhibit relatively high numerical apertures and show good agreement with theoretical expectations over a very wide scaled-wavelength range.
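The azimuthal Fourier decomposition underlying the mode solver can be illustrated in miniature: a quantity sampled on a polar grid is expanded as field(r, phi) ≈ sum_m f_m(r) exp(i m phi), after which each azimuthal order m can be treated by a radial (e.g. finite-difference) scheme. A minimal sketch, not the thesis implementation; names are illustrative:

```python
import numpy as np

def azimuthal_fourier_components(field, m_max):
    """Expand field[r_i, phi_j], sampled on a uniform azimuthal grid
    phi_j = 2*pi*j/n_phi, into azimuthal Fourier components f_m(r) with
    field(r, phi) ~ sum_m f_m(r) * exp(1j * m * phi).
    Returns the components for m = 0 .. m_max."""
    n_phi = field.shape[1]
    coeffs = np.fft.fft(field, axis=1) / n_phi
    return coeffs[:, :m_max + 1]
```

For a field r*cos(2*phi) the only nonzero low-order component is m = 2 with f_2(r) = r/2 (the mirror component at m = −2 carries the other half of the amplitude).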
APA, Harvard, Vancouver, ISO, and other styles
39

Kasaei, Shohreh. "Fingerprint analysis using wavelet transform with application to compression and feature extraction." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36053/7/36053_Digitised_Thesis.pdf.

Full text
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for a reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
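The generalized-Gaussian model for subband coefficients mentioned above can be illustrated with a small shape-parameter estimator. The thesis formulates a least-squares fit on a nonlinear function of the shape parameter; the sketch below instead uses the common moment-matching alternative, inverting the moment ratio r(beta) = Gamma(2/beta)^2 / (Gamma(1/beta) Gamma(3/beta)) by bisection. Function names are illustrative.

```python
import numpy as np
from math import gamma

def ggd_shape_estimate(coeffs):
    """Moment-based estimate of the generalized Gaussian shape parameter
    beta for zero-mean subband coefficients, by inverting
        r(beta) = Gamma(2/beta)^2 / (Gamma(1/beta) * Gamma(3/beta))
    which equals (E|x|)^2 / E[x^2] and is monotone increasing in beta."""
    c = np.asarray(coeffs, dtype=float)
    r_target = np.mean(np.abs(c))**2 / np.mean(c**2)

    def r(beta):
        return gamma(2 / beta)**2 / (gamma(1 / beta) * gamma(3 / beta))

    lo, hi = 0.05, 10.0                  # search bracket for beta
    for _ in range(80):                  # bisection on the monotone ratio
        mid = 0.5 * (lo + hi)
        if r(mid) < r_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A value beta ≈ 1 indicates Laplacian-like (peaky) subbands and beta ≈ 2 Gaussian ones; the fitted beta then drives the quantizer design.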
APA, Harvard, Vancouver, ISO, and other styles
40

Xi, Yunfei. "Synthesis, characterisation and application of organoclays." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16483/1/Yunfei_Xi_Thesis.pdf.

Full text
Abstract:
This thesis focuses on the synthesis and characterisation of organoclays. X-ray diffraction has been used to study the changes in the basal spacings of montmorillonite clay and surfactant-intercalated organoclays. Variation in the d-spacing was found to be a step function of the surfactant concentration. Three different molecular environments for the surfactant octadecyltrimethylammonium bromide (ODTMA) within the surface-modified montmorillonite are proposed on the basis of their different decomposition temperatures. High-resolution thermogravimetric analysis (HRTG) shows that the thermal decomposition of montmorillonite modified with ODTMA takes place in four steps, attributed to dehydration of adsorbed water, dehydration of water hydrating metal cations, loss of surfactant, and loss of OH units, respectively. In addition, it is shown that the decomposition procedures of DODMA- and TOMA-modified clays are very different from that of ODTMA-modified ones: the surfactant decomposition takes place in several steps in the DODMA- and TOMA-modified clays, while ODTMA-modified clays show only one step for the decomposition of the surfactant. TG also proved to be a useful tool for estimating the amount of surfactant within the organoclays. A model is proposed in which, up to 0.4 CEC, a surfactant monolayer is formed between the montmorillonite clay layers; up to 0.8 CEC, a lateral-bilayer arrangement is formed; and above 1.5 CEC, a pseudotrimolecular layer is formed, with excess surfactant adsorbed on the clay surface. For dimethyldioctadecylammonium bromide (DODMA) and trioctadecylmethylammonium bromide (TOMA) modified clays, some layers of montmorillonite remain unaltered because of steric effects arising from the larger sizes of these surfactants. The surfactant configurations within these organoclays usually take paraffin-type layers.
Thermal analysis also provides an indication of the thermal stability of the organoclays, as shown by their different onset decomposition temperatures. FTIR was used as a guide to determine the phase state of the organoclay interlayers, as determined from the CH asymmetric stretching vibration of the surfactants, to provide more information on surfactant configurations. It was used to study the changes in the spectra of the surfactant ODTMA upon intercalation into a sodium montmorillonite. Surfaces of montmorillonites were modified using ultrasonic and hydrothermal methods through the intercalation and adsorption of the cationic surfactant ODTMA. Changes in the surfaces and structure were characterized using electron microscopy. The ultrasonic preparation method results in a higher surfactant concentration within the montmorillonite interlayer when compared with that from the hydrothermal method. Both XRD patterns and TEM images demonstrate that SWy-2-Na-montmorillonite contains superlayers. TEM images of organoclays prepared at high surfactant concentrations show alternating basal spacings between neighboring layers. SEM images show that modification with surfactant reduces clay particle aggregation. Organoclays prepared at low surfactant concentration display curved flakes, whereas they become flat with increasing intercalated surfactant. Fundamentally, this thesis has increased the knowledge base of the structural and morphological properties of organo-montmorillonite clays. The configurations of surfactant in the organoclays have been further investigated, and three different molecular environments for the surfactant ODTMA within the surface-modified montmorillonite are proposed on the basis of their different decomposition temperatures. Changes in the spectra of the surfactant upon intercalation into clay have been investigated in detail.
Novel surfactant-modified montmorillonite results in the formation of new nanophases with the potential for the removal of organic contaminants from aqueous media and for the removal of hydrocarbon spills on roads.
APA, Harvard, Vancouver, ISO, and other styles
41

Xi, Yunfei. "Synthesis, characterisation and application of organoclays." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16483/.

Full text
Abstract:
This thesis focuses on the synthesis and characterisation of organoclays. X-ray diffraction has been used to study the changes in the basal spacings of montmorillonite clay and surfactant-intercalated organoclays. Variation in the d-spacing was found to be a step function of the surfactant concentration. Three different molecular environments for the surfactant octadecyltrimethylammonium bromide (ODTMA) within the surface-modified montmorillonite are proposed on the basis of their different decomposition temperatures. High-resolution thermogravimetric analysis (HRTG) shows that the thermal decomposition of montmorillonite modified with ODTMA takes place in four steps, attributed to dehydration of adsorbed water, dehydration of water hydrating metal cations, loss of surfactant, and loss of OH units, respectively. In addition, it is shown that the decomposition procedures of DODMA- and TOMA-modified clays are very different from that of ODTMA-modified ones: the surfactant decomposition takes place in several steps in the DODMA- and TOMA-modified clays, while ODTMA-modified clays show only one step for the decomposition of the surfactant. TG also proved to be a useful tool for estimating the amount of surfactant within the organoclays. A model is proposed in which, up to 0.4 CEC, a surfactant monolayer is formed between the montmorillonite clay layers; up to 0.8 CEC, a lateral-bilayer arrangement is formed; and above 1.5 CEC, a pseudotrimolecular layer is formed, with excess surfactant adsorbed on the clay surface. For dimethyldioctadecylammonium bromide (DODMA) and trioctadecylmethylammonium bromide (TOMA) modified clays, some layers of montmorillonite remain unaltered because of steric effects arising from the larger sizes of these surfactants. The surfactant configurations within these organoclays usually take paraffin-type layers.
Thermal analysis also provides an indication of the thermal stability of the organoclays, as shown by their different onset decomposition temperatures. FTIR was used as a guide to determine the phase state of the organoclay interlayers, as determined from the CH asymmetric stretching vibration of the surfactants, to provide more information on surfactant configurations. It was used to study the changes in the spectra of the surfactant ODTMA upon intercalation into a sodium montmorillonite. Surfaces of montmorillonites were modified using ultrasonic and hydrothermal methods through the intercalation and adsorption of the cationic surfactant ODTMA. Changes in the surfaces and structure were characterized using electron microscopy. The ultrasonic preparation method results in a higher surfactant concentration within the montmorillonite interlayer when compared with that from the hydrothermal method. Both XRD patterns and TEM images demonstrate that SWy-2-Na-montmorillonite contains superlayers. TEM images of organoclays prepared at high surfactant concentrations show alternating basal spacings between neighboring layers. SEM images show that modification with surfactant reduces clay particle aggregation. Organoclays prepared at low surfactant concentration display curved flakes, whereas they become flat with increasing intercalated surfactant. Fundamentally, this thesis has increased the knowledge base of the structural and morphological properties of organo-montmorillonite clays. The configurations of surfactant in the organoclays have been further investigated, and three different molecular environments for the surfactant ODTMA within the surface-modified montmorillonite are proposed on the basis of their different decomposition temperatures. Changes in the spectra of the surfactant upon intercalation into clay have been investigated in detail.
Novel surfactant-modified montmorillonite results in the formation of new nanophases with the potential for the removal of organic contaminants from aqueous media and for the removal of hydrocarbon spills on roads.
APA, Harvard, Vancouver, ISO, and other styles
42

Al, Takash Ahmad. "Development of Numerical Methods to Accelerate the Prediction of the Behavior of Multiphysics under Cyclic Loading." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2018. http://www.theses.fr/2018ESMA0014/document.

Full text
Abstract:
In the framework of structural calculation, the reduction of computation time plays an important role in the proposition of failure criteria in the aeronautic and automobile domains. In particular, predicting the stabilized cycle of a polymer under cyclic loading requires solving a thermo-viscoelastic problem with a high number of cycles. The presence of different time scales, such as the relaxation time (viscosity), the thermal characteristic time, and the cycle time (loading), leads to a huge computation time when an incremental scheme is used, as with the Finite Element Method (FEM). In addition, a large amount of memory is needed for data storage. The objective of this thesis is to propose new techniques and to extend existing ones. A transient thermal problem with different time scales is considered with the aim of reducing computation time. The proposed methods are model reduction methods. First, the Proper Generalized Decomposition (PGD) method was extended to nonlinear transient cyclic 3D problems. The nonlinearity was handled by combining the PGD method with the Discrete Empirical Interpolation Method (DEIM), a numerical strategy used in the literature. Results showed the efficiency of the PGD in generating accurate results compared to the FEM solution, with a relative error of less than 1%. Then, a second approach was developed in order to reduce the computation time further. It is based on the collection of the significant modes calculated with the PGD method for different time scales. A dictionary assembling these modes is then used to calculate the solution for different characteristic times and different boundary conditions. This approach was adapted to the case of a weakly coupled thermo-diffusion problem. The novelty of this method is to consider a dictionary composed of spatio-temporal bases, and not spatial bases only as used in the POD. The results again showed an exact reproduction of the solution in addition to a huge reduction in computation time.
However, when different cycle times are considered, the number of modes increases, which limits the usage of the approach. To overcome this limitation, a third numerical strategy is proposed in this thesis. It consists in considering a priori known time bases and is called the mixed strategy. The originality of this approach lies in constructing the a priori time bases from the Fourier analysis of different simulations for different time scales and different parameter values. Once this study is done, an analytical expression of the time bases as a function of parameters such as the characteristic time and the cycle time is proposed. The associated spatial bases are calculated using the PGD algorithm. This method is then tested on the resolution of linear and nonlinear 3D thermal problems under cyclic loading and on a weakly coupled thermo-diffusion problem.
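The mixed strategy's core idea, extracting a small time basis from the Fourier spectrum of a reference cyclic simulation, can be sketched as follows. This is a minimal illustration on a synthetic band-limited signal, not the thesis's actual solver; the cycle time and harmonic content are invented for the example.

```python
import numpy as np

# Synthetic cyclic "reference solution" sampled over one loading cycle.
t_cycle = 2.0
t = np.linspace(0.0, t_cycle, 256, endpoint=False)
ref = 0.8 + 0.5 * np.sin(2 * np.pi * t / t_cycle) + 0.3 * np.cos(4 * np.pi * t / t_cycle)

# Keep only the dominant Fourier harmonics: they play the role of the
# a priori (analytical) time basis in a mixed PGD-type strategy.
spectrum = np.fft.rfft(ref)
n_modes = 3
dominant = np.argsort(np.abs(spectrum))[::-1][:n_modes]

truncated = np.zeros_like(spectrum)
truncated[dominant] = spectrum[dominant]
time_basis_approx = np.fft.irfft(truncated, n=len(t))

rel_err = np.linalg.norm(ref - time_basis_approx) / np.linalg.norm(ref)
```

Because the synthetic signal contains only three harmonics, the truncated basis reproduces it to machine precision; a real transient solution would need a few more modes, selected per parameter value.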
APA, Harvard, Vancouver, ISO, and other styles
43

Cao, Hongmei. "Problémes bien-posés et étude qualitative pour des équations cinétiques et des équations dissipatives." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR044/document.

Full text
Abstract:
In this thesis, we study some kinetic equations and some partial differential equations with a dissipative mechanism, such as the Boltzmann equation, the Landau equation and some non-symmetric hyperbolic systems with dissipation. Global existence of solutions or optimal decay rates of solutions for these systems are considered in Sobolev spaces or Besov spaces. The smoothing properties of solutions are also studied. In this thesis, we mainly prove the following four results; see Chapters 3-6 for more details. For the first result, we investigate the Cauchy problem for the inhomogeneous nonlinear Landau equation with Maxwellian molecules (γ = 0). As seen from known results for the Boltzmann equation and the Landau equation, global existence of solutions is mainly proved in some (weighted) Sobolev spaces and requires a high regularity index; see Guo [62], a series of works by Alexandre-Morimoto-Ukai-Xu-Yang [5, 6, 7, 9] and references therein. Recently, Duan-Liu-Xu [52] and Morimoto-Sakamoto [145] obtained global existence results for solutions to the Boltzmann equation in critical Besov spaces. Motivated by their works, we establish the global existence of solutions for the Landau equation in spatially critical Besov spaces in a perturbation framework. Precisely, if the initial datum is a small perturbation of the equilibrium distribution in the Chemin-Lerner space L̃²_v(B^{3/2}_{2,1}), then the Cauchy problem for the Landau equation admits a global solution belonging to L̃^∞_t L̃²_v(B^{3/2}_{2,1}). Our results improve the result in [62] and extend the global existence result for the Boltzmann equation in [52, 145] to the Landau equation. Secondly, we consider the Cauchy problem for the spatially inhomogeneous non-cutoff Kac equation.
Lerner-Morimoto-Pravda-Starov-Xu [117] considered the spatially inhomogeneous non-cutoff Kac equation in Sobolev spaces and showed that the Cauchy problem for the fluctuation around the Maxwellian distribution admits S^{1+1/(2s)}_{1+1/(2s)} Gelfand-Shilov regularity properties with respect to the velocity variable and G^{1+1/(2s)} Gevrey regularizing properties with respect to the position variable. The authors noted that it remained open to determine whether the regularity index 1 + 1/(2s) is sharp or not. In this thesis, if the initial datum belongs to the spatially critical Besov space L̃²_v(B^{1/2}_{2,1}), we prove the well-posedness of the inhomogeneous Kac equation in a perturbation framework. Furthermore, it is shown that the weak solution enjoys S^{(3s+1)/(2s(s+1))}_{(3s+1)/(2s(s+1))} Gelfand-Shilov regularizing properties with respect to the velocity variable and G^{1+1/(2s)} Gevrey regularizing properties with respect to the position variable. In our results, the Gelfand-Shilov regularity index is improved to be optimal. This result is the first one to exhibit a smoothing effect for a kinetic equation in Besov spaces. For the third result, we consider the compressible Navier-Stokes-Maxwell equations arising in plasma physics, a concrete example of a hyperbolic-parabolic composite system with non-symmetric dissipation. It is observed that the Cauchy problem for the Navier-Stokes-Maxwell equations admits a dissipative mechanism of regularity-loss type. Consequently, extra higher regularity is usually needed to obtain the optimal decay rate of L¹(ℝ³)-L²(ℝ³) type, in comparison with that needed for the global-in-time existence of smooth solutions.
APA, Harvard, Vancouver, ISO, and other styles
44

Ayvazyan, Vigen. "Etude de champs de température séparables avec une double décomposition en valeurs singulières : quelques applications à la caractérisation des propriétés thermophysiques des matérieux et au contrôle non destructif." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14671/document.

Full text
Abstract:
Infrared thermography is a widely used method for the characterization of thermophysical properties of materials. The advent of laser diodes, which are handy, inexpensive, and offer a broad spectrum of characteristics, extends the metrological possibilities of infrared cameras and provides a combination of new powerful tools for thermal characterization and non-destructive evaluation. However, this new dynamic has also brought numerous difficulties that must be overcome, such as processing high volumes of noisy data and the low sensitivity of such data to the estimated parameters. This requires revisiting the existing methods of signal processing and adopting new sophisticated mathematical tools for data compression and the processing of relevant information. The new strategies consist in using orthogonal transforms of the signal as prior data compression tools, which allow noise reduction and control. Correlation analysis, based on the local correlation study between partial derivatives of the experimental signal, completes these new strategies. A theoretical analogy in Fourier space has been established in order to better understand the «physical» meaning of modal approaches. The response to an instantaneous point source of heat has been revisited both numerically and experimentally. By using separable temperature fields, a new inversion technique based on a double singular value decomposition of the experimental signal has been introduced. In comparison with previous methods, it takes into account two- or three-dimensional heat diffusion and therefore offers a better exploitation of the spatial content of infrared images. Numerical and experimental examples have allowed us to validate, in a first approach, our new estimation method for the characterization of longitudinal thermal diffusivities. Non-destructive testing applications based on the new technique have also been introduced. An old issue, which consists in determining the initial temperature field from noisy data, has been approached in a new light.
The necessity of knowing the thermal diffusivities of an orthotropic medium, and the need to take into account heat transfer that is often three-dimensional, are complicated issues. The implementation of the double singular value decomposition allowed us to achieve interesting results given its ease of use. Indeed, modal approaches are statistical methods based on high-volume data processing, and are supposedly more robust with respect to measurement noise, as was observed.
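The benefit of an SVD truncation on a separable space-time temperature field can be illustrated on a toy example. This is a minimal NumPy sketch with synthetic data and a single SVD, not the thesis's double decomposition; the grid sizes, decay rates and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nt = 64, 200
x = np.linspace(0.0, 1.0, nx)
t = np.linspace(0.0, 1.0, nt)

# Synthetic separable temperature field: two space-time products, plus noise
# standing in for camera measurement noise.
field = (np.outer(np.sin(np.pi * x), np.exp(-3.0 * t))
         + 0.3 * np.outer(np.sin(2.0 * np.pi * x), np.exp(-10.0 * t)))
noisy = field + 0.01 * rng.standard_normal((nx, nt))

# Truncated SVD: a rank-2 approximation both compresses the data and
# filters most of the noise, since the noise spreads over all modes.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
rank = 2
denoised = (U[:, :rank] * s[:rank]) @ Vt[:rank]

err_noisy = np.linalg.norm(noisy - field) / np.linalg.norm(field)
err_svd = np.linalg.norm(denoised - field) / np.linalg.norm(field)
```

The rank-2 reconstruction is substantially closer to the clean field than the raw noisy data, which is the statistical robustness to measurement noise mentioned above.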
APA, Harvard, Vancouver, ISO, and other styles
45

Avetisyan, Zhirayr. "Mode decomposition and Fourier analysis of physical fields in homogeneous cosmology." Doctoral thesis, 2012. https://ul.qucosa.de/id/qucosa%3A11872.

Full text
Abstract:
In this work the methods of mode decomposition and Fourier analysis of quantum fields on curved spacetimes previously available mainly for the scalar fields on Friedman-Robertson-Walker spacetimes are extended to arbitrary vector fields on general spatially homogeneous spacetimes. This is done by developing a rigorous unified framework which incorporates mode decomposition, harmonic analysis and Fourier analysis. Explicit constructions are performed for a variety of situations arising in homogeneous cosmology. A number of results concerning classical and quantum fields known for very restricted situations are generalized to cover almost all cosmological models.
APA, Harvard, Vancouver, ISO, and other styles
46

Hoang, Thai Duy. "Fourier and Variational Based Approaches for Fingerprint Segmentation." Doctoral thesis, 2015. http://hdl.handle.net/11858/00-1735-0000-0022-5FEF-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Goldstein, Hilton. "The investigation into an algorithm based on wavelet basis functions for the spatial and frequency decomposition of arbitrary signals." Thesis, 1994. http://hdl.handle.net/10413/7347.

Full text
Abstract:
The research was directed toward the viability of an O(n) algorithm which could decompose an arbitrary signal (sound, vibration etc.) into its time-frequency space. The well known Fourier Transform uses sine and cosine functions (having infinite support on t) as orthonormal basis functions to decompose a signal f(t) in the time domain into F(ω) in the frequency domain, where the Fourier coefficients F(ω) are the contributions of each frequency in the original signal. Due to the non-local support of these basis functions, a signal containing a sharp localised transient does not have localised coefficients, but rather coefficients that decay slowly. Another problem is that the coefficients F(ω) do not convey any time information. The windowed Fourier Transform, or short-time Fourier Transform, does attempt to resolve the latter, but has had limited success. Wavelets are basis functions, usually mutually orthonormal, having finite support in t and are therefore spatially local. Using non-orthogonal wavelets, the Dominant Scale Transform (DST) designed by the author decomposes a signal into its approximate time-frequency space. The associated Dominant Scale Algorithm (DSA) has O(n) complexity and is integer-based. These two characteristics make the DSA extremely efficient. The thesis also investigates the problem of converting a music signal into its equivalent music score. The old problem of speech recognition is also examined. The results obtained from the DST are shown to be consistent with those of other authors who have utilised other methods. The resulting DST coefficients are shown to render the DST particularly useful in speech segmentation (silence regions, voiced speech regions, and frication). Moreover, the Spectrogram Dominant Scale Transform (SDST), formulated from the DST, was shown to approximate the Fourier coefficients over fixed time intervals within vowel regions of human speech.
Thesis (Ph.D.)-University of Natal, Durban, 1994.
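The spatial locality that distinguishes wavelet coefficients from Fourier coefficients, as described above, can be seen in a minimal Haar pyramid. This is an illustrative O(n) scheme, not the author's DST/DSA:

```python
import numpy as np

def haar_decompose(signal):
    """Full Haar pyramid: O(n) total work, averages and differences per level."""
    coeffs = []
    s = np.asarray(signal, dtype=float)
    while len(s) > 1:
        coeffs.append((s[0::2] - s[1::2]) / 2.0)  # detail (fine-scale) coefficients
        s = (s[0::2] + s[1::2]) / 2.0             # approximation (coarse scale)
    coeffs.append(s)                               # final global average
    return coeffs

# A sharp localized transient yields localized coefficients:
spike = np.zeros(8)
spike[3] = 1.0
details = haar_decompose(spike)[0]  # finest-scale detail coefficients
```

Only the pair of samples containing the spike produces a nonzero finest-scale detail, whereas a Fourier expansion of the same spike spreads energy over every coefficient. The per-level work halves each time (n/2 + n/4 + ...), giving the O(n) total.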
APA, Harvard, Vancouver, ISO, and other styles
48

Bettadapura, Raghu Prasad Radhakrishna. "Flexible fitting in 3D EM." 2012. http://hdl.handle.net/2152/19478.

Full text
Abstract:
In flexible fitting, the high-resolution crystal structure of a molecule is deformed to optimize its position with respect to a low-resolution density map. Solving the flexible fitting problem entails answering the following questions: (A) How can the crystal structure be deformed? (B) How can the term "optimum" be defined? and (C) How can the optimization problem be solved? In this dissertation, we answer the above questions in reverse order. (C) We develop PFCorr, a non-uniform SO(3)-Fourier-based tool to efficiently conduct rigid-body correlations over arbitrary subsets of the space of rigid-body motions. (B) We develop PF2Fit, a rigid-body fitting tool that provides several useful definitions of the optimal fit between the crystal structure and the density map while using PFCorr to search over the space of rigid-body motions. (A) We develop PF3Fit, a flexible fitting tool that deforms the crystal structure with a hierarchical domain-based flexibility model while using PF2Fit to optimize the fit with the density map. Our contributions help us solve the rigid-body and flexible fitting problems in unique and advantageous ways. They also allow us to develop a generalized framework that extends, breadth-wise, to other problems in computational structural biology, including rigid-body and flexible docking, and depth-wise, to the question of interpreting the motions inherent to the crystal structure. Publicly-available implementations of each of the above tools additionally provide a window into the technically diverse fields of applied mathematics, structural biology, and 3D image processing, fields that we attempt, in this dissertation, to span.
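A one-dimensional translational analogue of such Fourier-based correlation search (illustrative only: PFCorr itself operates over SO(3) rigid-body motions) scores all relative shifts at once in the frequency domain:

```python
import numpy as np

# Circular cross-correlation via the correlation theorem: one forward FFT per
# signal and one inverse FFT evaluate the match score for every shift.
rng = np.random.default_rng(1)
a = rng.standard_normal(128)
b = np.roll(a, 17)  # b is a copy of a shifted by 17 samples

corr = np.fft.ifft(np.fft.fft(a).conj() * np.fft.fft(b)).real
best_shift = int(np.argmax(corr))  # recovers the applied shift
```

Computing the same scores directly would cost O(n²); the frequency-domain route costs O(n log n), which is the efficiency argument behind Fourier correlation tools.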
APA, Harvard, Vancouver, ISO, and other styles
49

Myronova, Mariia. "Applications of finite reflection groups in Fourier analysis and symmetry breaking of polytopes." Thesis, 2021. http://hdl.handle.net/1866/25600.

Full text
Abstract:
This thesis presents a study of applications of finite reflection groups to the problems related to two-dimensional lattices and three-dimensional polytopes. Several families of orbit functions, known as Weyl orbit functions, are associated with the crystallographic reflection groups. The exceptional properties of these functions, such as continuous and discrete orthogonality, permit Fourier-like analysis on the fundamental domain of a corresponding affine Weyl group. In this consideration, Weyl orbit functions constitute efficient tools for corresponding Fourier-like discrete transforms known as Fourier–Weyl transforms. This research restricts our attention to the two-variable symmetric and antisymmetric Weyl orbit functions of the crystallographic reflection group A2. The main goal is to decompose two types of the corresponding weight lattice Fourier–Weyl transforms into smaller transforms using the central splitting technique. For the non-crystallographic cases, we define the even- and odd-degree indices for orbits of the non-crystallographic reflection groups with 5-fold symmetry by using a representation-orbit replacement. Besides, we formulate the algorithm that allows determining the structures of nested polytopes. Moreover, in light of the relevance of the icosahedral symmetry to the description of various spherical molecules and viruses, we study symmetry breaking of the dual polytopes of non-crystallographic type and related tube-like structures. As well, we apply a stellation procedure to the family of considered polytopes. Since this research partly focuses on the icosahedral fullerenes, we present the construction of the corresponding carbon nanotubes. Furthermore, the approach considered for the non-crystallographic cases is applied to crystallographic structures. We consider a symmetry-breaking mechanism applied to the polytopes obtained using the three-dimensional Weyl groups to determine their possible structural extensions into nanotubes.
APA, Harvard, Vancouver, ISO, and other styles
50

Fonseca, Aurineide Castro. "Function theory for fractional Dirac operators." Doctoral thesis, 2018. http://hdl.handle.net/10773/25081.

Full text
Abstract:
This thesis studies the basic tools of a fractional function theory in higher dimensions by means of a fractional correspondence to the Weyl relations via Gelfond-Leontiev operators of generalized differentiation. A Fischer decomposition is established, and we give an algorithm for the construction of monogenic homogeneous polynomials of arbitrary degree. We also describe the generalized fractional Clifford analysis in the ternary setting and give a complete algebraic and analytic description of the spaces of monogenic functions in this sense, together with their analogous Fischer decomposition, concluding with a description of the basis of the space of fractional homogeneous monogenic polynomials that arises in this case and an explicit algorithm for the construction of this basis. This theory also includes the Cauchy-Kovalevskaya (CK) extension theorem in the generalized fractional setting using the GL derivative operators, and we present the reproducing kernel Hilbert modules that arise from the monogenic formal powers constructed via the CK theorem. In addition, we give the Fourier-like transforms linked to fractional derivative operators with respect to Gelfond-Leontiev operators of generalized differentiation, and their properties.
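For orientation, the classical (non-fractional) Fischer decomposition of Clifford analysis, of which the thesis develops a fractional analogue, splits homogeneous polynomials into monogenic pieces:

```latex
\mathcal{P}_k \;=\; \bigoplus_{s=0}^{k} \underline{x}^{\,s}\,\mathcal{M}_{k-s},
\qquad
\mathcal{M}_j \;:=\; \{\, P \in \mathcal{P}_j \;:\; \partial_{\underline{x}} P = 0 \,\},
```

where $\partial_{\underline{x}}$ denotes the Dirac operator. In the thesis this operator is replaced by its Gelfond-Leontiev fractional counterpart, yielding the fractional version of the decomposition.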
Programa Doutoral em Matemática e Aplicações
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography