Dissertations / Theses on the topic 'Gaussian correlation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Gaussian correlation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Hamdi, Walaa Ahmed. "Local Distance Correlation: An Extension of Local Gaussian Correlation." Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1589239468129597.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ofosuhene, Patrick. "The energy goodness-of-fit test for the inverse Gaussian distribution." Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1605273458669328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mackenzie, Mark. "Correlation with the hermite series using artificial neural network technology." Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20050202.122218/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Roininen, L. (Lassi). "Discretisation-invariant and computationally efficient correlation priors for Bayesian inversion." Doctoral thesis, University of Oulu, 2015. http://urn.fi/urn:isbn:9789526207544.

Full text
Abstract:
We are interested in studying Gaussian Markov random fields as correlation priors for Bayesian inversion. We construct the correlation priors to be discretisation-invariant, which means, loosely speaking, that the discrete priors converge to continuous priors at the discretisation limit. We construct the priors with stochastic partial differential equations, which guarantees computational efficiency via sparse matrix approximations. The stationary correlation priors have a clear statistical interpretation through the autocorrelation function. We also consider how to build a structural model of an unknown object with anisotropic and inhomogeneous Gaussian Markov random fields. Finally, we consider these fields on unstructured meshes, which are needed on complex domains. The publications in this thesis contain fundamental mathematical and computational results on correlation priors. We have considered one application in this thesis, electrical impedance tomography. These fundamental results and this application provide a platform for engineers and researchers to use correlation priors in other inverse problem applications.
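As a rough illustration of the sparse-precision idea described above (not the thesis' actual construction; the grid, scaling and parameter names are assumptions for this sketch), a second-order random-walk prior on a regular 1-D grid can be encoded as a banded sparse precision matrix:

```python
# Minimal sketch, assuming a regular 1-D grid: a second-order random-walk GMRF
# prior expressed through a sparse, banded precision matrix. The h-scaling is
# one simple way to keep the prior roughly consistent as the grid is refined.
import numpy as np
import scipy.sparse as sp

def rw2_precision(n, h, tau=1.0):
    """Sparse precision matrix of a second-order random-walk prior."""
    # D is the (n-2) x n second-difference operator on the grid.
    D = sp.diags([1.0, -2.0, 1.0], offsets=[0, 1, 2], shape=(n - 2, n)) / h**2
    return tau * h * (D.T @ D)            # sparse => cheap factorisations

Q = rw2_precision(n=200, h=1.0 / 199)
print(Q.nnz, "non-zeros out of", 200 * 200)
```

Because the precision matrix is banded, sampling and conditioning scale far better than with a dense covariance, which is the computational point the abstract makes.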
APA, Harvard, Vancouver, ISO, and other styles
5

Prascher, Brian P. Wilson Angela K. "Systematic approaches to predictive computational chemistry using the correlation consistent basis sets." [Denton, Tex.] : University of North Texas, 2009. http://digital.library.unt.edu/permalink/meta-dc-9920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Angelchev, Shiryaev Artem, and Johan Karlsson. "Estimating Dependence Structures with Gaussian Graphical Models : A Simulation Study in R." Thesis, Umeå universitet, Statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184925.

Full text
Abstract:
Graphical models are powerful tools when estimating complex dependence structures among large sets of data. This thesis restricts the scope to undirected Gaussian graphical models. An initial predefined sparse precision matrix was specified to generate multivariate normally distributed data. Utilizing the generated data, a simulation study was conducted reviewing the accuracy, sensitivity and specificity of the estimated precision matrix. The graphical LASSO was applied using four different packages available in R with seven selection criteria for estimating the tuning parameter. The findings are mostly in line with previous research. The graphical LASSO is generally faster and feasible in high dimensions, in contrast to stepwise model selection. A portion of the selection methods for estimating the optimal tuning parameter obtained the true network structure. The results provide an estimate of how well each model obtains the true, predefined dependence structure as featured in our simulation. As the simulated data used in this thesis are merely an approximation of real-world data, one should not take the results as the only aspect of consideration when choosing a model.
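A minimal sketch of this kind of simulation, using scikit-learn's GraphicalLassoCV in place of the R packages and selection criteria studied in the thesis (the dimensions, sparsity pattern and threshold below are illustrative assumptions):

```python
# Generate data from a known sparse precision matrix, fit the graphical lasso
# with a cross-validated tuning parameter, and score support recovery.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
p, n = 20, 500

Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))   # true tridiagonal precision
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=n)

model = GraphicalLassoCV().fit(X)
est_edges = np.abs(model.precision_) > 1e-3
true_edges = np.abs(Theta) > 1e-3
mask = ~np.eye(p, dtype=bool)                                   # off-diagonal entries only

sensitivity = (est_edges & true_edges)[mask].sum() / true_edges[mask].sum()
specificity = (~est_edges & ~true_edges)[mask].sum() / (~true_edges)[mask].sum()
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```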
APA, Harvard, Vancouver, ISO, and other styles
7

Tamburello, Philip Michael. "Iterative Memoryless Non-linear Estimators of Correlation for Complex-Valued Gaussian Processes that Exhibit Robustness to Impulsive Noise." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/64785.

Full text
Abstract:
The autocorrelation function is a commonly used tool in statistical time series analysis. Under the assumption of Gaussianity, the sample autocorrelation function is the standard method used to estimate this function given a finite number of observations. Non-Gaussian, impulsive observation noise following probability density functions with thick tails, which often occurs in practice, can bias this estimator, rendering classical time series analysis methods ineffective. This work examines the robustness of two estimators of correlation based on memoryless nonlinear functions of observations, the Phase-Phase Correlator (PPC) and the Median-of-Ratios Estimator (MRE), which are applicable to complex-valued Gaussian random processes. These estimators are very fast and easy to implement in current processors. We show that these estimators are robust from a bias perspective when complex-valued Gaussian processes are contaminated with impulsive noise, at the expense of statistical efficiency at the assumed Gaussian distribution. Additionally, iterative versions of these estimators, named the IMRE and IPPC, are developed, realizing an improved bias performance over their non-iterative counterparts and the well-known robust Schweppe-type Generalized M-estimator utilizing a Huber cost function (SHGM). An impulsive noise suppression technique is developed using basis pursuit and a priori atom weighting derived from the newly developed iterative estimators. This new technique is proposed as an alternative to the robust filter cleaner, a Kalman filter-like approach that relies on linear prediction residuals to identify and replace corrupted observations. It does not have the same initialization issues as the robust filter cleaner. Robust spectral estimation methods are developed using these new estimators and impulsive noise suppression techniques. Results are obtained for synthetic complex-valued Gaussian processes and real-world digital television signals collected using a software defined radio.
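The PPC and MRE are defined precisely in the dissertation; the sketch below only illustrates the general idea of applying a memoryless nonlinearity (here, projecting complex samples onto the unit circle) before correlating, so that isolated impulses cannot dominate the estimate:

```python
# Compare a classical lag-1 sample correlation with a phase-only variant on a
# complex AR(1) process, with and without 1% impulsive contamination.
import numpy as np

rng = np.random.default_rng(1)
n, rho = 5000, 0.8

w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
x = np.empty(n, dtype=complex)
x[0] = w[0]
for t in range(1, n):
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * w[t]

y = x.copy()
hit = rng.choice(n, size=n // 100, replace=False)
y[hit] += 50 * (rng.standard_normal(hit.size) + 1j * rng.standard_normal(hit.size))

def lag1_classical(z):
    return np.mean(z[1:] * np.conj(z[:-1])) / np.mean(np.abs(z) ** 2)

def lag1_phase_only(z):
    u = z / np.abs(z)                       # memoryless nonlinearity: keep phase only
    return np.mean(u[1:] * np.conj(u[:-1]))

for name, z in [("clean", x), ("contaminated", y)]:
    print(name, abs(lag1_classical(z)), abs(lag1_phase_only(z)))
# The classical estimate collapses under contamination; the phase-only one barely moves.
```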
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
8

Yockel, Scott. "The evaluation, development, and application of the correlation consistent basis sets." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5484/.

Full text
Abstract:
Employing correlation consistent basis sets coupled with electronic structure methods has enabled accurate predictions of chemical properties for second- and third-row main group and transition metal molecular species. For third-row (Ga-Kr) molecules, the performance of the correlation consistent basis sets (cc-pVnZ, n=D, T, Q, 5) for computing energetic (e.g., atomization energies, ionization energies, electron and proton affinities) and structural properties using the ab initio coupled cluster method including single, double, and quasiperturbative triple excitations [CCSD(T)] and the B3LYP density functional method was examined. The impact of relativistic corrections on these molecular properties was determined utilizing the Douglas-Kroll (cc-pVnZ-DK) and pseudopotential (cc-pVnZ-PP) forms of the correlation consistent basis sets. This work was extended to the characterization of molecular properties of novel chemically bonded krypton species, including HKrCl, FKrCF3, FKrSiF3, FKrGeF3, FKrCCF, and FKrCCKrF, and provided the first evidence of krypton bonding to germanium and the first di-krypton system. For second-row (Al-Ar) species, the construction of the core-valence correlation consistent basis sets, cc-pCVnZ was reexamined, and a revised series, cc-pCV(n+d)Z, was developed as a complement to the augmented tight-d valence series, cc-pV(n+d)Z. Benchmark calculations were performed to show the utility of these new sets for second-row species. Finally, the correlation consistent basis sets were used to study the structural and spectroscopic properties of Au(CO)Cl, providing conclusive evidence that luminescence in the solid-state can be attributed to oligomeric species rather than to the monomer.
APA, Harvard, Vancouver, ISO, and other styles
9

Mukhopadhyay, Amit Kumar. "Statistics for motion of microparticles in a plasma." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1369.

Full text
Abstract:
I report experimental and numerical studies of microparticle motion in a dusty plasma. These microparticles are negatively charged and are levitated in a plasma consisting of electrons, ions and neutral gas atoms. The microparticles repel each other, and are confined by the electric fields in the plasma. The neutral gas damps the microparticle motion, and also exerts random forces on them. I investigate and characterize microparticle motion. In order to do this, I study velocity distributions of microparticles and correlations of their motion. To perform such a study, I develop new experimental and analysis techniques. My thesis consists of four separate projects. In the first project, the battle between deterministic and random motion of microparticles is investigated. Two-particle velocity distributions and correlations have previously been studied only in theory. I performed an experiment with a very simple one-dimensional (1D) system of two microparticles in a plasma. My study of velocity correlations involves just two microparticles, which is the simplest system that allows interactions. A study of such a simple system provides insight into the motions of the microparticles. It allowed for the experimental measurement of two-particle distributions and correlations. For such a system, it is shown that the motion of the microparticles is dominated by deterministic or oscillatory effects. In the second project, two experiments with just two microparticles are performed to isolate the effects of ion wakes. The two experiments differ in the alignment of the two microparticles: they are aligned either perpendicular or parallel to the ion flow. To have different alignments, the sheath is shaped differently in the two experiments. I demonstrate that microparticle motion is more correlated when they are aligned along the ion flow, rather than perpendicular to the ion flow. In the third project, I develop a model with some key assumptions to compare with the experiments in the first two projects. My model includes all significant forces: gravity, electrical forces due to the curved sheath and interparticle interaction, and gas forces. The model does not agree with both of the experiments. In the last project, I study non-Gaussian statistics by analyzing data for microparticle motion from an experiment performed under microgravity conditions. Microparticle motion is studied in a very thin region of microparticles in a three-dimensional dust cloud. The microparticle velocity distributions exhibit non-Gaussian characteristics.
APA, Harvard, Vancouver, ISO, and other styles
10

Han, Gang. "Modeling the output from computer experiments having quantitative and qualitative input variables and its applications." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1228326460.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Prascher, Brian P. "Systematic Approaches to Predictive Computational Chemistry using the Correlation Consistent Basis Sets." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc9920/.

Full text
Abstract:
The development of the correlation consistent basis sets, cc-pVnZ (where n = D, T, Q, etc.), has allowed for the systematic elucidation of the intrinsic accuracy of ab initio quantum chemical methods. In density functional theory (DFT), where the cc-pVnZ basis sets are not necessarily optimal in their current form, the elucidation of the intrinsic accuracy of DFT methods cannot always be accomplished. This dissertation outlines investigations into the basis set requirements for DFT and how the intrinsic accuracy of DFT methods may be determined with a prescription involving recontraction of the cc-pVnZ basis sets for specific density functionals. Next, the development and benchmarks of a set of cc-pVnZ basis sets designed for the s-block atoms lithium, beryllium, sodium, and magnesium are presented. Computed atomic and molecular properties agree well with reliable experimental data, demonstrating the accuracy of these new s-block basis sets. In addition to the development of cc-pVnZ basis sets, a new, efficient formalism of the correlation consistent Composite Approach (ccCA) using the resolution of the identity (RI) approximation is employed. The new formalism, denoted 'RI-ccCA,' has marked efficiency in terms of computational time and storage, compared with the ccCA formalism, without the introduction of significant error. Finally, this dissertation reports three separate investigations of the properties of FOOF-like, germanium arsenide, and silicon hydride/halide molecules using high accuracy ab initio methods and the cc-pVnZ basis sets.
APA, Harvard, Vancouver, ISO, and other styles
12

Paler, Mary Elvi Aspiras. "On Modern Measures and Tests of Multivariate Independence." Bowling Green State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1447628176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Bolfarine, Henrique. "Comparative evaluation of network reconstruction methods in high dimensional settings." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-18052017-001105/.

Full text
Abstract:
In the past years, several network reconstruction methods modeled as Gaussian Graphical Models in high-dimensional settings were proposed. In this work we will analyze three different methods: the Graphical Lasso (GLasso), the Graphical Ridge (GGMridge) and a novel method called LPC, or Local Partial Correlation. The evaluation will be performed on high-dimensional data generated from different simulated random graph structures (Erdos-Renyi, Barabasi-Albert, Watts-Strogatz), using the Receiver Operating Characteristic (ROC) curve. We will also apply the methods to the reconstruction of the genetic co-expression network for the differentially expressed genes in cervical cancer tumors.
APA, Harvard, Vancouver, ISO, and other styles
14

LEGGIO, Bruno. "Quantum fluctuations and correlations in equilibrium and nonequilibrium thermodynamics." Doctoral thesis, Università degli Studi di Palermo, 2014. http://hdl.handle.net/10447/90914.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Stynsberg, John. "Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153110.

Full text
Abstract:
Visual tracking is a computer vision problem where the task is to follow a target through a video sequence. Tracking has many important real-world applications in several fields such as autonomous vehicles and robot vision. Since visual tracking does not assume any prior knowledge about the target, it faces different challenges such as occlusion, appearance change, background clutter and scale change. In this thesis we try to improve the capabilities of tracking frameworks using discriminative correlation filters by incorporating scene depth information. We utilize scene depth information on three main levels. First, we use raw depth information to segment the target from its surroundings, enabling occlusion detection and scale estimation. Second, we investigate different visual features calculated from depth data to decide which features are good at encoding geometric information available solely in depth data. Third, we investigate handling missing data in the depth maps using a modified version of the normalized convolution framework. Finally, we introduce a novel approach for parameter search using genetic algorithms to find the best hyperparameters for our tracking framework. Experiments show that depth data can be used to estimate scale changes and handle occlusions. In addition, visual features calculated from depth are more representative if they are combined with color features. It is also shown that utilizing normalized convolution improves the overall performance in some cases. Lastly, the usage of genetic algorithms for hyperparameter search leads to accuracy gains as well as some insights on the performance of different components within the framework.
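A minimal single-channel sketch of the discriminative correlation filter at the core of such trackers (in the spirit of MOSSE); the depth cues, feature channels and scale handling described in the thesis are not reproduced here, and the patch, label width and regulariser are illustrative assumptions:

```python
# Train a correlation filter in the Fourier domain so that correlating it with
# the target patch yields a sharp Gaussian-shaped peak, then localise the
# target in a new (here, synthetically shifted) patch from the response peak.
import numpy as np

def gaussian_label(shape, sigma=2.0):
    h, w = shape
    y, x = np.mgrid[:h, :w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma**2))

def train_filter(patch, label, lam=1e-2):
    F, G = np.fft.fft2(patch), np.fft.fft2(label)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)      # conjugate filter H*

def detect(H_conj, patch):
    response = np.real(np.fft.ifft2(np.fft.fft2(patch) * H_conj))
    return np.unravel_index(np.argmax(response), response.shape)

rng = np.random.default_rng(0)
patch = rng.standard_normal((64, 64))
H_conj = train_filter(patch, gaussian_label(patch.shape))
print(detect(H_conj, np.roll(patch, (5, -3), axis=(0, 1))))  # the peak follows the shift
```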
APA, Harvard, Vancouver, ISO, and other styles
16

Chrzanowski, Helen Mary. "Extracting quantum correlations from gaussian states." Phd thesis, Canberra, ACT : The Australian National University, 2014. http://hdl.handle.net/1885/14511.

Full text
Abstract:
The Gaussian toolbox of the continuous variables provides for deterministic, high-efficiency operations with non-classical states. Its very Gaussian nature, however, restricts its reach for quantum information and communication applications. This thesis comprises three experimental works, which seek to examine the strengths of this toolbox and address some of its weaknesses. The measurement-based non-linearity of a conditional photon-counting measurement can be used to 'de-Gaussify' a Gaussian state of light. Here, we propose a continuous variable analog of just such a 'heralding' measurement, replacing a non-deterministic photon-counting measurement with a deterministic measurement of the field quadratures. Such a technique cannot be used to prepare a non-Gaussian state, but it can, on average, yield the same non-Gaussian statistics. We demonstrate this technique by reconstructing the statistics of non-Gaussian photon-subtracted squeezed vacuum states. We then consider the problem of noiseless linear amplification. We experimentally demonstrate that in certain scenarios, the requirement for a physical noiseless linear amplifier can be exchanged for a straightforward post-selection of the measurement record. We apply our 'virtual' noiseless amplifier to entanglement degraded by transmission loss of up to the equivalent of 100 km of optical fibre. We extract an effective entangled resource stronger than even that achievable with a maximally entangled resource passively transmitted through the same channel. We also provide a proof-of-principle demonstration of the value of the measurement-based noiseless linear amplifier for quantum key distribution, extracting a secret key from an otherwise insecure regime. Lastly, we turn to the recently popularised measure of all quantum correlations: quantum discord. Quantum discord has emerged as a measure of quantum correlations beyond entanglement, with significant ramifications for our understanding of Gaussian states. Here, we introduce a simple protocol that yields an operational interpretation of quantum discord: that discord describes information only accessible via coherent interactions. We first experimentally encode information within the discordant correlations of two separable Gaussian states. The amount of extra information recovered by coherent interactions is directly linked to the discord of the original state.
APA, Harvard, Vancouver, ISO, and other styles
17

Hanafi, Effariza binti. "Quickest spectrum sensing with multiple antennas: performance analysis in various fading channels." Thesis, University of Canterbury. Electrical and Computer Engineering, 2014. http://hdl.handle.net/10092/9642.

Full text
Abstract:
Traditional wireless networks are regulated by a fixed spectrum assignment policy. This results in situations where most of the allocated radio spectrum is not utilized. In order to address this spectrum underutilization, cognitive radio (CR) has emerged as a promising solution. Spectrum sensing is an essential component in CR networks to discover spectrum opportunities. The most common spectrum sensing techniques are energy detection, matched filtering or cyclostationary feature detection, which aim to maximize the probability of detection subject to a certain false alarm rate. Besides probability of detection, detection delay is also a crucial criterion in spectrum sensing. In an interweave CR network, quick detection of the absence of the primary user (PU), which is the owner of the licensed spectrum, allows good utilization of unused spectrum, while quick detection of PU transmission is important to avoid any harmful interference. This thesis considers quickest spectrum sensing, where the aim is to detect the PU with minimal detection delay subject to a certain false alarm rate. In the earlier chapters of this thesis, a single-antenna cognitive user (CU) is considered and we study quickest spectrum sensing performance in the Gaussian channel and classical fading channel models, including Rayleigh, Rician, Nakagami-m and a long-tailed channel. We prove that the power of the complex received signal is a sufficient statistic and derive the probability density function (pdf) of the received signal amplitude for all of the fading cases. The novel derivation of the pdfs of the amplitude of the received signal for the Rayleigh, Rician and Nakagami-m channels uses an approach which avoids numerical integration. We also consider the event of a mis-matched channel, where the cumulative sum (CUSUM) detector is designed for a specific channel, but a different channel is experienced. This scenario could occur in a CR network as the channel may not be known and hence the CUSUM detector may be experiencing a different channel. Simulation results illustrate that the average detection delay depends greatly on the channel but very little on the nature of the detector. Hence, the simplest time-invariant detector can be employed with minimal performance loss. Theoretical expressions for the distribution of detection delay for the time-invariant CUSUM detector, with a single-antenna CU, are developed. These are useful for a more detailed analysis of the quickest spectrum sensing performance. We present several techniques to approximate the distribution of detection delay, including deriving a novel closed-form expression for the detection delay distribution when the received signal experiences a Gaussian channel. We also derive novel approximations for the distribution of detection delay for the general case due to the absence of a general framework. Most of the techniques are general and can be applied to any independent and identically distributed (i.i.d) channel. Results show that different signal-to-noise ratio (SNR) and detection delay conditions require different methods in order to achieve good approximations of the detection delay distributions. The remarkably simple Brownian motion approach gives the best approximation for longer detection delays. In addition, results show that the type of fading channel has very little impact on long detection delays. In later chapters of this thesis, we employ multiple receive antennas at the CU.
In particular, we study the performance of multi-antenna quickest spectrum sensing when the received signal experiences Gaussian, independent and correlated Rayleigh and Rician channels. The pdfs of the received signals required to form the CUSUM detector are derived for each of the scenarios. The extension into multiple antennas allows us to gain some insight into the reduction in detection delay that multiple antennas can provide. Results show that the sensing performance increases with an increasing Rician K-factor. In addition, channel correlation has little impact on the sensing performance at high SNR, whereas at low SNR, increasing correlation between channels improves the quickest spectrum sensing performance. We also consider mis-matched channel conditions and show that the quickest spectrum sensing performance at a particular correlation coefficient or Rician K-factor depends heavily on the true channel irrespective of the number of antennas at the CU and is relatively insensitive to the channel used to design the CUSUM detector. Hence, a simple multi-antenna time-invariant detector can be employed. Based on the results obtained in the earlier chapters, we derive theoretical expressions for the detection delay distribution when multiple receive antennas are employed at the CU. In particular, the approximation of the detection delay distribution is based on the Brownian motion approach.
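A minimal sketch of the CUSUM detector underlying quickest detection, assuming i.i.d. samples with a known pre-change density f0 and post-change density f1; the thesis replaces f1 with the derived densities of the received signal under the various fading channels:

```python
# Accumulate the log-likelihood ratio, clipped at zero, and raise an alarm when
# the statistic crosses a threshold chosen for the desired false-alarm rate.
import numpy as np
from scipy.stats import norm

def cusum_alarm(samples, f0, f1, threshold):
    s = 0.0
    for k, x in enumerate(samples):
        s = max(0.0, s + np.log(f1(x) / f0(x)))
        if s > threshold:
            return k                     # first sample at which the alarm is raised
    return None

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 300),    # noise only
                       rng.normal(1, 1, 300)])   # primary user appears at sample 300
print(cusum_alarm(data, norm(0, 1).pdf, norm(1, 1).pdf, threshold=5.0))
```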
APA, Harvard, Vancouver, ISO, and other styles
18

Akhtar, Mahmood Electrical Engineering & Telecommunications Faculty of Engineering UNSW. "Genomic sequence processing: gene finding in eukaryotes." Publisher: University of New South Wales. Electrical Engineering & Telecommunications, 2008. http://handle.unsw.edu.au/1959.4/40912.

Full text
Abstract:
Of the many existing eukaryotic gene finding software programs, none are able to guarantee accurate identification of genomic protein coding regions and other biological signals central to the pathway from DNA to protein. Eukaryotic gene finding is difficult mainly due to the noncontiguous and non-continuous nature of genes. Existing approaches are heavily dependent on the compositional statistics of the sequences they learn from and are not equally suitable for all types of sequences. This thesis firstly develops efficient digital signal processing-based methods for the identification of genomic protein coding regions, and then combines the optimum signal processing-based non-data-driven technique with an existing data-driven statistical method in a novel system demonstrating improved identification of acceptor splice sites. Most existing well-known DNA symbolic-to-numeric representations map the DNA information into three or four numerical sequences, potentially increasing the computational requirement of the sequence analyzer. The proposed mapping schemes, to be used for signal processing-based gene and exon prediction, incorporate DNA structural properties in the representation, in addition to reducing complexity in subsequent processing. A detailed comparison of all DNA representations, in terms of computational complexity and relative accuracy for the gene and exon prediction problem, reveals the newly proposed 'paired numeric' to be the best DNA representation. Existing signal processing-based techniques rely mostly on the period-3 behaviour of exons to obtain one-dimensional gene and exon prediction features, and are not well equipped to capture the complementary properties of exonic / intronic regions and deal with the background noise in detection of exons at the nucleotide level. These issues have been addressed in this thesis by proposing six one-dimensional and three multi-dimensional signal processing-based gene and exon prediction features. All one-dimensional and multi-dimensional features have been evaluated using standard datasets such as Burset/Guigo1996, HMR195, and the GENSCAN test set. This is the first time that different gene and exon prediction features have been compared using substantial databases and using nucleotide-level metrics. Furthermore, the first investigation of the suitability of different window sizes for period-3 exon detection is performed. Finally, the optimum signal processing-based gene and exon prediction scheme from our evaluations is combined with a data-driven statistical technique for the recognition of acceptor splice sites. The proposed DSP-statistical hybrid is shown to achieve a 43% reduction in false positives over WWAM, as used in GENSCAN.
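As a point of reference for the period-3 behaviour mentioned above, the classic baseline feature is the DFT magnitude at frequency 1/3 of the base-indicator sequences over a sliding window; the window length and step below are illustrative, and the thesis' own representations and features go well beyond this sketch:

```python
# Sliding-window period-3 measure: sum over the four base indicator sequences
# of the squared DFT magnitude at k = N/3 (N chosen divisible by 3).
import numpy as np

def period3_score(seq, window=351, step=10):
    k = window // 3
    positions, scores = [], []
    for start in range(0, len(seq) - window + 1, step):
        win = seq[start:start + window]
        total = 0.0
        for base in "ACGT":
            u = np.array([1.0 if c == base else 0.0 for c in win])
            total += np.abs(np.fft.fft(u)[k]) ** 2
        positions.append(start + window // 2)
        scores.append(total)
    return np.array(positions), np.array(scores)

rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("ACGT"), size=2000))          # random stand-in sequence
pos, s = period3_score(seq)
print(s.mean())   # coding regions would stand out as peaks above this background level
```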
APA, Harvard, Vancouver, ISO, and other styles
19

Stabingiene, Lijana. "Image analysis using Bayes discriminant functions." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120917_092640-83758.

Full text
Abstract:
Image analysis is very important because of its usage in many different areas of science and industry. Pattern recognition (classification) is a tool used in image analysis. Statistical pattern recognition, based on Bayes discriminant functions, is the object of this work. The main problem is to classify a stationary Gaussian random field observation into one of two classes, considering that it is dependent on the training sample and taking into account its relationship with the training sample. A new supervised classification method, based on Bayes discriminant functions, is proposed, and it gives better results compared with other commonly used Bayes discriminant functions. The method is implemented in the R environment and investigated experimentally by reconstructing images corrupted by spatially correlated noise. Such a situation occurs naturally when, for example, smoke from a forest fire covers a remotely sensed image gathered from a satellite; it is also common on cloudy days. In such situations, incorporating the spatial dependences into the classification problem is useful. Analytical error rates of the Bayes discriminant functions, which serve as the performance criterion for these functions, are derived. The dependence of these error rates on the statistical parameters is also investigated.
APA, Harvard, Vancouver, ISO, and other styles
20

Ponzer, Martin. "Detekce a rozpoznávání obličeje." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217773.

Full text
Abstract:
This paper discusses problems of computer vision, dealing with face detection and recognition in images and video sequences in real time. All methods are designed for color images and are based on skin detection using information about human skin color. Skin detection uses a very effective method, the Gaussian distribution. All areas that have human skin color are classified, and this classification specifies which areas are and are not faces. Face detection uses a correlation method complemented by the eigenfaces method. All areas classified as a face are subsequently recognized by the eigenfaces method. The result of the recognition phase is information about the person's identity.
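A common way to realise the Gaussian skin-colour model mentioned above is to fit a mean and covariance to skin pixels in a chrominance space and threshold the Mahalanobis distance; the training data, colour space and threshold below are placeholders, not the values used in the thesis:

```python
# Fit a single Gaussian to skin samples in the (Cb, Cr) plane and produce a
# binary skin mask for an image by thresholding the squared Mahalanobis distance.
import numpy as np

def fit_skin_model(skin_cbcr):
    mu = skin_cbcr.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(skin_cbcr, rowvar=False))
    return mu, cov_inv

def skin_mask(image_cbcr, mu, cov_inv, thresh=6.0):
    d = image_cbcr - mu
    m2 = np.einsum("...i,ij,...j->...", d, cov_inv, d)   # squared Mahalanobis distance
    return m2 < thresh

rng = np.random.default_rng(0)
skin_train = rng.multivariate_normal([150.0, 120.0], [[50.0, 10.0], [10.0, 40.0]], size=500)
mu, cov_inv = fit_skin_model(skin_train)
frame_cbcr = rng.uniform(0.0, 255.0, size=(120, 160, 2))  # stand-in Cb/Cr channels
print(skin_mask(frame_cbcr, mu, cov_inv).mean())          # fraction flagged as skin
```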
APA, Harvard, Vancouver, ISO, and other styles
21

Voloshina, Elena, Denis Usvyat, Martin Schütz, Yuriy Dedkov, and Beate Paulus. "On the physisorption of water on graphene: a CCSD(T) study." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-138776.

Full text
Abstract:
The electronic structure of the zero-gap two-dimensional graphene has a charge neutrality point exactly at the Fermi level that limits the practical application of this material. There are several ways to modify the Fermi-level region of graphene, e.g. adsorption of graphene on different substrates or of different molecules on its surface. In all cases the so-called dispersion or van der Waals interactions can play a crucial role in the mechanism, which describes the modification of the electronic structure of graphene. The adsorption of water on graphene is not very accurately reproduced in standard density functional theory (DFT) calculations and highly accurate quantum-chemical treatments are required. A possibility to apply wavefunction-based methods to extended systems is the use of local correlation schemes. The adsorption energies obtained in the present work by means of CCSD(T) are much higher in magnitude than the values calculated with a standard DFT functional, although both agree that physisorption is observed. The obtained results are compared with the values available in the literature for the binding of water on graphene-like substrates.
This article is freely accessible with the consent of the rights holder, on the basis of a (DFG-funded) Alliance or National Licence.
APA, Harvard, Vancouver, ISO, and other styles
22

Rowley, Alexander. "Signal processing methods for cerebral autoregulation." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:3d85ab53-9c9b-4b50-98f2-2e67848e5da4.

Full text
Abstract:
Cerebral autoregulation describes the clinically observed phenomenon that cerebral blood flow remains relatively constant in healthy human subjects despite large systemic changes in blood pressure, dissolved blood gas concentrations, heart rate and other systemic variables. Cerebral autoregulation is known to be impaired post ischaemic stroke, after severe head injury, in patients suffering from autonomic dysfunction and under the action of various drugs. Cerebral auto-regulation is a dynamic, multivariate phenomenon. Sensitive techniques are required to monitor cerebral auto-regulation in a clinical setting. This thesis presents 4 related signal processing studies of cerebral autoregulation. The first study shows how consideration of changes in blood gas concentrations simultaneously with changes in blood pressure can improve the accuracy of an existing frequency domain technique for monitoring cerebral autoregulation from spontaneous fluctuations in blood pressure and a transcranial doppler measure of cerebral blood flow velocity. The second study shows how the continuous wavelet transform can be used to investigate coupling between blood pressure and near infrared spectroscopy measures of cerebral haemodynamics in patients with autonomic failure. This introduces time information into the frequency based assessment, however neglects the contribution of blood gas concentrations. The third study shows how this limitation can be resolved by introducing a new time-varying multivariate system identification algorithm based around the dual tree undecimated wavelet transform. All frequency and time-frequency domain methods of monitoring cerebral autoregulation assume linear coupling between the variables under consideration. The fourth study therefore considers nonlinear techniques of monitoring cerebral autoregulation, and illustrates some of the difficulties inherent in this form of analysis. The general approach taken in this thesis is to formulate a simple system model; usually in the form of an ODE or a stochastic process. The form of the model is adapted to encapsulate a hypothesis about features of cerebral autoregulation, particularly those features that may be difficult to recover using existing methods of analysis. The performance of the proposed method of analysis is then evaluated under these conditions. After this testing, the techniques are then applied to data provided by the Laboratory of Human Cerebrovascular Physiology in Alberta, Canada, and the National Hospital for Neurology and Neurosurgery in London, UK.
APA, Harvard, Vancouver, ISO, and other styles
23

Voloshina, Elena, Denis Usvyat, Martin Schütz, Yuriy Dedkov, and Beate Paulus. "On the physisorption of water on graphene: a CCSD(T) study." Royal Society of Chemistry, 2011. https://tud.qucosa.de/id/qucosa%3A27779.

Full text
Abstract:
The electronic structure of the zero-gap two-dimensional graphene has a charge neutrality point exactly at the Fermi level that limits the practical application of this material. There are several ways to modify the Fermi-level region of graphene, e.g. adsorption of graphene on different substrates or of different molecules on its surface. In all cases the so-called dispersion or van der Waals interactions can play a crucial role in the mechanism, which describes the modification of the electronic structure of graphene. The adsorption of water on graphene is not very accurately reproduced in standard density functional theory (DFT) calculations and highly accurate quantum-chemical treatments are required. A possibility to apply wavefunction-based methods to extended systems is the use of local correlation schemes. The adsorption energies obtained in the present work by means of CCSD(T) are much higher in magnitude than the values calculated with a standard DFT functional, although both agree that physisorption is observed. The obtained results are compared with the values available in the literature for the binding of water on graphene-like substrates.
This article is freely accessible with the consent of the rights holder, on the basis of a (DFG-funded) Alliance or National Licence.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Yulei. "Computer Experiments with Both Quantitative and Qualitative Inputs." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1408042133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

De, Kock Michiel Burger. "Gaussian and non-Gaussian-based Gram-Charlier and Edgeworth expansions for correlations of identical particles in HBT interferometry." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/3124.

Full text
Abstract:
Thesis (MSc (Physics))--University of Stellenbosch, 2009.
Hanbury Brown-Twiss interferometry is a correlation technique by which the size and shape of the emission function of identical particles created during collisions of high-energy leptons, hadrons or nuclei can be determined. Accurate experimental datasets of three-dimensional correlation functions in momentum space now exist; these are sometimes almost Gaussian in form, but may also show strong deviations from Gaussian shapes. We investigate the suitability of expressing these correlation functions in terms of statistical quantities beyond the normal Gaussian description. Beyond means and the covariance matrix, higher-order moments and cumulants describe the form of, and difference between, the measured correlation function and a Gaussian distribution. The corresponding series expansion is the Gram-Charlier series and in particular the Gram-Charlier Type A expansion found in the literature, which is based on a Gaussian reference distribution. We investigate both the Gram-Charlier Type A series as well as generalised forms based on non-Gaussian reference distributions, as well as the related Edgeworth expansion. For testing purposes, experimental data is initially represented by a suite of one-dimensional analytic non-Gaussian distributions. We conclude that the accuracy of these expansions can be improved dramatically through a better choice of reference distribution, suggested by the sign and size of the kurtosis of the experimental distribution. We further extend our investigation to simulated samples of such test distributions and simplify the theoretical expressions for unbiased estimators (k-statistics) for the case of symmetric distributions.
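For reference, the textbook Gram-Charlier Type A expansion around a Gaussian reference, which is the starting point the abstract refers to (the thesis generalises it to non-Gaussian reference distributions), reads, for a standardised variable:

```latex
% Gram-Charlier Type A series for a zero-mean, unit-variance variable;
% He_n are the probabilists' Hermite polynomials and kappa_n the cumulants.
f(x) \approx \varphi(x)\left[ 1 + \frac{\kappa_3}{3!}\,He_3(x)
      + \frac{\kappa_4}{4!}\,He_4(x) + \cdots \right],
\qquad \varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.
```

The Edgeworth expansion collects the same terms but orders them in powers of n^{-1/2}, which is why the two series behave differently when truncated.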
APA, Harvard, Vancouver, ISO, and other styles
26

Quinn, Niall. "Gaussian non-classical correlations in bipartite dissipative continuous variable quantum systems." Thesis, University of St Andrews, 2015. http://hdl.handle.net/10023/6915.

Full text
Abstract:
This thesis probes the usefulness of non-classical correlations within imperfect continuous variable decoherent quantum systems. Although a consistent function and practical usefulness of these correlations are largely unknown, it is important to examine their characteristics in more realistic dissipative systems, to gain further insight into any possible advantageous behaviour. A bipartite separable discordant state under the action of controlled loss on one subsystem was considered. Under these conditions the Gaussian quantum discord not only proved to be robust against loss, but actually improves as loss is intensified. Harmful imperfections which reduce the achievable level of discord can be counteracted by this controlled loss. Through a purification, an explanation of this effect was sought by considering system-environment correlations, and it was found that a flow of system-environment correlations increases the quantumness of the state. Entanglement recovery possibilities were discussed and revealed the importance of hidden quantum correlations along bi-partitions across the discordant state and a classically prepared "demodulating" system, acting in such a way as to partially cancel the entanglement-preventing noise. Entanglement distribution by separable states was studied within a similar framework, in an attempt to explain the emergence of quantum entanglement by a specific flow of correlations in the globally pure system. Discord appears to play a less fundamental role compared to the qubit version of the protocol. The strengthening of non-classical correlations can be attributed to a flow of classical and quantum correlations. This work proves that discord can be created in unique ways and, in select circumstances, can act to counteract harmful imperfections in the apparatus. Due to this advantageous behaviour, discord may indeed ultimately aid in more applicable "real world" applications, which are by definition decoherent.
APA, Harvard, Vancouver, ISO, and other styles
27

BARDET, JEAN MARC. "Tests d'autosimilarite des processus gaussiens. Dimension fractale et dimension de correlation." Paris 11, 1997. http://www.theses.fr/1997PA112351.

Full text
Abstract:
This thesis consists of four parts whose common denominator is the notion of fractality. The first part is a bibliographic review of chaotic dynamical systems, focusing on the various dimensions of an attractor and on Lyapunov exponents, and surveying the statistical treatments used to estimate these quantities, in particular local Lyapunov exponents and the correlation dimension. The second part defines and studies a so-called local correlation dimension for continuous stochastic processes; this dimension is determined from the asymptotic behaviour of the self-intersection occupation measure. It is compared with the Hausdorff dimension, and both dimensions are determined for Gaussian fields with an index. The third part presents a self-similarity test for a series of observations coming from a Gaussian process with stationary increments. The test relies on estimating a distance between the observed process and a set of processes containing all fractional Brownian motions; this distance is built from a double estimation, over a range of time scales, of the variance of the increments of the process. The second of these estimations relies on a double estimation of the self-similarity parameter by regression methods. These estimators are computationally fast and very robust compared with maximum likelihood estimation. The fourth part considers a second self-similarity test built on the same principle but using wavelet analysis; the test statistic is then a distance between two different estimates of the variance of the wavelet coefficients at certain scales. Two further simple and robust estimators of the self-similarity parameter are introduced.
APA, Harvard, Vancouver, ISO, and other styles
28

Lartigue, Thomas. "Mixtures of Gaussian Graphical Models with Constraints Gaussian Graphical Model exploration and selection in high dimension low sample size setting." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX034.

Full text
Abstract:
Describing the co-variations between several observed random variables is a delicate problem. Dependency networks are popular tools that depict the relations between variables through the presence or absence of edges between the nodes of a graph. In particular, conditional correlation graphs are used to represent the “direct” correlations between nodes of the graph. They are often studied under the Gaussian assumption and consequently referred to as “Gaussian Graphical Models” (GGM). A single network can be used to represent the overall tendencies identified within a data sample. However, when the observed data is sampled from a heterogeneous population, then there exist different sub-populations that all need to be described through their own graphs. What is more, if the sub-population (or “class”) labels are not available, unsupervised approaches must be implemented in order to correctly identify the classes and describe each of them with its own graph. In this work, we tackle the fairly new problem of Hierarchical GGM estimation for unlabelled heterogeneous populations. We explore several key axes to improve the estimation of the model parameters as well as the unsupervised identification of the sub-populations. Our goal is to ensure that the inferred conditional correlation graphs are as relevant and interpretable as possible. First - in the simple, homogeneous population case - we develop a composite method that combines the strengths of the two main state-of-the-art paradigms to correct their weaknesses. For the unlabelled heterogeneous case, we propose to estimate a Mixture of GGM with an Expectation Maximisation (EM) algorithm. In order to improve the solutions of this EM algorithm, and avoid falling into sub-optimal local extrema in high dimension, we introduce a tempered version of this EM algorithm, which we study theoretically and empirically. Finally, we improve the clustering of the EM by taking into consideration the effect that external co-features can have on the position of the observed data in their space.
APA, Harvard, Vancouver, ISO, and other styles
29

Mohamed, Fathi Husain Alhadi. "Mitigation of Amplitude and Phase Distortion of Signals Under Modified Von Karman Turbulence Using Encrypted Chaos Waves." University of Dayton / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1467943201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Engelke, Sebastian. "Brown-Resnick Processes: Analysis, Inference and Generalizations." Doctoral thesis, Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2012. http://hdl.handle.net/11858/00-1735-0000-000D-F1B3-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Larsson, Karl. "Scale-Space Methods as a Means of Fingerprint Image Enhancement." Thesis, Linköping University, Department of Science and Technology, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2282.

Full text
Abstract:

The usage of automatic fingerprint identification systems as a means of identification and/or verification has increased substantially during the last couple of years. It is well known that small deviations may occur within a fingerprint over time, a problem referred to as template ageing. This problem, and other reasons for deviations between two images of the same fingerprint, complicates the identification/verification process, since distinct features may appear somewhat different in the two images that are matched. Different kinds of fingerprint image enhancement algorithms are commonly used to try to minimise this type of problem. This thesis tests different methods within the scale-space framework and evaluates their performance as fingerprint image enhancement methods.

The methods tested within this thesis range from linear scale-space filtering, where no prior information about the images is known, to scalar and tensor driven diffusion, where analysis of the images precedes and controls the diffusion process.

The linear scale-space approach is shown to improve correlation values, which was anticipated since the image structure is flattened at coarser scales. There is, however, no increase in the number of accurate matches, since inaccurate features also tend to get higher correlation values at large scales.

The nonlinear isotropic scale-space (scalar dependent diffusion), or edge-preservation, approach is proven to be an ill-suited method for fingerprint image enhancement. This is due to the fact that the analysis of edges may be unreliable, since edge structure is often distorted in fingerprints affected by the template ageing problem.

The nonlinear anisotropic scale-space (tensor dependent diffusion), or coherence-enhancing, method does not give any overall improvement in the number of accurate matches. It is however shown that for a certain type of template ageing problem, where the deviating structure does not significantly affect the ridge orientation, the nonlinear anisotropic diffusion is able to accurately match correlation pairs that resulted in a false match before they were enhanced.
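A minimal sketch of the linear scale-space construction referred to above (the image and scale values are stand-ins): the scale-space is simply the image smoothed by Gaussians of increasing width.

```python
# Build a linear (Gaussian) scale-space stack: one smoothed copy of the image
# per scale t, using a Gaussian of standard deviation sqrt(t).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
fingerprint = rng.random((256, 256))       # stand-in for a fingerprint image

scales = [1.0, 2.0, 4.0, 8.0]
stack = np.stack([gaussian_filter(fingerprint, sigma=np.sqrt(t)) for t in scales])
print(stack.shape)                         # (4, 256, 256): one layer per scale
```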

APA, Harvard, Vancouver, ISO, and other styles
32

Петранова, Марина Юрiївна. "Випадковi гауссовi процеси зi стiйкими кореляцiйними функцiями." Doctoral thesis, Київ, 2021. https://ela.kpi.ua/handle/123456789/40592.

Full text
Abstract:
The work was carried out at the Department of Applied Mathematics of Vasyl' Stus Donetsk National University, Ministry of Education and Science of Ukraine.
The dissertation is devoted to the study of random Gaussian processes with stable correlation functions and their properties.
APA, Harvard, Vancouver, ISO, and other styles
33

Hassani, Mujtaba. "CONSTRUCTION EQUIPMENT FUEL CONSUMPTION DURING IDLING : Characterization using multivariate data analysis at Volvo CE." Thesis, Mälardalens högskola, Akademin för ekonomi, samhälle och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-49007.

Full text
Abstract:
Human activities have increased the concentration of CO2 in the atmosphere, causing global warming. Construction equipment is semi-stationary machinery that spends at least 30% of its lifetime idling. The majority of construction equipment is diesel powered and emits toxic emissions into the environment. In this work, idling is investigated by adopting several statistical regression models to quantify the fuel consumption of construction equipment during idling. The regression models studied in this work are: Multivariate Linear Regression (ML-R), Support Vector Machine Regression (SVM-R), Gaussian Process Regression (GP-R), Artificial Neural Network (ANN), Partial Least Squares Regression (PLS-R) and Principal Components Regression (PC-R). Findings show that pre-processing has a significant impact on the goodness of the prediction of the explanatory data analysis in this field. Moreover, through mean centering and application of the max-min scaling feature, the accuracy of the models increased remarkably. ANN and GP-R had the highest accuracy (99%), PLS-R was the third most accurate model (98% accuracy), ML-R was the fourth-best model (97% accuracy), SVM-R was the fifth-best (73% accuracy) and the lowest accuracy was recorded for PC-R (83% accuracy). The second part of this project estimated the CO2 emissions based on the fuel used and by adopting the NONROAD2008 model.
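A minimal sketch of one of the model families listed above (Gaussian process regression) on synthetic stand-in data, with the max-min scaling mentioned in the abstract; the features, sizes and kernel are assumptions, not the thesis' actual setup:

```python
# Scale the inputs, fit a GP regressor with an RBF + white-noise kernel, and
# report the coefficient of determination on a held-out portion of the data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))                        # stand-in operating variables
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

X_scaled = MinMaxScaler().fit_transform(X)            # max-min scaling step
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_scaled[:150], y[:150])
print("held-out R^2:", gpr.score(X_scaled[150:], y[150:]))
```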
APA, Harvard, Vancouver, ISO, and other styles
34

Blandino, Rémi. "Intrication de champs quantiques mesoscopiques pour les communications quantiques." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00827393.

Full text
Abstract:
This thesis falls within the framework of quantum information with continuous variables, using quantum states of the electromagnetic field. By combining the tools of discrete variables, where light is described in terms of photons, with those of continuous variables, where light is described in terms of quadratures, we can study theoretically and produce experimentally non-classical states, as well as elementary quantum information protocols. We experimentally produced a "Schrödinger cat" state, a quantum superposition of two quasi-classical states of light, to which we applied a quantum gate introducing a phase into the superposition. We then analysed the quality of this gate using a simple model of our experiment. Next, we studied quantum correlations, measured by quantum discord, for a class of states of particular importance in quantum information, and quantified the precision of our measurements by comparing them with the classical and quantum Cramér-Rao bounds. Finally, we studied theoretically the use of a non-deterministic quantum amplifier in quantum cryptography. This amplifier has the property of being able to amplify quantum states without amplifying the associated quantum noise. We showed that it improves the maximum distance over which a secret key can be transmitted, as well as the resistance to noise introduced by the quantum channel.
APA, Harvard, Vancouver, ISO, and other styles
35

Snežana, Gordić. "Generalized stochastic processes with applications in equation solving." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=110199&source=NDLTD&language=en.

Full text
Abstract:
In this dissertation stochastic processes are regarded in the framework of Colombeau-type algebras of generalized functions. Such processes are called Colombeau stochastic processes. The notion of point values of Colombeau stochastic processes in compactly supported generalized points is established. The Colombeau algebra of compactly supported generalized constants is endowed with the topology generated by sharp open balls. The measurability of the corresponding random variables with values in the Colombeau algebra of compactly supported generalized constants is shown. The generalized correlation function and the generalized characteristic function of Colombeau stochastic processes are introduced and their properties are investigated. It is shown that the characteristic function of classical stochastic processes can be embedded into the space of generalized characteristic functions. Examples of generalized characteristic functions related to Gaussian Colombeau stochastic processes are given. The structural representation of the generalized correlation function which is supported on the diagonal is given. Colombeau stochastic processes with independent values are introduced. Strictly stationary and weakly stationary Colombeau stochastic processes are studied. Colombeau stochastic processes with stationary increments are characterized via the stationarity of the gradient of the process. Gaussian stationary solutions are analyzed for linear stochastic partial differential equations with generalized constant coefficients in the framework of Colombeau stochastic processes.
APA, Harvard, Vancouver, ISO, and other styles
36

Chang, Weijen, and 詹惟任. "Correlation of Fisher-Wright Model and Inverse Gaussian Distribution." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/60326536752542861150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Chen, I.-sheng, and 陳易聖. "Gaussian Integer Sequences of Length 4n with Ideal Periodic Auto-Correlation Function." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/b2f45x.

Full text
Abstract:
Master's
National Sun Yat-sen University
Institute of Communications Engineering
97
Many researchers have developed polyphase sequences, so-called "perfect" or "ideal" sequences, with an ideal periodic auto-correlation function. Many applications in communication systems depend on sequences with good auto-correlation properties, e.g., synchronization, channel estimation and multiple access. In practice, however, such sequences cannot maintain the ideal property because of quantization errors in the digital signal processing of the transmitter. In contrast, we develop a novel set of perfect sequences, Gaussian integer perfect sequences (GIPS), which contain only Gaussian integers. In this work, we construct them by linear combination and cyclic shift of eight base sequences. We present the design and basic properties of the sequences. Furthermore, a design method for sequences with the smallest dynamic range is presented.
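To make the "ideal periodic auto-correlation" property concrete, the sketch below checks it numerically for a simple length-4 Gaussian integer sequence (a scaled binary perfect sequence used here only as an example; it is not necessarily one of the thesis constructions).

```python
# A minimal sketch: verify that a sequence has an ideal (perfect) periodic
# auto-correlation, i.e. all out-of-phase correlations are zero. The example
# sequence is a simple known perfect sequence, not a construction from the thesis.
import numpy as np

def periodic_autocorrelation(s: np.ndarray) -> np.ndarray:
    """R[tau] = sum_n s[n] * conj(s[(n + tau) mod N]), for tau = 0, ..., N-1."""
    N = len(s)
    return np.array([np.sum(s * np.conj(np.roll(s, -tau))) for tau in range(N)])

def is_perfect(s: np.ndarray, tol: float = 1e-9) -> bool:
    R = periodic_autocorrelation(s)
    return bool(np.all(np.abs(R[1:]) < tol))

# A length-4 Gaussian integer sequence: (1+j) times the binary perfect sequence [1, 1, 1, -1].
s = (1 + 1j) * np.array([1, 1, 1, -1])
print(periodic_autocorrelation(s))   # [8, 0, 0, 0]: peak N*|1+j|^2 at tau = 0, zero elsewhere
print("perfect:", is_perfect(s))
```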
APA, Harvard, Vancouver, ISO, and other styles
38

Klimovsky, Anton [Verfasser]. "Sums of correlated exponentials: two types of Gaussian correlation structures / Anton Klymovskiy (Klimovsky)." 2008. http://d-nb.info/990507866/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Maadooliat, Mehdi. "Dimension Reduction and Covariance Structure for Multivariate Data, Beyond Gaussian Assumption." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9731.

Full text
Abstract:
Storage and analysis of high-dimensional datasets are always challenging. Dimension reduction techniques are commonly used to reduce the complexity of the data and obtain the informative aspects of datasets. Principal Component Analysis (PCA) is one of the commonly used dimension reduction techniques. However, PCA does not work well when there are outliers or the data distribution is skewed. Gene expression index estimation is an important problem in bioinformatics. Some of the popular methods in this area are based on the PCA, and thus may not work well when there is non-Gaussian structure in the data. To address this issue, a likelihood based data transformation method with a computationally efficient algorithm is developed. Also, a new multivariate expression index is studied and the performance of the multivariate expression index is compared with the commonly used univariate expression index. As an extension of the gene expression index estimation problem, a general procedure that integrates data transformation with the PCA is developed. In particular, this general method can handle missing data and data with functional structure. It is well-known that the PCA can be obtained by the eigen decomposition of the sample covariance matrix. Another focus of this dissertation is to study the covariance (or correlation) structure under the non-Gaussian assumption. An important issue in modeling the covariance matrix is the positive definiteness constraint. The modified Cholesky decomposition of the inverse covariance matrix has been considered to address this issue in the literature. An alternative Cholesky decomposition of the covariance matrix is considered and used to construct an estimator of the covariance matrix under multivariate-t assumption. The advantage of this alternative Cholesky decomposition is the decoupling of the correlation and the variances.
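The abstract relies on the standard fact that the principal components are the eigenvectors of the sample covariance matrix; the short sketch below illustrates that fact on synthetic data (the data and covariance are assumptions for illustration, not from the dissertation).

```python
# A minimal sketch of the textbook fact referenced in the abstract: principal
# components can be obtained from the eigen decomposition of the sample
# covariance matrix. Data here are synthetic, not from the dissertation.
import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=np.zeros(3),
                            cov=[[4.0, 1.5, 0.5],
                                 [1.5, 2.0, 0.3],
                                 [0.5, 0.3, 1.0]],
                            size=500)

Xc = X - X.mean(axis=0)                    # center the data
S = np.cov(Xc, rowvar=False)               # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)       # eigen decomposition (ascending order)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order]               # principal component directions
scores = Xc @ loadings                     # projected data (PC scores)
explained = eigvals[order] / eigvals.sum()
print("proportion of variance explained:", np.round(explained, 3))
```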
APA, Harvard, Vancouver, ISO, and other styles
40

OU, CHI-LIANG, and 歐啟良. "On the Impact of Correlation Coefficient for OFDM Systems over Correlated Branches with Gaussian Distribution." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/43571126306916858864.

Full text
Abstract:
Master's
Da-Yeh University
Department of Telecommunication Engineering (In-service Master's Program)
94
This thesis analyses the performance of an OFDM (orthogonal frequency division multiplexing) system operating over a correlated Nakagami-m channel with correlated branches. The branch correlation is assumed to follow a correlated-Gaussian model. When the antennas are closely spaced, the received signals in the OFDM system become correlated. The results show that not only the branch correlation but also the fading parameter of the Nakagami-m distribution dominates the performance of the OFDM system. By computing the bit error rate at the receiver, we study how the performance depends on the fading parameter m of the Nakagami-m channel and on the correlated-Gaussian coefficient. Keywords: OFDM, correlated Nakagami-m channel, correlated-Gaussian coefficient
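The sketch below is a rough Monte Carlo illustration, under assumed parameters and a simplified two-branch maximal-ratio-combining model rather than the thesis's analysis, of how a Gaussian correlation coefficient between branches affects the BPSK bit error rate over Nakagami-m fading.

```python
# A minimal sketch (assumed model and parameters, not the thesis analysis):
# two receive branches whose Nakagami-m power gains are built from correlated
# Gaussian components (integer m); the BPSK bit error rate with maximal-ratio
# combining is estimated by Monte Carlo for several correlation coefficients.
import numpy as np
from scipy.special import erfc

def correlated_nakagami_power(m, rho, n, rng):
    """Power gains |h1|^2, |h2|^2 of two branches with unit mean power; each is a
    sum of 2m squared Gaussians, with correlation rho between corresponding
    Gaussian components of the two branches."""
    cov = np.array([[1.0, rho], [rho, 1.0]]) / (2 * m)
    g = rng.multivariate_normal(np.zeros(2), cov, size=(n, 2 * m))  # shape (n, 2m, 2)
    return (g ** 2).sum(axis=1)                                     # shape (n, 2)

rng = np.random.default_rng(2)
m, snr_db, n = 2, 8.0, 200_000
snr = 10 ** (snr_db / 10)

for rho in (0.0, 0.5, 0.9):
    p = correlated_nakagami_power(m, rho, n, rng)
    gamma_mrc = snr * p.sum(axis=1)                  # post-combining SNR per bit
    ber = (0.5 * erfc(np.sqrt(gamma_mrc))).mean()    # BPSK conditional BER: Q(sqrt(2*gamma))
    print(f"m = {m}, rho = {rho:.1f}: average BER ~ {ber:.2e}")
```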
APA, Harvard, Vancouver, ISO, and other styles
41

Ai-HuaLai and 賴愛華. "Control Charts for Profile Monitoring in the Presence of Within-Profile Correlation Using Inverse Gaussian Process Models." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/bf4jk3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Venter, Martinette. "Risk aggregation and capital allocation using copulas / Martinette Venter." Thesis, 2014. http://hdl.handle.net/10394/15745.

Full text
Abstract:
Banking is a risk and return business; in order to obtain the desired returns, banks are required to take on risks. Following the demise of Lehman Brothers in September 2008, the Basel III Accord proposed considerable increases in capital charges for banks. Whilst this ensures greater economic stability, banks now face an increasing risk of becoming capital inefficient. Furthermore, capital analysts are not only required to estimate capital requirements for individual business lines, but also for the organization as a whole. Copulas are a popular technique to model joint multi-dimensional problems, as they can be applied as a mechanism that models relationships among multivariate distributions. Firstly, a review of the Basel Capital Accord will be provided. Secondly, well known risk measures as proposed under the Basel Accord will be investigated. The penultimate chapter is dedicated to the theory of copulas as well as other measures of dependence. The final chapter presents a practical illustration of how business line losses can be simulated by using the Gaussian, Cauchy, Student t and Clayton copulas in order to determine capital requirements using 95% VaR, 99% VaR, 95% ETL, 99% ETL and StressVaR. The resultant capital estimates will always be a function of the choice of copula, the choice of risk measure and the correlation inputs into the copula calibration algorithm. The choice of copula, the choice of risk measure and the conservativeness of correlation inputs will be determined by the organization’s risk appetite.
MSc (Applied Mathematics), North-West University, Potchefstroom Campus, 2014
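As a toy illustration of the final chapter's approach, the sketch below simulates two business-line losses joined by a Gaussian copula and reads off the 99% VaR and 99% ETL of the aggregate loss; the marginal distributions, correlation and sample size are assumptions, not the dissertation's inputs.

```python
# A minimal sketch (synthetic marginals, not the dissertation's data): simulate
# joint business-line losses with a Gaussian copula and estimate the 99% VaR
# and 99% expected shortfall (ETL) of the aggregate loss.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, rho = 100_000, 0.6

# Gaussian copula: correlated standard normals mapped to uniforms.
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=n)
u = stats.norm.cdf(z)

# Hypothetical marginal loss distributions for two business lines.
loss1 = stats.lognorm(s=0.8, scale=1.0).ppf(u[:, 0])
loss2 = stats.gamma(a=2.0, scale=0.7).ppf(u[:, 1])
total = loss1 + loss2

alpha = 0.99
var = np.quantile(total, alpha)       # 99% Value-at-Risk of the aggregate loss
etl = total[total >= var].mean()      # 99% expected shortfall / ETL
print(f"99% VaR ~ {var:.3f}, 99% ETL ~ {etl:.3f}")
```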
APA, Harvard, Vancouver, ISO, and other styles
43

Diwan, C. Yogesh. ""Phase-Correlation Based Displacemnt-Metrology" - Few Investigations." Thesis, 2005. https://etd.iisc.ac.in/handle/2005/1464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Diwan, C. Yogesh. ""Phase-Correlation Based Displacemnt-Metrology" - Few Investigations." Thesis, 2005. http://etd.iisc.ernet.in/handle/2005/1464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Hosseini, Sara. "Quantum discord, EPR steering and Bell-type correlations for secure CV quantum communications." Phd thesis, 2017. http://hdl.handle.net/1885/112650.

Full text
Abstract:
Quantum states can be correlated in ways beyond what is possible for classical states. These correlations are considered the main resource for quantum computation and communication tasks. In this thesis, I present my studies on different forms of quantum correlations known as "quantum discord", "Einstein-Podolsky-Rosen (EPR) steering" and "Bell-type correlations" in continuous-variable quantum states, and investigate their practical applications for secure quantum communication. While quantum entanglement was previously considered the only form of quantum correlation, in recent years a notion known as quantum discord, which captures extra quantum correlations beyond entanglement, was introduced by Ollivier and Zurek. This sort of non-classicality, which can exist even in separable states, has raised much interest for potential applications, as such states are less fragile than entangled states. It is therefore of special interest to know whether a bipartite quantum state is discordant or not. In this thesis I describe the simple and efficient experimental technique that we introduced and implemented to verify quantum discord in unknown Gaussian states and a certain class of non-Gaussian states. According to our method, the peak separation between the marginal distributions of one subsystem, conditioned on two different outcomes of a homodyne measurement conducted on the other subsystem, is an indication of nonzero quantum discord. Although quantum key distribution has become a mature technology, the possibility of hacking the devices used in quantum communications has motivated scientists to develop schemes in which one or none of the devices used by the communicating parties needs to be trusted. Quantum correlations are the key to developing these schemes. In particular, EPR steering is connected to one-sided-device-independent quantum key distribution, in which the devices of only one party are trusted, and Bell-type correlations to fully device-independent quantum key distribution, where none of the apparatuses of the communicating parties is trusted. Here, I present the results of our theoretical and experimental research towards one-sided-device-independent quantum key distribution in continuous variables. We identify all Gaussian protocols that can in principle be one-sided device independent. These are 6 of the 16 possible Gaussian protocols and, surprisingly, they include the protocol that uses only coherent states. We experimentally implemented both the entanglement-based and the coherent-state protocols and demonstrated their loss tolerance. Our results open the door to practical secure quantum communication, establishing the link between EPR steering and one-sided device independence. Given the maturity of quantum information using continuous variables, it is important to develop a Bell-type inequality in this regime. Beyond its fundamental importance, Bell-type correlation is linked to device-independent quantum key distribution. I developed a computer model based on the proposals of refs. [1, 2] to demonstrate continuous-variable Bell-type correlation.
The results of my computer simulations that are presented in this thesis show the feasibility of these proposals, which makes the real-life implementation of continuous-variable device-independent quantum key distribution possible.
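As a purely classical toy illustration of the discord test described above, the sketch below draws correlated Gaussian quadrature values for two modes, conditions one mode on two different binned homodyne outcomes of the other, and reports the separation of the conditional means; the correlation value is an assumption and no quantum model is simulated.

```python
# A toy numerical illustration (a classical Gaussian model, not the experiment):
# condition the quadrature distribution of one mode on two different binned
# homodyne outcomes of the other mode and check the peak separation of the
# conditional marginals, the signature of correlations used in the discord test.
import numpy as np

rng = np.random.default_rng(8)
rho = 0.5                                   # assumed quadrature correlation
cov = np.array([[1.0, rho], [rho, 1.0]])
xa, xb = rng.multivariate_normal(np.zeros(2), cov, size=100_000).T

# Two different homodyne outcomes on mode B: here, positive vs. negative bins.
mean_given_pos = xa[xb > 0].mean()
mean_given_neg = xa[xb < 0].mean()
separation = mean_given_pos - mean_given_neg
print(f"conditional-mean separation ~ {separation:.3f} (zero only if uncorrelated)")
```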
APA, Harvard, Vancouver, ISO, and other styles
46

Strokorb, Kirstin. "Characterization and construction of max-stable processes." Doctoral thesis, 2013. http://hdl.handle.net/11858/00-1735-0000-0001-BB44-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Sahasranand, K. R. "Hypothesis Testing under Communication Constraints - Theory and an Application in IoT." Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5574.

Full text
Abstract:
Applications in the Internet of Things (IoT) often demand enabling low-compute devices to perform distributed inference and testing by communicating over a low bandwidth link. This gives rise to a plethora of new problems which may broadly be termed resource-constrained statistical inference problems. In this thesis, we consider two such problems. In the first part of the thesis, we study the following distributed hypothesis testing problem. Two parties observing sequences of uniformly distributed bits want to determine if their bits were generated independently or not. To that end, the first party communicates to the second. A simple communication scheme involves taking as few sample bits as determined by the sample complexity of independence testing and sending them to the second party. But is there a scheme that uses fewer bits of communication than the sample complexity, perhaps by observing more sample bits? We show that the answer to this question is in the affirmative. More generally, for any given joint distribution, we present a distributed independence test that uses linear correlation between functions of the observed random variables. Furthermore, we provide lower bounds for the general setting that use hypercontractivity and reverse hypercontractivity to obtain a measure change bound between the joint and the independent distributions. The resulting bounds are tight for both a binary symmetric source and a Gaussian symmetric source. The proposed scheme is then extended to handle high dimensional correlation testing with interactive communication, wherein one party observes a Gaussian vector X and the other party observes a jointly Gaussian scalar Y, and we seek to test whether the norm of the vector of correlation between X and Y exceeds a given value or is equal to 0. We provide corresponding lower bounds to establish the optimality of the proposed scheme. Furthermore, we derive a lower bound which implies that distributed correlation testing requires less communication than distributed estimation of the correlation vector. In the second part of the thesis, we study streaming compression of electrical signals sampled at a very high frequency by Intelligent Electronic Devices (IEDs), enabled to capture anomalous signal behavior. Under normal operation, this oversampling is redundant and leads to excessive data being stored or transmitted. This gives rise to a new compression problem where the collected samples should be further subsampled and quantized based on the presence of an anomaly in the underlying signal. We propose an Anomaly-aware Compressive Sampler (ACS), which tests the signal for the presence of an anomaly in a block of samples, and subsamples in a hierarchical manner to retain the desired sampling rate. ACS has been designed keeping hardware constraints in mind, using integer operations, an appropriate bit-packing, a simple iterated delta filter, and a streaming data pipeline. ACS competes with the state-of-the-art algorithm for the better-behaved transmission system data from DOE/EPRI, and outperforms it significantly on real-time distribution system data recorded in our laboratory. ACS is lightweight and was implemented on an ARM processor. Further, we present a mathematical analysis of the anomaly detection module of ACS. Finally, the performance of the proposed scheme in compressing a nonstationary signal with frequency band uncertainty is studied, with a focus on the dependence of the compression ratio and reconstruction error on the oversampling rate.
We modify a zero-crossings-based compression scheme proposed in literature for bandlimited signals to incorporate resolution of frequency band uncertainty using our anomaly detection procedure. While this new scheme is theoretically appealing, we point out some of its limitations when it comes to implementation.
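The following toy sketch illustrates the basic ingredient of correlation-based independence testing for bit sequences: threshold the empirical correlation of the ±1-mapped bits at a few standard deviations of its null distribution. It is not the communication-efficient distributed scheme developed in the thesis, and the threshold factor and sample sizes are arbitrary.

```python
# A toy illustration (not the communication-efficient scheme from the thesis):
# test whether two parties' bit sequences are independent by thresholding
# the empirical correlation of their +/-1 mapped bits.
import numpy as np

def correlation_independence_test(x_bits, y_bits, threshold_factor=3.0):
    """Reject independence if |empirical correlation| exceeds threshold_factor/sqrt(n)."""
    x = 2.0 * x_bits - 1.0
    y = 2.0 * y_bits - 1.0
    n = len(x)
    corr = np.mean(x * y)                  # empirical correlation of +/-1 bits
    return abs(corr) > threshold_factor / np.sqrt(n), corr

rng = np.random.default_rng(7)
n = 100_000
x = rng.integers(0, 2, size=n)
y_indep = rng.integers(0, 2, size=n)       # independent bits
flip = rng.random(n) < 0.45                # correlated bits: y equals x with prob. 0.55
y_corr = np.where(flip, 1 - x, x)

print("independent pair rejected?", correlation_independence_test(x, y_indep)[0])
print("correlated pair rejected? ", correlation_independence_test(x, y_corr)[0])
```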
APA, Harvard, Vancouver, ISO, and other styles
48

Howard, John Edward. "The detection reliability of a single-bit sampling cross- correlator for detecting random Gaussian reflections." Thesis, 2015. http://hdl.handle.net/10539/18198.

Full text
Abstract:
A thesis submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, for the Degree of Doctor of Philosophy, Johannesburg, 1974
In this thesis the detection reliability of a single-bit, digital, sampling cross-correlator used for detecting either single-bit or analog bandlimited Gaussian signals is investigated. This is done by deriving the exact output probability mass function of the cross-correlator, which directly yields the detection and false-alarm probabilities. The cross-correlator output mass function is derived for the following cases: (a) a single-bit bandlimited Gaussian signal cross-correlated with an attenuated reflection corrupted by wideband Gaussian noise; (b) since in many applications the interfering signal can be periodic in nature, the corresponding output mass function is also considered; and (c) a single-bit bandlimited Gaussian signal cross-correlated with an attenuated reflection corrupted by a random-phase sinusoid. In all cases except (d), the cross-correlation function is derived first, and then the probability mass functions are derived for both burst and continuous transmitted-signal operation. In (d) the cross-correlation function cannot be derived in closed form, and a series approximation is given. However, the zero-delay (i.e. peak) cross-correlation function is derived exactly, and this yields information on the detection probabilities to be expected; the cross-correlator output probability mass functions are discussed qualitatively in this case. It is found that, in general, the detection reliability obtained using single-bit bandlimited Gaussian signals is higher than that achievable with analog signals, and that a random-phase sine wave has a more adverse effect on the cross-correlator's detection performance than wideband Gaussian noise has. The theoretical derivations of (a), (b) and (c) are verified by extremely close agreement with experimental results taken on a specially built single-bit sampling cross-correlator. The cross-correlator's performance under multiple-reflection conditions is considered, and the cross-correlation function of a single-bit or an analog bandlimited Gaussian signal with two attenuated reflections corrupted by wideband Gaussian noise is derived. An extension of the theory to more than two reflections is discussed in both cases. The derivation of the cross-correlator output mass functions is considered for both burst and continuous signal operation. It is shown that when there are two overlapping single-bit reflections in a low extraneous-noise environment, there is a high probability of missing the smaller of the two reflections completely, even though it may be only slightly smaller than the larger one. This defect does not occur with analog Gaussian signals and, although the peaks in their case are not so sharp or well defined, under these conditions analog signals offer a distinct advantage over single-bit signals. The practical application of the detection scheme to acoustics is briefly discussed, and it is found that the Gaussian signal centre frequency and the cross-correlator sampling frequency must be matched. A sampling frequency of between one and ten times the signal centre frequency yields satisfactory results. There are several constraints on the signal bandwidth, and an octave bandwidth is found to offer a good compromise.
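A rough numerical sketch of the detection principle is given below: a bandlimited Gaussian reference signal is sign-quantized, cross-correlated with a sign-quantized, attenuated and delayed copy in wideband Gaussian noise, and the correlation peak locates the reflection. All parameters (bandwidth, delay, attenuation, noise level) are assumptions for illustration, not values from the thesis.

```python
# A minimal sketch (assumed parameters, not the thesis hardware): a single-bit
# (sign-quantized) sampling cross-correlator detecting an attenuated, delayed
# reflection of a bandlimited Gaussian signal in wideband Gaussian noise.
import numpy as np

rng = np.random.default_rng(4)
n, delay, attenuation, noise_std = 8192, 350, 0.5, 0.5

# Bandlimited Gaussian reference: white Gaussian noise through a crude
# moving-average low-pass filter.
white = rng.normal(size=n + delay)
signal = np.convolve(white, np.ones(8) / 8.0, mode="same")

reference = signal[delay:]                                          # reference samples
echo = attenuation * signal[:n] + noise_std * rng.normal(size=n)    # delayed reflection + noise

ref_bits = np.sign(reference)    # single-bit quantization of both channels
echo_bits = np.sign(echo)

lags = np.arange(600)
corr = np.array([np.mean(ref_bits[: n - lag] * echo_bits[lag:]) for lag in lags])
print("estimated delay:", lags[np.argmax(corr)], " true delay:", delay)
```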
APA, Harvard, Vancouver, ISO, and other styles
49

(11201085), Ronald F. Agyei. "INVESTIGATING DAMAGE IN SHORT FIBER REINFORCED COMPOSITES." Thesis, 2021.

Find full text
Abstract:
In contrast to traditional steel and aluminum, short fiber reinforced polymer composites (SFRCs) provide promising alternatives in material selection for automotive and aerospace applications due to their potential to decrease weight while maintaining excellent mechanical properties. However, uncertainties about the influence of complex microstructures and defects on mechanical response have prevented widespread adoption of material models for SFRCs. Building confidence in models' predictions requires deepened insight into the heterogeneous damage mechanisms. Therefore, this research takes a micro-mechanics standpoint in assessing the damage behavior of SFRCs, particularly micro-void nucleation at the fiber tips, by passing information on microstructural attributes within neighborhoods of incipient damage and non-damage sites into a framework that establishes correlations between the microstructural information and damage. To achieve this, in-situ x-ray tomography of the gauge sections of two cylindrical injection-molded dog-bone specimens, composed of E-glass fibers in a polypropylene matrix, was conducted while the specimens were monotonically loaded until failure. This was followed by (i) the development of microstructural characterization frameworks for segmenting fiber and porosity features in 3D images, (ii) the development of a digital volume correlation informed damage detection framework that confines search spaces of potential damage sites, and (iii) the use of a Gaussian process classification framework to explore the dependency of micro-void nucleation on neighboring microstructural defects by ranking each of their contributions. Specifically, the analysis considered microstructural metrics related to the closest fiber, the closest pore, and the local stiffness, and the results demonstrated that less stiff resin-rich areas were more relevant for micro-void nucleation than clustered fiber tips, T-intersections of fibers, or varying porosity volumes. This analysis provides a ranking of the microstructural metrics that induce micro-void nucleation, which can help modelers validate their predictions of the proclivity of damage initiation in the presence of wide distributions of microstructural features and manufacturing defects.
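As a loose illustration of step (iii), the sketch below fits a Gaussian process classifier to synthetic damage/no-damage labels generated from three hypothetical microstructural metrics and ranks the metrics by permutation importance; the feature names, data-generating rule and kernel are assumptions, not the dissertation's framework.

```python
# A minimal sketch (synthetic features, not the dissertation's tomography data):
# Gaussian process classification of damage vs. no-damage neighborhoods from a
# few hypothetical microstructural metrics, with a simple permutation-importance
# ranking of the metrics.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 400
# Hypothetical metrics: local stiffness proxy, distance to nearest fiber tip,
# nearest-pore volume (standardized, purely synthetic).
X = rng.normal(size=(n, 3))
# Synthetic ground truth: damage probability driven mostly by low local stiffness.
p = 1.0 / (1.0 + np.exp(2.0 * X[:, 0] - 0.3 * X[:, 1]))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, score in zip(["stiffness", "fiber_tip_dist", "pore_volume"],
                       imp.importances_mean):
    print(f"{name:15s} importance ~ {score:.3f}")
print("test accuracy:", clf.score(X_te, y_te))
```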
APA, Harvard, Vancouver, ISO, and other styles
50

Aibassov, Gizatulla. "Optimization of a petroleum producing assets portfolio: development of an advanced computer model." 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2567.

Full text
Abstract:
Portfolios of contemporary integrated petroleum companies consist of a few dozen Exploration and Production (E&P) projects that are usually spread all over the world. Therefore, it is important not only to manage individual projects by themselves, but also to take into account the interactions between projects in order to manage whole portfolios. This study is a step-by-step presentation of a method, based on Markowitz's Portfolio Theory, for optimizing portfolios of risky petroleum E&P projects. The method uses the covariance matrix between projects' expected returns in order to optimize the portfolio. The developed computer model consists of four major modules. The first module generates petroleum price forecasts; in our implementation we used a price forecasting method based on Sequential Gaussian Simulation. The second module, Monte Carlo, simulates the distribution of reserves and a set of expected production profiles. The third module calculates expected after-tax net cash flows and estimates performance indicators for each realization, thus yielding a distribution of return for each project. The fourth module estimates the covariance between the return distributions of individual projects and compiles them into portfolios. Using the results of the fourth module, analysts can make their portfolio selection decisions. Thus, an advanced computer model for optimization of a portfolio of petroleum assets has been developed. The model is implemented in a MATLAB® computational environment and allows optimization of the portfolio using three different return measures (NPV, GRR, PI). The model has been successfully applied to a set of synthesized projects, yielding reasonable solutions in all three return planes. Analysis of the obtained solutions has shown that the computer model is robust and flexible in terms of input data and output results. Its modular architecture allows further inclusion of complementary "blocks" that may solve optimization problems using different measures of risk and return than those considered, as well as different input data formats.
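The portfolio step can be pictured with a small mean-variance sketch like the one below, which estimates the covariance between simulated project NPVs and solves for minimum-variance weights at a few target returns; it is written in Python rather than MATLAB, and the project statistics and constraints are assumptions rather than the thesis's data.

```python
# A minimal sketch (synthetic project NPVs, not the thesis model): estimate the
# covariance between simulated project returns and trace a few points of the
# mean-variance efficient frontier in the spirit of Markowitz portfolio selection.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n_sims, n_projects = 5000, 4
# Hypothetical simulated NPVs per project (e.g. from Monte Carlo of prices,
# reserves and production profiles).
true_mean = np.array([10.0, 14.0, 8.0, 12.0])
true_cov = np.array([[ 9.0,  3.0,  1.0,  2.0],
                     [ 3.0, 16.0,  2.0,  4.0],
                     [ 1.0,  2.0,  4.0,  1.0],
                     [ 2.0,  4.0,  1.0, 12.0]])
npv = rng.multivariate_normal(true_mean, true_cov, size=n_sims)

mu = npv.mean(axis=0)            # expected return per project
cov = np.cov(npv, rowvar=False)  # covariance between projects

def min_variance_weights(target_return):
    """Long-only weights minimizing portfolio variance for a given target return."""
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: w @ mu - target_return})
    res = minimize(lambda w: w @ cov @ w,
                   x0=np.full(n_projects, 1.0 / n_projects),
                   bounds=[(0.0, 1.0)] * n_projects, constraints=cons)
    return res.x

for target in (10.0, 11.0, 12.0):
    w = min_variance_weights(target)
    print(f"target {target:.1f}: weights {np.round(w, 2)}, "
          f"std {np.sqrt(w @ cov @ w):.2f}")
```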
APA, Harvard, Vancouver, ISO, and other styles