Doctoral dissertations on the topic "Dimensionality"

Follow this link to see other types of publications on this topic: Dimensionality.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 50 doctoral dissertations on the topic "Dimensionality".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication in ".pdf" format and read the abstract of the work online, provided the corresponding parameters are available in the metadata.

Browse doctoral dissertations from a variety of disciplines and build an appropriate bibliography.

1

Ariu, Kaito. "Online Dimensionality Reduction". Licentiate thesis, KTH, Reglerteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-290791.

Abstract:
In this thesis, we investigate online dimensionality reduction methods, where the algorithms learn by sequentially acquiring data. We focus on two specific algorithm design problems: (i) recommender systems and (ii) heterogeneous clustering from binary user feedback. (i) For recommender systems, we consider a system consisting of m users and n items. In each round, a user, selected uniformly at random, arrives to the system and requests a recommendation. The algorithm observes the user id and recommends an item from the item set. A notable restriction here is that the same item cannot be recommended to the same user more than once, a constraint referred to as a no-repetition constraint. We study this problem as a variant of the multi-armed bandit problem and analyze regret under various structures pertaining to items and users. We derive fundamental limits of regret and devise algorithms that achieve those limits order-wise. The analysis explicitly highlights the importance of each component of regret: for example, we can distinguish the regret due to the no-repetition constraint, the regret incurred to learn the statistics of a user's preference for an item, and the regret incurred to learn the low-dimensional space of the users and items. (ii) In the clustering with binary feedback problem, the objective is to classify items solely based on limited user feedback. More precisely, users are asked simple questions with binary answers. A notable difficulty stems from the heterogeneity in the difficulty of classifying the various items (some items require more feedback to be classified than others). For this problem, we derive fundamental limits of the cluster recovery rates for both offline and online algorithms. For the offline setting, we devise a simple algorithm that achieves the limit order-wise. For the online setting, we propose an algorithm inspired by the lower bound. For both problems, we evaluate the proposed algorithms by inspecting their theoretical guarantees and using numerical experiments performed on synthetic and non-synthetic datasets.
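
To make the recommendation setting concrete, here is a toy UCB-style loop with the no-repetition constraint. It is a minimal sketch, not the thesis's algorithm: it assumes homogeneous users (one success probability per item, shared by everyone), whereas the thesis exploits low-dimensional user-item structure; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m_users, n_items, rounds = 20, 50, 5000
# Hypothetical per-item success probabilities, shared across users.
p_item = rng.uniform(0.1, 0.9, size=n_items)

counts = np.zeros(n_items)                     # recommendations per item
means = np.zeros(n_items)                      # empirical success rates
used = np.zeros((m_users, n_items), dtype=bool)

for t in range(1, rounds + 1):
    u = rng.integers(m_users)                  # user arrives uniformly at random
    ucb = np.where(counts > 0,
                   means + np.sqrt(2 * np.log(t) / np.maximum(counts, 1)),
                   np.inf)                     # try every item at least once
    ucb[used[u]] = -np.inf                     # no-repetition constraint
    i = int(np.argmax(ucb))
    if np.isneginf(ucb[i]):
        continue                               # this user has seen every item
    reward = float(rng.random() < p_item[i])   # binary feedback
    counts[i] += 1
    means[i] += (reward - means[i]) / counts[i]
    used[u, i] = True

print("estimated best item:", means.argmax(), "true best:", p_item.argmax())
```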

2

Legramanti, Sirio. "Bayesian dimensionality reduction". Doctoral thesis, Università Bocconi, 2021. http://hdl.handle.net/11565/4035711.

Abstract:
We are currently witnessing an explosion in the amount of available data. Such growth involves not only the number of data points but also their dimensionality. This poses new challenges to statistical modeling and computations, thus making dimensionality reduction more central than ever. In the present thesis, we provide methodological, computational and theoretical advancements in Bayesian dimensionality reduction via novel structured priors. Namely, we develop a new increasing shrinkage prior and illustrate how it can be employed to discard redundant dimensions in Gaussian factor models. In order to make it usable for larger datasets, we also investigate variational methods for posterior inference under this proposed prior. Beyond traditional models and parameter spaces, we also provide a different take on dimensionality reduction, focusing on community detection in networks. For this purpose, we define a general class of Bayesian nonparametric priors that encompasses existing stochastic block models as special cases and includes promising unexplored options. Our Bayesian approach allows for a natural incorporation of node attributes and facilitates uncertainty quantification as well as model selection.
3

Kelly, Wallace Eugene. "Dimensionality in fuzzy systems". [S.l.]: Universität Stuttgart, Fakultätsübergreifend / Sonstige Einrichtung, 1997. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB6783685.

4

Bolelli, Maria Virginia. "Diffusion Maps for Dimensionality Reduction". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18246/.

Abstract:
In this thesis we present diffusion maps, a framework based on diffusion processes for finding meaningful geometric descriptions of data sets. A diffusion process can be described via an iterative application of the heat kernel, which has two main characteristics: it satisfies a Markov semigroup property and its level sets encode all geometric features of the space. This process, well known on regular manifolds, has been extended to general data sets by Coifman and Lafon. They define a diffusion kernel starting from the geometric properties of the data and their density properties. This kernel is a compact operator, and the projection on its eigenvectors at different instants in time provides a family of embeddings of a data set into a suitable Euclidean space. The projection on the first eigenvectors naturally leads to a dimensionality reduction algorithm. A numerical implementation is provided on different data sets.
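
To make the construction concrete, the following is a minimal NumPy sketch of the Coifman-Lafon diffusion map, assuming a Gaussian kernel with a user-chosen bandwidth eps; function and variable names are illustrative, not taken from the thesis.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2, t=1):
    # Gaussian affinity kernel built from pairwise squared distances.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / eps)
    # Density normalization (alpha = 1) removes sampling-density effects.
    q = K.sum(1)
    K = K / np.outer(q, q)
    # Symmetric conjugate of the Markov matrix for a stable eigensolve.
    d = K.sum(1)
    S = K / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    # Right eigenvectors of the Markov matrix; drop the constant first one.
    psi = vecs / np.sqrt(d)[:, None]
    return (vals[1:n_components + 1] ** t) * psi[:, 1:n_components + 1]

# Example: 300 points near a circle embedded in 3-D.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(theta), np.sin(theta), 0.05 * rng.normal(size=300)]
print(diffusion_map(X, eps=0.5).shape)  # (300, 2)
```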
5

Khosla, Nitin. "Dimensionality Reduction Using Factor Analysis". Griffith University. School of Engineering, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20061010.151217.

Abstract:
In many pattern recognition applications, a large number of features are extracted in order to ensure an accurate classification of unknown classes. One way to solve the problems of high dimensions is to first reduce the dimensionality of the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. In this situation, the dimensionality reduction process becomes the pre-processing stage of the pattern recognition system. In addition, probability density estimation with fewer variables is a simpler approach to dimensionality reduction. Dimensionality reduction is useful in speech recognition, data compression, visualization and exploratory data analysis. Some of the techniques which can be used for dimensionality reduction are Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor Analysis can be considered as an extension of Principal Component Analysis. The EM (expectation maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation, conditioned upon the observations. The maximization step then provides a new estimate of the parameters. This research work compares the techniques Factor Analysis (Expectation-Maximization algorithm based), Principal Component Analysis and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM algorithm based) and Local Principal Component Analysis using Vector Quantization.
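
As a small illustration of the comparison described above, the scikit-learn sketch below fits maximum-likelihood factor analysis and PCA with the same number of components, using the digits dataset as a stand-in; it does not reproduce the thesis's EM implementation or the local (vector-quantized) variants.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, FactorAnalysis

X = load_digits().data  # 1797 samples, 64 features

# Reduce to 10 latent dimensions with both techniques.
fa = FactorAnalysis(n_components=10, random_state=0).fit(X)
pca = PCA(n_components=10).fit(X)

print(fa.transform(X).shape, pca.transform(X).shape)  # (1797, 10) twice
print("PCA variance retained:", pca.explained_variance_ratio_.sum())
print("FA mean log-likelihood:", fa.score(X))
```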
6

Barlow, Thomas W. "Reduced dimensionality in molecular representation". Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320639.

7

Ahrens, Johan. "Non-contextual inequalities and dimensionality". Doctoral thesis, Stockholms universitet, Fysikum, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-116832.

Abstract:
This PhD thesis is based on the five experiments I have performed during my time as a PhD student. Three experiments are implementations of non-contextual inequalities and two are implementations of witness functions for classical and quantum dimensions of sets of states. A dimension witness is an operator function that produces a value when applied to a set of states. This value has different upper bounds depending on the dimension of the set of states and also depending on whether the states are classical or quantum. Therefore a dimension witness can only give a lower bound on the dimension of the set of states. The first dimension witness is based on the CHSH inequality and has the ability to discriminate between classical and quantum sets of states of two and three dimensions; it can also indicate if a set of states must be of dimension four or higher. The second dimension witness is based on a set-theoretical representation of the possible combinations of states and measurements, and it grows with the dimension of the set of states you want to be able to identify; on the other hand, there is a formula for expanding it to arbitrary dimension. Non-contextual hidden-variable models are a family of hidden-variable models which include local hidden-variable models, so in a sense non-contextual inequalities are a generalisation of Bell inequalities. The experiments presented in this thesis all use single-particle quantum systems. The first experiment is a violation of the KCBS inequality, the simplest correlation inequality which is violated by quantum mechanics. The second experiment is a violation of the Wright inequality, which is the simplest inequality violated by quantum mechanics; it contains only projectors and no correlations. The final experiment of the thesis is an implementation of a Hardy-like equality for non-contextuality: the operators in the KCBS inequality have been rotated so that one term in the sum will be zero for all non-contextual hidden-variable models, and we get a contradiction since quantum mechanics gives a non-zero value for all terms.
8

Hyde, Susan Margaret. "Understanding dimensionality in health care". Thesis, Manchester Metropolitan University, 2014. http://e-space.mmu.ac.uk/326230/.

Abstract:
In recent years, the quality of non-clinical elements of health care has been challenged in the UK. While dimensions such as the environment, communications, reliability, access, etc., all contribute to making patients feel more at ease during a time when they are at their most vulnerable, they often fall short of what they should be. This paper supports the shift towards greater emphasis on understanding the functional elements of health services in an effort to improve patient experience and outcomes. While there is an abundance of literature discussing the evaluation of service quality, much of this focuses on the SERVQUAL model and, although there is increasing debate about its relevance across sectors, no alternative has been offered. This paper argues that the model lacks substance as a tool to evaluate quality in the complex environment of health care. The study embraced multiple methods to acquire a greater understanding of service quality constructs within the health care sector. It was carried out in three phases. The first comprised critical incident interviews with service users, which highlighted both successes and failings in their care. This was followed by staff interviews and focus groups representing a cross section of the public, providing an insight into how different groups perceive quality. The data was used in the design of a detailed questionnaire which attracted in excess of 1,000 responses. Factor analysis was then used to develop a framework of key elements relevant both to hospital settings and to those services provided in the community such as general practice. The findings provide a four-factor model comprising: trust, access, a caring approach and professionalism, three of which consist primarily of human interactions. These findings suggest that although the original SERVQUAL ten-item model does have some relevance, with the adapted five-item model being far too simplistic, neither fully addresses the needs of a sector as unique and high contact as health care. The results point the way for further research to develop a detailed model to evaluate service quality in health care settings.
9

Vamulapalli, Harika Rao. "On Dimensionality Reduction of Data". ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1211.

Abstract:
The random projection method is one of the important tools for dimensionality reduction of data and can be made efficient with strong error guarantees. In this thesis, we focus on linear transforms of high dimensional data to the low dimensional space satisfying the Johnson-Lindenstrauss lemma. In addition, we also prove some theoretical results relating to the projections that are of interest when applying them in practical applications. We show how the technique can be applied to synthetic data with a probabilistic guarantee on the pairwise distances. The connection between dimensionality reduction and compressed sensing is also discussed.
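
A minimal sketch of the idea, assuming a dense Gaussian projection matrix (one standard way to realize a Johnson-Lindenstrauss embedding); the dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 10_000, 1_000  # samples, original dim, reduced dim
X = rng.normal(size=(n, d))

# Gaussian random projection: entries N(0, 1/k), so squared norms
# (and hence pairwise distances) are preserved in expectation.
R = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
Y = X @ R

# Compare one pairwise distance before and after projection.
i, j = 3, 17
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Y[i] - Y[j])
print(f"distortion ratio: {proj / orig:.3f}")  # close to 1
```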
10

Widemann, David P. "Dimensionality reduction for hyperspectral data". College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/8448.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2008.
Thesis research directed by: Dept. of Mathematics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
11

Khosla, Nitin. "Dimensionality Reduction Using Factor Analysis". Thesis, Griffith University, 2006. http://hdl.handle.net/10072/366058.

Abstract:
In many pattern recognition applications, a large number of features are extracted in order to ensure an accurate classification of unknown classes. One way to solve the problems of high dimensions is to first reduce the dimensionality of the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. In this situation, the dimensionality reduction process becomes the pre-processing stage of the pattern recognition system. In addition, probability density estimation with fewer variables is a simpler approach to dimensionality reduction. Dimensionality reduction is useful in speech recognition, data compression, visualization and exploratory data analysis. Some of the techniques which can be used for dimensionality reduction are Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor Analysis can be considered as an extension of Principal Component Analysis. The EM (expectation maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation, conditioned upon the observations. The maximization step then provides a new estimate of the parameters. This research work compares the techniques Factor Analysis (Expectation-Maximization algorithm based), Principal Component Analysis and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM algorithm based) and Local Principal Component Analysis using Vector Quantization.
Thesis (Masters)
Master of Philosophy (MPhil)
School of Engineering
12

Sætrom, Jon. "Reduction of Dimensionality in Spatiotemporal Models". Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11247.

13

Ghodsi, Boushehri Ali. "Nonlinear Dimensionality Reduction with Side Information". Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/1020.

Abstract:
In this thesis, I look at three problems with important applications in data processing. Incorporating side information, provided by the user or derived from data, is a main theme of each of these problems.

This thesis makes a number of contributions. The first is a technique for combining different embedding objectives, which is then exploited to incorporate side information expressed in terms of transformation invariants known to hold in the data. It also introduces two different ways of incorporating transformation invariants in order to make new similarity measures. Two algorithms are proposed which learn metrics based on different types of side information. These learned metrics can then be used in subsequent embedding methods. Finally, it introduces a manifold learning algorithm that is useful when applied to sequential decision problems. In this case we are given action labels in addition to data points. Actions in the manifold learned by this algorithm have meaningful representations in that they are represented as simple transformations.
14

Collins, David Wesley. "Difficulty and dimensionality in mental rotation". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ28553.pdf.

15

Merola, Giovanni Maria. "Dimensionality reduction methods in multivariate prediction". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0022/NQ32847.pdf.

16

Musco, Cameron N. (Cameron Nicholas). "Dimensionality reduction for k-means clustering". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101473.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 123-131).
In this thesis we study dimensionality reduction techniques for approximate k-means clustering. Given a large dataset, we consider how to quickly compress to a smaller dataset (a sketch), such that solving the k-means clustering problem on the sketch will give an approximately optimal solution on the original dataset. First, we provide an exposition of technical results of [CEM+15], which show that provably accurate dimensionality reduction is possible using common techniques such as principal component analysis, random projection, and random sampling. We next present empirical evaluations of dimensionality reduction techniques to supplement our theoretical results. We show that our dimensionality reduction algorithms, along with heuristics based on these algorithms, indeed perform well in practice. Finally, we discuss possible extensions of our work to neurally plausible algorithms for clustering and dimensionality reduction. This thesis is based on joint work with Michael Cohen, Samuel Elder, Nancy Lynch, Christopher Musco, and Madalina Persu.
by Cameron N. Musco.
S.M.
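
As an illustration of the sketch-then-cluster idea evaluated in the thesis, here is a minimal scikit-learn example, assuming PCA as the dimensionality reduction and synthetic blob data; it demonstrates the workflow only and does not verify the approximation guarantees of [CEM+15].

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA

X, _ = make_blobs(n_samples=2000, n_features=50, centers=5, random_state=0)

# Sketch: project onto the top principal components, cluster the sketch,
# then evaluate the induced clustering on the original data.
Z = PCA(n_components=5).fit_transform(X)
sketch_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Z)

def kmeans_cost(data, labels):
    # Sum of squared distances to each cluster's own centroid.
    return sum(((data[labels == c] - data[labels == c].mean(0)) ** 2).sum()
               for c in np.unique(labels))

full = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
print(kmeans_cost(X, sketch_labels) / kmeans_cost(X, full.labels_))  # ~1
```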
17

Kalantan, Zakiah Ibrahim. "Methods for estimation of intrinsic dimensionality". Thesis, Durham University, 2014. http://etheses.dur.ac.uk/9500/.

Abstract:
Dimension reduction is an important tool used to describe the structure of complex data (explicitly or implicitly) through a small but sufficient number of variables, and thereby make data analysis more efficient. It is also useful for visualization purposes. Dimension reduction helps statisticians to overcome the 'curse of dimensionality'. However, most dimension reduction techniques require the intrinsic dimension of the low-dimensional subspace to be fixed in advance. The availability of reliable intrinsic dimension (ID) estimation techniques is of major importance. The main goal of this thesis is to develop algorithms for determining the intrinsic dimensions of recorded data sets in a nonlinear context. Whilst this is a well-researched topic for linear planes, based mainly on principal components analysis, relatively little attention has been paid to ways of estimating this number for non-linear variable interrelationships. The proposed algorithms here are based on existing concepts that can be categorized into local methods, relying on randomly selected subsets of a recorded variable set, and global methods, utilizing the entire data set. This thesis provides an overview of ID estimation techniques, with special consideration given to recent developments in non-linear techniques, such as manifold charting and fractal-based methods. Despite their nominal existence, the practical implementation of these techniques is far from straightforward. The intrinsic dimension is estimated via Brand's algorithm by examining the growth point process, which counts the number of points in hyper-spheres. The estimation needs to determine the starting point for each hyper-sphere. In this thesis we provide settings for selecting starting points which work well for most data sets. Additionally we propose approaches for estimating dimensionality via Brand's algorithm, the Dip method and the Regression method. Other approaches are proposed for estimating the intrinsic dimension by fractal dimension estimation methods, which exploit the intrinsic geometry of a data set. The most popular concept from this family of methods is the correlation dimension, which requires the estimation of the correlation integral for a ball of radius tending to 0. In this thesis we propose new approaches to approximate the correlation integral in this limit. The new approaches are the Intercept method, the Slope method and the Polynomial method. In addition we propose a new approach, a localized global method, which could be defined as a local version of global ID methods. The objective of the localized global approach is to improve the algorithm based on a local ID method, which could significantly reduce the negative bias. Experimental results on real-world and simulated data are used to demonstrate the algorithms and compare them to other methodology. A simulation study which verifies the effectiveness of the proposed methods is also provided. Finally, these algorithms are contrasted using a recorded data set from an industrial melter process.
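
For the correlation dimension mentioned above, a minimal Grassberger-Procaccia-style sketch is shown below: it estimates the slope of log C(r) against log r over a range of radii. The thesis's Intercept, Slope, and Polynomial refinements of the small-radius limit are not reproduced, and the test data are illustrative.

```python
import numpy as np

def correlation_dimension(X, radii):
    # Correlation integral C(r): fraction of point pairs within distance r.
    n = len(X)
    D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
    pairs = D[np.triu_indices(n, k=1)]
    C = np.array([(pairs < r).mean() for r in radii])
    # The slope of log C(r) versus log r estimates the intrinsic dimension.
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

rng = np.random.default_rng(1)
# A circle (intrinsically 1-D) with small noise, embedded in 3-D.
t = rng.uniform(0, 2 * np.pi, 1000)
X = np.c_[np.cos(t), np.sin(t), 0.01 * rng.normal(size=1000)]
print(correlation_dimension(X, np.logspace(-1.5, -0.5, 10)))  # close to 1
```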
18

Law, Hiu Chung. "Clustering, dimensionality reduction, and side information". Diss., Connect to online resource - MSU authorized users, 2006.

Abstract:
Thesis (Ph. D.)--Michigan State University. Dept. of Computer Science & Engineering, 2006.
Title from PDF t.p. (viewed on June 19, 2009). Includes bibliographical references (p. 296-317). Also issued in print.
19

Vasiloglou, Nikolaos. "Isometry and convexity in dimensionality reduction". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28120.

Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: David Anderson; Committee Co-Chair: Alexander Gray; Committee Member: Anthony Yezzi; Committee Member: Hongyuan Zha; Committee Member: Justin Romberg; Committee Member: Ronald Schafer.
20

Paganelli, Mattia. "Finitude, possibility, dimensionality : aesthetics after complexity". Thesis, Birmingham City University, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.720003.

Abstract:
This thesis proposes a reconceptualisation of aesthetics moving from the irreversibility of emergence as described by the theory of complexity. Existing aesthetic platforms reflect a binary ontology that perpetuates the oppositions of concept and object or discourse and practice, thus projecting aesthetics as a contingent surface. The metaphysical split of material and immaterial is therefore maintained as the ultimate structure of sense and the sensual is still represented as the other of reason. This produces a dichotomy where art is either identified with a medium or a technology or is approached as a hermeneutical exercise that anesthetises its poetic modes of operation, thereby drifting towards visual communication. The thesis turns to complexity theory for an alternative ontological approach that can overcome the need for such metaphysical a priori structures. Indeed, complexity offers forms of coherence that install sense locally and heterogeneously, without the possibility of universalisation. This recasts aesthetics as a cohesive surface or genetic logic, rather than mere phenomenological appearance as the image of an object or the body of a concept. Thus, the thesis exhorts not to seek or think the ultimate, but to dwell in the finite pattern of possibility laid out by the radical irreducibility of the processes of emergence. In this light, the relation of concept and object can be re-thought as a continuum; a rhizomatic pattern of organisation that, however, no longer relies on the transcendental move adopted by Deleuze, or on Heidegger’s infamous leap out of metaphysics. In fact, the thesis shows that metaphysics is not the purveyor of dimensions, but is itself a dimension of thought. Hence, the move towards Prigogine, Stengers, Barad, and Golding in order to re-articulate the structure that supports sense as the local interference of continua, or ontological segments, rather than external coordinates. This radical materialism or dimensionality names a regime beyond transcendence and immanence where aesthetics is inseparable from ontology and offers a wholly different way to think and practice art - one best understood as diffraction.
21

Gagliardi, Alessandro <1990>. "Dimensionality reduction methods for paleoclimate reconstructions". Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10434.

Abstract:
Paleoclimatology seeks to understand past changes in climate occurred before the instrumental period through paleoclimate archives. These archives consist of natural materials that keep trace of climate changes with different time scales and resolutions. Tree-ring archives are able to provide a timescale of thousands of years with annual resolution. This thesis discusses reconstruction of the past temperature in the period ranging from year 1400 until 1849 on the basis of the information available in a tree-ring dataset consisting of 70 trees located in the United States of America. The temperature data used for calibration and validation come from the HadCRUT4 dataset. The thesis considers past temperature reconstructions based on multiple linear regression models calibrated with instrumental temperature available for the period 1902-1980. Since the number of tree-ring proxies is large compared with the number of observations, standard multiple linear regression is unsuitable, thus making it necessary to apply dimensionality reduction methods such as principal component regression and partial least squares regression. The methodology developed in the thesis includes corrections to handle residual serial dependence. The thesis results indicate that (i) key events of the climate forcings are well identified in the reconstructions based on both partial least squares and principal component regression, but (ii) the method of partial least squares regression is superior in terms of precision of past temperature predictions.
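
A minimal scikit-learn sketch of the two approaches named above, assuming synthetic stand-ins for the 70 tree-ring proxy series and the instrumental calibration temperatures (the HadCRUT4 data are not reproduced here):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Stand-ins: 79 calibration years (1902-1980), 70 proxy series.
proxies = rng.normal(size=(79, 70))
temperature = proxies[:, :5].sum(1) + rng.normal(scale=0.5, size=79)

# Principal component regression: PCA then ordinary least squares.
pcr = make_pipeline(PCA(n_components=10), LinearRegression())
pcr.fit(proxies, temperature)

# Partial least squares: components chosen for covariance with the target.
pls = PLSRegression(n_components=10).fit(proxies, temperature)

print("PCR R^2:", pcr.score(proxies, temperature))
print("PLS R^2:", pls.score(proxies, temperature))
```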
22

Chang, Hung Chao Mike. "Non-linearity and dimensionality in optical heating". Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/56177.

Abstract:
One of the most important hurdles in electron-beam technologies such as thermionic energy conversion and parallel-beam lithography is having a high-performance electron source (cathode) material. Both of these applications, directly or indirectly, would benefit from a material’s ability to be heated efficiently through localized optical heating. Similarly, the main objective of thermoelectrics research is to maintain a high temperature gradient without hindering electrical conductivity, in order to increase the energy conversion efficiency. For this, many researchers have been pursuing the development of complex crystals with a host-and-rattling compound structure to reduce thermal conductivity. Recently, localized heating with a temperature rise of a few thousand Kelvins has been induced by a low-power laser beam (< 50 mW) on the side-wall of a vertically-aligned carbon nanotube (CNT) forest. Given the excellent thermal conductivity of CNTs, such localized heating is very counterintuitive, and proper understanding of this phenomenon is necessary in order to use it for applications in thermionics and thermoelectrics. Here, an analytical formulation for solving the associated non-linear inhomogeneous heat problem through a Green’s function-based approach will be introduced. The application of this formulation to bulk metals, semiconductors, and different allotropes of carbon will be discussed. In particular, a systematic investigation will be presented on the effect of the material dimensionality and non-linear dependence of thermal conductivity on temperature. It will be shown that, if thermal conductivity is assumed to be constant, the peak temperature is proportional to the linear power density up to temperatures where radiative loss becomes significant. On the other hand, if the thermal conductivity falls with temperature, a significantly higher peak temperature and temperature gradient can be achieved. Furthermore, reducing the dimensionality of a material (going from a three-dimensional to a one-dimensional form) can lead to a significantly higher peak temperature and temperature gradient.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
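
The scaling claim in the abstract (with constant conductivity, the peak temperature rise is proportional to the linear power density) follows from the linearity of the steady-state heat equation; a sketch of the Green's-function argument, with the symbols below chosen for illustration:

```latex
% Steady-state heat conduction with a source term q(r).
\[
  \nabla \cdot \bigl( \kappa \, \nabla T \bigr) = -\,q(\mathbf{r})
  \qquad \Rightarrow \qquad
  \kappa \, \nabla^{2} T = -\,q(\mathbf{r}) \quad (\kappa\ \text{constant})
\]
% Linearity: the solution is a superposition over the source, so scaling
% the linear power density q' scales the peak temperature rise.
\[
  T(\mathbf{r}) = \frac{1}{\kappa} \int G(\mathbf{r}, \mathbf{r}')\,
  q(\mathbf{r}')\, \mathrm{d}^{3}r'
  \qquad \Longrightarrow \qquad
  \Delta T_{\max} \propto q'
\]
```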
23

Lampen, Kelley Paula J. "Low Dimensionality Effects in Complex Magnetic Oxides". Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5874.

Abstract:
Complex magnetic oxides represent a unique intersection of immense technological importance and fascinating physical phenomena originating from interwoven structural, electronic and magnetic degrees of freedom. The resulting energetically close competing orders can be controllably selected through external fields. Competing interactions and disorder represent an additional opportunity to systematically manipulate the properties of pure magnetic systems, leading to frustration, glassiness, and other novel phenomena while finite sample dimension plays a similar role in systems with long-range cooperative effects or large correlation lengths. A rigorous understanding of these effects in strongly correlated oxides is key to manipulating their functionality and device performance, but remains a challenging task. In this dissertation, we examine a number of problems related to intrinsic and extrinsic low dimensionality, disorder, and competing interactions in magnetic oxides by applying a unique combination of standard magnetometry techniques and unconventional magnetocaloric effect and transverse susceptibility measurements. The influence of dimensionality and disorder on the nature and critical properties of phase transitions in manganites is illustrated in La0.7Ca0.3MnO3, in which both size reduction to the nanoscale and chemically-controlled quenched disorder are observed to induce a progressive weakening of the first-order nature of the transition, despite acting through the distinct mechanisms of surface effects and site dilution. In the second-order material La0.8Ca0.2MnO3, a strong magnetic field is found to drive the system toward its tricritical point as competition between exchange interactions in the inhomogeneous ground state is suppressed. In the presence of large phase separation stabilized by chemical disorder and long-range strain, dimensionality has a profound effect. With the systematic reduction of particle size in microscale-phase-separated (La, Pr, Ca)MnO3 we observe a disruption of the long-range glassy strains associated with the charge-ordered phase in the bulk, lowering the field and pressure threshold for charge-order melting and increasing the ferromagnetic volume fraction as particle size is decreased. The long-range charge-ordered phase becomes completely suppressed when the particle size falls below 100 nm. In contrast, low dimensionality in the geometrically frustrated pseudo-1D spin chain compound Ca3Co2O6 is intrinsic, arising from the crystal lattice. We establish a comprehensive phase diagram for this exotic system consistent with recent reports of an incommensurate ground state and identify new sub-features of the ferrimagnetic phase. When defects in the form of grain boundaries are incorporated into the system the low-temperature slow-dynamic state is weakened, and new crossover phenomena emerge in the spin relaxation behavior along with an increased distribution of relaxation times. The presence of both disorder and randomness leads to a spin-glass-like state, as observed in γFe2O3 hollow nanoparticles, where freezing of surface spins at low temperature generates an irreversible magnetization component and an associated exchange-biasing effect. Our results point to distinct dynamic behaviors on the inner and outer surfaces of the hollow structures. 
Overall, these studies yield new physical insights into the role of dimensionality and disorder in these complex oxide systems and highlight the sensitivity of their manifested magnetic ground states to extrinsic factors, leading in many cases to crossover behaviors where the balance between competing phases is altered, or to the emergence of entirely new magnetic phenomena.
24

Purdie, Stuart. "Magnetic ordering in systems of reduced dimensionality". Thesis, University of St Andrews, 2005. http://hdl.handle.net/10023/12927.

Abstract:
The magnetic behaviour of thin films of (111) FCC structures and (0001) corundum structured materials were studied by the mean field analysis and some Monte Carlo simulation. These models were conditioned on a mapping from first principles calculations to the Ising model. The effect of the suggested octopolar reconstruction for the polar (111) surfaces of FCC was also examined.
25

Remmert, Sarah M. "Reduced dimensionality quantum dynamics of chemical reactions". Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:7f96405f-105c-4ca3-9b8a-06f77d84606a.

Abstract:
In this thesis a reduced dimensionality quantum scattering model is applied to the study of polyatomic reactions of type X + CH4 <--> XH + CH3. Two-dimensional quantum scattering of the symmetric hydrogen exchange reaction CH3 + CH4 <--> CH4 + CH3 is performed on an 18-parameter double-Morse analytical function derived from ab initio calculations at the CCSD(T)/cc-pVTZ//MP2/cc-pVTZ level of theory. Spectator mode motion is approximately treated via inclusion of curvilinear or rectilinear projected zero-point energies in the potential surface. The close-coupled equations are solved using R-matrix propagation. The state-to-state probabilities and integral and differential cross sections show the reaction to be primarily vibrationally adiabatic and backwards scattered. Quantum properties such as heavy-light-heavy oscillating reactivity and resonance features significantly influence the reaction dynamics. Deuterium substitution at the primary site is the dominant kinetic isotope effect. Thermal rate constants are in excellent agreement with experiment. The method is also applied to the study of electronically nonadiabatic transitions in the CH3 + HCl <--> CH4 + Cl(2PJ) reaction. Electrovibrational basis sets are used to construct the close-coupled equations, which are solved via R-matrix propagation using a system of three potential energy surfaces coupled by spin-orbit interaction. Ground and excited electronic surfaces are developed using a 29-parameter double-Morse function with ab initio data at the CCSD(T)/cc-pV(Q+d)Z-dk//MP2/cc-pV(T+d)Z-dk level of theory, and with basis-set-extrapolated data, both corrected via curvilinear projected spectator zero-point energies. Coupling surfaces are developed by fitting MCSCF/cc-pV(T+d)Z-dk ab initio spin-orbit constants to 8-parameter functions. Scattering calculations are performed for the ground adiabatic and coupled surface models, and reaction probabilities, thermal rate constants and integral and differential cross sections are presented. Thermal rate constants on the basis-set-extrapolated surface are in excellent agreement with experiment. Characterisation of electronically nonadiabatic nonreactive and reactive transitions indicates the close correlation between vibrational excitation and nonadiabatic transition. A model for comparing the nonadiabatic cross section branching ratio to experiment is discussed.
26

Li, Caihong. "The Short Grit Scale: A Dimensionality Analysis". UKnowledge, 2015. http://uknowledge.uky.edu/edp_etds/33.

Abstract:
This study aimed to examine the internal structure, score reliability, scoring, and interpretation of the Short Grit Scale (Grit-S; Duckworth & Quinn, 2009) using a sample of engineering students (N = 610) from one large southeastern university located in the United States. Confirmatory factor analysis was used to compare four competing theoretical models: (a) a unidimensional model, (b) a two-factor model, (c) a second-order model, and (d) a bi-factor model. Given that researchers have used the Grit-S as a single factor, a unidimensional model was examined. Two-factor and second-order models were considered based upon the work done by Duckworth, Peterson, Matthews, and Kelly (2007), and Duckworth and Quinn (2009). Finally, Reise, Morizot, and Hays (2007) have suggested a bi-factor model be considered when dealing with multidimensional scales, given its ability to inform decisions about the dimensionality and scoring of instruments consisting of heterogeneous item content. Findings from this study show that the Grit-S was best represented by a bi-factor solution. Results indicate that the general grit factor possesses satisfactory score reliability and information; however, the results are not entirely clear or supportive of subscale scoring for either the consistency-of-effort or the interest subscale. The implications of these findings and future research are discussed.
27

Musco, Christopher Paul. "Dimensionality reduction for sparse and structured matrices". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99856.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 97-103).
Dimensionality reduction has become a critical tool for quickly solving massive matrix problems. Especially in modern data analysis and machine learning applications, an overabundance of data features or examples can make it impossible to apply standard algorithms efficiently. To address this issue, it is often possible to distill data to a much smaller set of informative features or examples, which can be used to obtain provably accurate approximate solutions to a variety of problems. In this thesis, we focus on the important case of dimensionality reduction for sparse and structured data. In contrast to popular structure-agnostic methods like Johnson-Lindenstrauss projection and PCA, we seek data compression techniques that take advantage of structure to generate smaller or more powerful compressions. Additionally, we aim for methods that can be applied extremely quickly - typically in linear or nearly linear time in the input size. Specifically, we introduce new randomized algorithms for structured dimensionality reduction that are based on importance sampling and sparse-recovery techniques. Our work applies directly to accelerating linear regression and graph sparsification, and we discuss connections and possible extensions to low-rank approximation, k-means clustering, and several other ubiquitous matrix problems.
by Christopher Paul Musco.
S.M.
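
A toy sketch of the importance-sampling idea applied to linear regression: sample rows with probability proportional to their statistical leverage scores, reweight, and solve the smaller problem. Leverage scores are computed naively via QR here; the thesis is precisely about obtaining such compressions faster for sparse and structured inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5000, 20, 400
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Statistical leverage scores: squared row norms of an orthonormal basis.
Q, _ = np.linalg.qr(A)
lev = (Q ** 2).sum(1)
p = lev / lev.sum()

# Sample m rows with probability p and reweight to keep estimates unbiased.
idx = rng.choice(n, size=m, p=p)
w = 1.0 / np.sqrt(m * p[idx])
A_s, b_s = A[idx] * w[:, None], b[idx] * w

x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch, *_ = np.linalg.lstsq(A_s, b_s, rcond=None)
print("parameter error:", np.linalg.norm(x_full - x_sketch))  # small
```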
28

Brewer, Matthew S. "Magnetic interactions in systems with reduced dimensionality". Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/61712/.

Abstract:
Detailed knowledge of the interaction of magnetic moments is key to developing the next generation of magnetic devices. Systems with induced moments provide an ideal regime in which to study this fundamental behaviour. Resonant x-ray scattering and polarised neutron reflectivity are complementarily used to map induced moment profiles in continuous FeZr/CoZr multilayer films and both continuous and patterned Pd/Fe/Pd trilayer films. Resonant scattering is additionally employed to measure the dimensionality of the magnetic lattice through observations of the magnetic ordering behaviour. The shape and extent of the induced profiles was resolved with unprecedented accuracy, and was found to conform to the theoretical expectation: all profiles decayed exponentially from the inducing material, with an extent in the nm regime. Adjacent magnetic lattices were found to interact only through the magnitude of their moments, acting through the magnetic susceptibility of the induced material. The dimensionality of adjacent magnetic lattices were therefore found to be independent. Additionally, a significant, and unexpected, observation was made of the thinnest Fe layers studied: the Fe moment was seen to vanish though a pronounced induced moment remained in the neighbouring Pd. The definitive cause of this unusual behaviour has yet to be discovered. In the patterned materials, the interaction between adjacent islands was found to contribute minimally to the overall behaviour in the geometry studied. The energy cost of rotating the moments within an individual island was the dominating contribution to the magnetic ordering behavior. These results provide new insight into the coupling mechanisms between adjacent moments, while revealing new complexities that provide the foundation for further study.
29

Mares, Mihaela Andreea. "Variable selection in the curse of dimensionality". Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/45463.

Abstract:
High-throughput technologies nowadays are leading to massive availability of data to be explored. Therefore, we are keen to build mathematical and statistical methods for extracting as much value from the available data as possible. However, the large dimensionality in terms of both sample size and number of features or variables poses new challenges. The large number of samples can be tackled more easily by increasing computational power and making use of distributed computation technologies. The large number of features or variables poses the risk of explaining variation in both noise and signal with the wrong explanatory variables. One approach to overcome this problem is to select a smaller set of features from the initial set which are most relevant given an assumed prediction model. This approach is called variable or feature selection and implies using a bias or statistical assumption about which features should be considered more relevant. Different feature selection methods use different statistical assumptions about the mathematical relation between predicted and explanatory variables and about which explanatory variables should be considered more relevant. Our first contribution in this thesis is to combine the strengths of different variable selection methods relying on different statistical assumptions. We start by classifying existing feature selection methods based on their assumptions and assessing their capacity to scale for high-dimensional data, particularly when the number of samples is much smaller than the number of features. We propose a new algorithm consisting of combining results from different feature selection methods relying on disjoint assumptions about the function that generated the data, and we show that our method will lead to better sensitivity than using each method individually. The assumption of a linear relationship between the predicted variable and the explanatory variables is one of the most widely used simplifying assumptions. Our second contribution is to prove that at least one feature selection algorithm based on the linearity assumption is consistent even when the underlying function that generated the data is not necessarily linear. Based on these theoretical findings we propose a new algorithm which provides better results when the underlying function that generated the data is at most partially linear. Neural networks, and in particular deep learning architectures, have been shown to be able to fit highly non-linear prediction models when given sufficient training examples. However, they do not embed feature selection mechanisms. We contribute by assessing the performance of these models when given a large number of features and fewer samples, proposing a method for feature selection and showing in which circumstances combining this feature selection method with deep learning architectures will outperform not using feature selection. Several feature selection methods, as well as the new methods we have proposed in this thesis, rely on re-sampling techniques or using different algorithms for the same dataset. Their advantage is partially gained by using extra computational power. Therefore, our last contribution consists of an efficient data distribution and load-balanced parallel calculation for re-sampling based algorithms.
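
A toy sketch of the first contribution's idea: take the union of features selected under two different statistical assumptions (an L1-penalized linear model and a nonparametric tree ensemble). The data, thresholds, and selectors are illustrative; the thesis's actual combination algorithm and its sensitivity analysis are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

# Many features, few samples: n = 100, p = 1000, 10 informative.
X, y = make_regression(n_samples=100, n_features=1000, n_informative=10,
                       noise=1.0, random_state=0)

# Selector 1: sparse linear model (linearity assumption).
lasso = LassoCV(cv=5, max_iter=5000, random_state=0).fit(X, y)
linear_picks = set(np.flatnonzero(lasso.coef_))

# Selector 2: tree ensemble (no linearity assumption).
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
tree_picks = set(np.argsort(forest.feature_importances_)[-20:])

selected = sorted(linear_picks | tree_picks)
print(len(selected), "features selected")
```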
30

Milsom, Philip Keith. "Electron transport in systems of reduced dimensionality". Thesis, University of Warwick, 1987. http://wrap.warwick.ac.uk/106997/.

Abstract:
The Boltzmann equation is modified to examine the effects of a range of scattering mechanisms on the DC conductivity of semiconductor material in the form of thin sheets and fine wires. This is solved exactly for elastic scattering mechanisms by introducing a set of momentum relaxation times which are relevant to the occupied sub-bands. These times are calculated for alloy scattering, surface roughness scattering and the acoustic phonon mechanisms at high temperatures. At low temperatures the inelasticity of the acoustic phonon mechanisms is taken into account and a variational method is employed. At very low temperatures we show that the acoustic deformation potential gives rise to a mobility which varies as T⁻⁵. We use an iterative method to examine the strongly inelastic polar optic phonon scattering mechanism in a wire. Ridley has suggested that the momentum relaxation time may be negative in this system. We introduce a time relevant to transport measurements and this is found to be positive. It is shown that the time derived by Ridley may be of relevance to time-resolved transport measurements.
31

Beach, David J. "Anomaly Detection with Advanced Nonlinear Dimensionality Reduction". Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1378.

Abstract:
Dimensionality reduction techniques such as t-SNE and UMAP are useful both for overview of high-dimensional datasets and as part of a machine learning pipeline. These techniques create a non-parametric model of the manifold by fitting a density kernel about each data point using the distances to its k-nearest neighbors. In dense regions, this approach works well, but in sparse regions, it tends to draw unrelated points into the nearest cluster. Our work focuses on a homotopy method which imposes graph-based regularization over the manifold parameters to update the embedding. As the homotopy parameter increases, so does the cost of modeling different scales between adjacent neighborhoods. This gradually imposes a more uniform scale over the manifold, resulting in a more faithful embedding which preserves structure in dense areas while pushing sparse anomalous points outward.
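
For reference, a minimal scikit-learn example of the kind of neighbor-based, kernel-density embedding the thesis builds on (t-SNE here); the graph-regularized homotopy update proposed in the thesis is not part of scikit-learn and is not shown.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X = load_digits().data  # 1797 samples, 64 features

# t-SNE fits a density kernel around each point using its nearest
# neighbors; perplexity controls the effective neighborhood size.
emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(X)
print(emb.shape)  # (1797, 2)
```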
32

Brinckerhoff, William B. "Dimensionality and disorder in molecule-based magnets". The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1343143081.

33

Dwivedi, Saurabh. "Dimensionality reduction for data driven process modeling". University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1069770129.

34

Xu, Nuo. "Aggressive dimensionality reduction for data-driven modeling". University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1178640357.

35

Welshman, Christopher. "Dimensionality reduction for dynamical systems with parameters". Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/dimensionality-reduction-for-dynamical-systems-with-parameters(69dab7de-b1dd-4d74-901f-61e02decf16a).html.

Abstract:
Dimensionality reduction methods allow for the study of high-dimensional systems by producing low-dimensional descriptions that preserve the relevant structure and features of interest. For dynamical systems, attractors are particularly important examples of such features, as they govern the long-term dynamics of the system, and are typically low-dimensional even if the state space is high- or infinite-dimensional. Methods for reduction need to be able to determine a suitable reduced state space in which to describe the attractor, and to produce a reduced description of the corresponding dynamics. In the presence of a parameter space, a system can possess a family of attractors. Parameters are important quantities that represent aspects of the physical system not directly modelled in the dynamics, and may take different values in different instances of the system. Therefore, including the parameter dependence in the reduced system is desirable, in order to capture the model's full range of behaviour. Existing methods typically involve algebraically manipulating the original differential equation, either by applying a projection, or by making local approximations around a fixed-point. In this work, we take more of a geometric approach, both for the reduction process and for determining the dynamics in the reduced space. For the reduction, we make use of an existing secant-based projection method, which has properties that make it well-suited to the reduction of attractors. We also regard the system to be a manifold and vector field, consider the attractor's normal and tangent spaces, and the derivatives of the vector field, in order to determine the desired properties of the reduced system. We introduce a secant culling procedure that allows for the number of secants to be greatly reduced in the case that the generating set explores a low-dimensional space. This reduces the computational cost of the secant-based method without sacrificing the detail captured in the data set. This makes it feasible to use secant-based methods with larger examples. We investigate a geometric formulation of the problem of dimensionality reduction of attractors, and identify and resolve the complications that arise. The benefit of this approach is that it is compatible with a wider range of examples than conventional approaches, particularly those with angular state variables. In turn this allows for application to non-autonomous systems with periodic time-dependence. We also adapt secant-based projection for use in this more general setting, which provides a concrete method of reduction. We then extend the geometric approach to include a parameter space, resulting in a family of vector fields and a corresponding family of attractors. Both the secant-based projection and the reproduction of dynamics are extended to produce a reduced model that correctly responds to the parameter dependence. The method is compatible with multiple parameters within a given region of parameter space. This is illustrated by a variety of examples.
APA, Harvard, Vancouver, ISO, etc. styles
36

Chang, Kui-yu. "Nonlinear dimensionality reduction using probabilistic principal surfaces". Digital version, 2000. http://wwwlib.umi.com/cr/utexas/main.

Full source text
APA, Harvard, Vancouver, ISO, etc. styles
37

WANG, Xinguang. "The Dimensionality and Control of Human Walking". Thesis, The University of Sydney, 2012. http://hdl.handle.net/2123/8945.

Full source text
Abstract:
The aim of the work presented in this thesis was to investigate the control mechanism of human walking. From motor control theory, a motor synergy has two main features, sharing and error compensation (Latash, 2008). Therefore, this thesis focused on these two aspects of the mechanism by investigating the coupling and correlations between the joint angles, and the variability due to the compensation of “errors” during walking. Thus, a more complete picture of walking in terms of coordination and control would be drawn. In order to evaluate the correlations between joint angles and detect the dimensionality of human walking, a new approach was developed, as presented in Chapter 3, that overcame an important limitation of current methods for assessing the dimensionality of data sets. In Chapter 4, this new method is applied to 40 whole body joint angles to detect the coordinative structure of walking. Chapters 5 and 6 focus on between-subject and within-subject kinematic variability of walking, respectively, and investigate the effects of gender and speed on variability. The findings on walking variability inspired us to further determine the relationships between joint angles and walking speed, the results of which are shown in Chapter 7. A summary of each individual study is presented in the following text. Chapter 3: Principal components analysis is a powerful and popular technique for the decomposition of muscle activity and kinematic patterns into independent modular components or synergies. The analysis is based on a matrix of either correlations or covariances between all pairs of signals in the data set. A primary limitation of such matrices is that they do not account for dynamic relations between signals (characterised by phase differences or frequency-dependent variations in amplitude ratio), yet such relations are widespread in the sensorimotor system. Low correlations may thus be obtained and signals may appear ‘independent’ despite a dynamic linear relation between them. To address this limitation, the matrix of overall coherence values between signal pairs may be used. Overall coherence can be calculated using linear systems analysis and provides a measure of the strength of the relationship between signals taking both phase differences and frequency-dependent variation in amplitude ratio into account. Using the ankle, knee and hip sagittal-plane angles from six healthy subjects during over-ground walking at preferred speed, it is shown that with conventional correlation matrices the first principal component accounted for ~50% of total variance in the data set, while with overall coherence matrices the first component accounted for > 95% of total variance. The results demonstrate that the dimensionality of the coordinative structure can be overestimated using conventional correlation, whereas with overall coherence a more parsimonious structure is identified. Overall coherence can enhance the power of principal components analysis in capturing redundancy in human motor output. Chapter 4: The control of human movement is simplified by organising actions into linkages or couplings between body segments known as ‘synergies’. Many studies have supported the existence of ‘synergies’ during human walking and demonstrated that multi-segmental movements are highly coupled and correlated.
Since correlations in the movements between body segments can be used to understand the control of walking by identifying synergies, the nature of the coordinative structure of walking was investigated. Principal components analysis uses information about the relationship between segments in movement and can identify independent synergies. A dynamic linear systems analysis was employed to compute the overall coherence between the movements of body segments. This is a measure of the strength of the relationship between movements where both amplitude and phase differences in the movements can be accounted for. In contrast, the Pearson product-moment correlation coefficient only accounts for amplitude differences in the movements. Therefore, overall coherence was assumed to be a better estimate of the true relationship between segments. The present study investigated whole body movement in terms of 40 joint angles during normal walking. Principal components analysis showed that one synergy (component) could cumulatively account for over 86% of total variance when applying overall coherence, while seven components were required when using the Pearson correlation coefficient. The findings suggested that the relationships between joint angles are more complex than the simple linear relations described by the Pearson correlation coefficient. When the dynamic linear relation was considered, a higher correlation between joint angles and a greater reduction in degrees of freedom could be obtained. The coordinative structure of human walking could therefore be low dimensional and even simply explained by a single component. An additional degree of freedom could be required to perform an additional voluntary task during walking by superimposing the voluntary task control signal on the basic walking motor control program. Chapter 5: Walking is a complex task which requires coordinated movement of many body segments. As a practised motor skill, walking has a low level of variability. Information regarding the variability of walking can provide valuable insight into control mechanisms and locomotor deficits. Most previous studies have assessed the stride-to-stride walking variability within subjects; little information is available for between-subject variability, especially for whole body movement. This information could provide an indication of how similar the control mechanism is between subjects during walking. Forty joint angles from the whole body were recorded using a motion analysis system in 22 healthy subjects at four walking speeds. The between-subject variability of the waveform patterns of the joint angles was evaluated using the amplitude of the mean kinematic pattern (MP) and the standard deviation of the pattern (SDP) for each angle. Regression analyses of SDP onto MP showed that at each walking speed, SDP across subjects increased with MP at a similar rate for all angles except the hip and knee in the sagittal plane. This may indicate a different control mechanism for hip and knee sagittal-plane movements, which had a lower ‘signal to noise’ ratio than all other angles. A strong linear relationship was observed between SDP and MP for all joint angles. The variability between male subjects was comparable to the variability between female subjects.
A trend of decreasing slopes of the regression lines with walking speed was observed, with fast walking showing the least variability, possibly reflecting higher angular accelerations producing a greater ‘tightening’ of the joints compared to slow walking, so that the rate of increase of waveform variability with increased waveform magnitude is reduced. The existence of an intercept other than zero in the SDP vs MP relations suggested that the coefficient of variation should be used carefully when quantifying kinematic walking variability, because it may contain sources of variability independent of the mean amplitude of the angles. Chapter 6: Although most previous studies of walking variability have examined within-subject variability, little information is available for the variability of the whole body. This study measured the within-subject variability of both upper and lower body joint angles to increase the understanding of the mechanism of whole body movement. Whereas the between-subject variability was investigated in Chapter 5, the within-subject variability of the waveform patterns of the joint angles was evaluated here, again using the amplitude of the mean kinematic pattern (MP) and the standard deviation of the pattern (SDP) for each angle. The within-subject variability was clearly less than the between-subject variability reported in Chapter 5, showing, as would be expected, that the repeatability of joint motion was greater within than across individuals. The results again showed that hip and knee flexion-extension demonstrated a consistently lower variability compared to all other joint angles. Comparison of males and females showed that the repeatability of joint motion was lower in females, this difference being mostly centred around the angles of the foot. The within-subject variability showed a quadratic relationship with walking speed, with minimum variability at preferred speed. Analysis of the regressions between SDP and MP of the joint angles also showed significant differences between females and males, with females showing a higher slope of the SDP and MP relation. As was the case for between-subject variability, the slopes of the SDP vs MP regression lines again decreased with walking speed for within-subject variability. Chapter 7: The relationship between walking parameters and speed has been widely investigated, but most studies have investigated only a few joint angles and little has been reported about the relationship between the kinematics of the upper body and walking speed. In this study the relationship between walking speed and the range of the joint angles was evaluated. Linear correlations with walking speed were observed in both upper and lower body joint angles. Different mechanisms may be applied by the upper and lower limbs in relation to changes in walking speed. While hip and knee flexion-extension were found to play the most important role in changing walking speed, changes of large magnitude associated with walking speed occurred at the shoulder, elbow and trunk, apparently the result of changes in balance requirements and to help stabilise the body motion.
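The contrast between Pearson correlation and overall coherence described in Chapters 3 and 4 can be illustrated with a small sketch. In the thesis, overall coherence is derived from a linear systems analysis; the power-weighted magnitude-squared coherence below is only a stand-in for it, and the phase-shifted "joint angles" are synthetic.

```python
import numpy as np
from scipy.signal import coherence, welch

def overall_coherence(x, y, fs):
    """Power-weighted average of the magnitude-squared coherence.

    A stand-in for the thesis's 'overall coherence': unlike the
    Pearson correlation, it is insensitive to phase lags.
    """
    _, cxy = coherence(x, y, fs=fs)
    _, pxx = welch(x, fs=fs)
    _, pyy = welch(y, fs=fs)
    w = np.sqrt(pxx * pyy)            # weight coherence by signal power
    return float((cxy * w).sum() / w.sum())

def pc1_share(matrix):
    """Fraction of total variance carried by the first principal component."""
    eigvals = np.linalg.eigvalsh(matrix)
    return eigvals[-1] / eigvals.sum()

# two 'joint angles' related by a pure 90-degree phase shift: Pearson
# correlation sees them as unrelated, coherence does not
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * t + np.pi / 2) + 0.05 * rng.normal(size=t.size)

C_corr = np.abs(np.corrcoef(np.vstack([x, y])))
c = overall_coherence(x, y, fs=200.0)
C_coh = np.array([[1.0, c], [c, 1.0]])
print("PC1 share, correlation:", round(pc1_share(C_corr), 2))  # ~0.5
print("PC1 share, coherence:  ", round(pc1_share(C_coh), 2))   # ~1.0
```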
APA, Harvard, Vancouver, ISO, etc. styles
38

Baldiwala, Aliakbar. "Dimensionality Reduction for Commercial Vehicle Fleet Monitoring". Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38330.

Full source text
Abstract:
A variety of new features have been added to present-day vehicles, such as pre-crash warning, vehicle-to-vehicle communication, semi-autonomous driving systems, telematics and drive-by-wire. These features demand very high bandwidth from in-vehicle networks. The various electronic control units inside a vehicle transmit useful information via automotive multiplexing, which allows information to be shared among the intelligent modules of an automotive electronic system. Optimum functionality is achieved by transmitting this data in real time. The high-bandwidth, high-speed requirement can be met either by using multiple buses or by implementing a higher-bandwidth bus, but doing so increases the cost of the network and the complexity of the wiring in the vehicle. Another option is to implement a higher-layer protocol that reduces the amount of data transferred using data reduction (DR) techniques, thus reducing bandwidth usage. The implementation cost is minimal, as changes are required only in software and not in hardware. In our work, we present a new data reduction algorithm termed the "Comprehensive Data Reduction (CDR)" algorithm. The proposed algorithm is used to minimise the bus utilization of the CAN bus of a future vehicle. The reduction in bus load was achieved by compressing the parameters, so that more messages, including lower-priority messages, can be sent efficiently on the CAN bus. The work also presents a performance analysis of the proposed algorithm against the Boundary of Fifteen compression algorithm and compression area selection algorithms (existing data reduction algorithms). The results of the analysis show that the proposed CDR algorithm provides better data reduction than the earlier algorithms. Promising results were obtained in terms of reduction in bus utilization, compression efficiency, and percent peak load of the CAN bus. This reduction in bus utilization permits a larger number of network nodes (ECUs) to be used in the existing system without increasing the overall cost of the system. The proposed algorithm was developed for the automotive environment, but it can also be used in any application where extensive information transmission among various control units is carried out via a multiplexing bus.
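The CDR algorithm itself is not specified in the abstract, but the general principle behind CAN data reduction can be sketched: transmit a change mask plus only the payload bytes that differ from the previously sent frame. The following is a hedged illustration of that idea, not the thesis's algorithm.

```python
def compress_frame(prev: bytes, curr: bytes) -> bytes:
    """Delta-compress an 8-byte CAN payload against the previous one.

    Returns a 1-byte change mask followed by only the changed bytes;
    a sketch of the general data-reduction idea, not the CDR
    algorithm from the thesis (whose details are not given here).
    """
    mask = 0
    changed = bytearray()
    for i, (p, c) in enumerate(zip(prev, curr)):
        if p != c:
            mask |= 1 << i            # bit i set: byte i was retransmitted
            changed.append(c)
    return bytes([mask]) + bytes(changed)

def decompress_frame(prev: bytes, packet: bytes) -> bytes:
    """Rebuild the current payload from the previous one plus the delta."""
    mask, payload = packet[0], iter(packet[1:])
    return bytes(next(payload) if mask & (1 << i) else p
                 for i, p in enumerate(prev))

prev = bytes([0x10, 0x20, 0x30, 0x40, 0x50, 0x60, 0x70, 0x80])
curr = bytes([0x10, 0x21, 0x30, 0x40, 0x50, 0x60, 0x70, 0x81])
pkt = compress_frame(prev, curr)
assert decompress_frame(prev, pkt) == curr
print(f"8-byte payload sent as {len(pkt)} bytes")  # 3 bytes here
```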
APA, Harvard, Vancouver, ISO, etc. styles
39

Tosi, Alessandra. "Visualization and interpretability in probabilistic dimensionality reduction models". Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/285013.

Full source text
Abstract:
Over the last few decades, data analysis has swiftly evolved from being a task addressed mainly within the remit of multivariate statistics, to an endeavour in which data heterogeneity, complexity and even sheer size, driven by computational advances, call for alternative strategies, such as those provided by pattern recognition and machine learning. Any data analysis process aims to extract new knowledge from data. Knowledge extraction is not a trivial task and it is not limited to the generation of data models or the recognition of patterns. The use of machine learning techniques for multivariate data analysis should in fact aim to achieve a dual target: interpretability and good performance. At best, both aspects of this target should not conflict with each other. This gap between data modelling and knowledge extraction must be acknowledged, in the sense that we can only extract knowledge from models through a process of interpretation. Exploratory information visualization is becoming a very promising tool for interpretation. When exploring multivariate data through visualization, high data dimensionality can be a big constraint, and the use of dimensionality reduction techniques is often compulsory. The need to find flexible methods for data modelling has led to the development of non-linear dimensionality reduction techniques, and many state-of-the-art approaches of this type fall in the domain of probabilistic modelling. These non-linear techniques can provide a flexible data representation and a more faithful model of the observed data compared to the linear ones, but often at the expense of model interpretability, which has an impact on the model visualization results. In manifold learning non-linear dimensionality reduction methods, when a high-dimensional space is mapped onto a lower-dimensional one, the obtained embedded manifold is subject to local geometrical distortion induced by the non-linear mapping. This kind of distortion can often lead to misinterpretations of the data set structure and of the obtained patterns. It is important to give relevance to the problem of how to quantify and visualize the distortion itself in order to interpret data in a more faithful way. The research reported in this thesis focuses on the development of methods and techniques for explicitly reintroducing the local distortion created by non-linear dimensionality reduction models into the low-dimensional visualization of the data that they produce, as well as on the definition of metrics for probabilistic geometries to address this problem. We provide methods not only for static data, but also for multivariate time series. The reintegration of the quantified non-linear distortion into the visualization space of the analysed non-linear dimensionality reduction methods is a goal by itself, but we go beyond it and consider alternative adequate metrics for probabilistic manifold learning. For that, we study the role of random geometries, that is, distributions of manifolds, in machine learning and data analysis in general. Methods for the estimation of distributions of data-supporting Riemannian manifolds as well as algorithms for computing interpolants over distributions of manifolds are defined. Experimental results show that inference made according to the random Riemannian metric leads to a more faithful generation of unobserved data.
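A crude way to make the notion of local embedding distortion tangible is to compare distances among each point's neighbours before and after a non-linear mapping. The sketch below uses a swiss-roll dataset and t-SNE as stand-ins; the thesis develops principled magnification-factor and Riemannian-metric machinery rather than this distance-ratio proxy.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def local_distortion(X_high, X_low, k=10):
    """Per-point distortion of an embedding: mean ratio of embedded to
    original distances over each point's k nearest neighbours.

    A crude proxy for the local distortion the thesis reintroduces
    into the visualization; values far from the typical ratio flag
    regions where the map stretches or compresses the data.
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_high)
    dist_high, idx = nn.kneighbors(X_high)
    ratios = np.empty(len(X_high))
    for i, (d_h, nbrs) in enumerate(zip(dist_high[:, 1:], idx[:, 1:])):
        d_l = np.linalg.norm(X_low[nbrs] - X_low[i], axis=1)
        ratios[i] = np.mean(d_l / d_h)
    return ratios

X, _ = make_swiss_roll(n_samples=500, random_state=0)
Y = TSNE(n_components=2, random_state=0).fit_transform(X)
r = local_distortion(X, Y)
print(f"distortion ratio: median {np.median(r):.2f}, "
      f"max/min {r.max() / r.min():.1f}x")  # colour points by r to visualize
```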
APA, Harvard, Vancouver, ISO, etc. styles
40

Guo, Hong. "Feature generation and dimensionality reduction using genetic programming". Thesis, University of Liverpool, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511054.

Full source text
APA, Harvard, Vancouver, ISO, etc. styles
41

Kalamaras, Ilias. "A novel approach for multimodal graph dimensionality reduction". Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/42224.

Full source text
Abstract:
This thesis deals with the problem of multimodal dimensionality reduction (DR), which arises when the input objects, to be mapped on a low-dimensional space, consist of multiple vectorial representations, instead of a single one. Herein, the problem is addressed in two alternative ways. One is based on the traditional notion of modality fusion, but using a novel approach to determine the fusion weights. In order to optimally fuse the modalities, the known graph embedding DR framework is extended to multiple modalities by considering a weighted sum of the involved affinity matrices. The weights of the sum are automatically calculated by minimizing an introduced notion of inconsistency of the resulting multimodal affinity matrix. The other way of dealing with the problem is to consider all modalities simultaneously, without fusing them, which has the advantage of minimal information loss due to fusion. In order to avoid fusion, the problem is viewed as a multi-objective optimization problem. The multiple objective functions are defined based on graph representations of the data, so that their individual minimization leads to dimensionality reduction for each modality separately. The aim is to combine the multiple modalities without the need to assign importance weights to them, or at least to postpone such an assignment as a last step. The proposed approaches were experimentally tested in mapping multimedia data onto low-dimensional spaces for purposes of visualization, classification and clustering. The no-fusion approach, namely Multi-objective DR, was able to discover mappings revealing the structure of all modalities simultaneously, which cannot be discovered by weight-based fusion methods. However, it results in a set of optimal trade-offs, from which one needs to be selected, which is not trivial. The optimal-fusion approach, namely Multimodal Graph Embedding DR, is able to easily extend unimodal DR methods to multiple modalities, but depends on the limitations of the unimodal DR method used. Both the no-fusion and the optimal-fusion approaches were compared to state-of-the-art multimodal dimensionality reduction methods and the comparison showed performance improvement in visualization, classification and clustering tasks. The proposed approaches were also evaluated for different types of problems and data, in two diverse application fields: a visual-accessibility-enhanced search engine and a visualization tool for mobile network security data. The results verified their applicability in different domains and suggested promising directions for future advancements.
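The fusion branch described above can be pictured with the following sketch: per-modality kNN affinity graphs are combined as a weighted sum and embedded with Laplacian eigenmaps. The weights here are fixed by hand, whereas the thesis computes them by minimising an inconsistency measure of the fused affinity matrix; the feature matrices are synthetic placeholders.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.linalg import eigh

def fused_embedding(modalities, weights, k=10, dim=2):
    """Laplacian-eigenmaps embedding of a weighted sum of per-modality
    kNN affinity graphs (fixed weights; the thesis instead chooses them
    by minimising an inconsistency notion on the fused matrix).
    """
    n = modalities[0].shape[0]
    W = np.zeros((n, n))
    for X, w in zip(modalities, weights):
        A = kneighbors_graph(X, k, mode="connectivity").toarray()
        W += w * np.maximum(A, A.T)          # symmetrise each graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                # unnormalised graph Laplacian
    # the smallest non-trivial generalized eigenvectors give the embedding
    _, vecs = eigh(L, D, subset_by_index=[1, dim])
    return vecs

rng = np.random.default_rng(0)
text_feats = rng.normal(size=(200, 50))      # e.g. a text modality
visual_feats = rng.normal(size=(200, 128))   # e.g. a visual modality
Y = fused_embedding([text_feats, visual_feats], weights=[0.5, 0.5])
print(Y.shape)  # (200, 2)
```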
APA, Harvard, Vancouver, ISO, etc. styles
42

Le, Moan Steven. "Dimensionality reduction and saliency for spectral image visualization". Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00825495.

Full source text
Abstract:
Nowadays, digital imaging is mostly based on the paradigm that a combination of a small number of so-called primary colors is sufficient to represent any visible color. For instance, most cameras use pixels with three dimensions: Red, Green and Blue (RGB). Such low-dimensional technology suffers from several limitations, such as a sensitivity to metamerism and a bounded range of wavelengths. Spectral imaging technologies offer the possibility to overcome these downsides by dealing more finely with the electromagnetic spectrum. Multi-, hyper- or ultra-spectral images contain a large number of channels, each depicting a specific range of wavelengths, thus allowing better recovery of either the radiance or the reflectance of the scene. Nevertheless, these large amounts of data require dedicated methods to be properly handled in a variety of applications. This work contributes to defining what useful information must be retained for visualization on a low-dimensional display device. In this context, subjective notions such as appeal and naturalness are to be taken into account, together with objective measures of informative content and dependency. In particular, a novel band selection strategy based on measures derived from Shannon's entropy is presented, and the concept of spectral saliency is introduced.
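A bare-bones version of entropy-driven band selection looks as follows: rank the bands of a spectral cube by the Shannon entropy of their intensity histograms and map the three highest-scoring bands to R, G and B. This sketch ignores the inter-band dependency and saliency aspects that the thesis takes into account.

```python
import numpy as np

def band_entropy(band, bins=64):
    """Shannon entropy (bits) of a single band's intensity histogram."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_rgb_bands(cube):
    """Pick the three most informative bands of an (H, W, B) spectral cube.

    A sketch of entropy-based band selection; the thesis's strategy
    also accounts for redundancy between the selected bands, which
    this simple top-3 ranking ignores.
    """
    scores = [band_entropy(cube[:, :, b]) for b in range(cube.shape[2])]
    top = sorted(np.argsort(scores)[::-1][:3])
    rgb = cube[:, :, top].astype(float)
    rgb -= rgb.min(axis=(0, 1))
    rgb /= rgb.max(axis=(0, 1))           # normalise each channel to [0, 1]
    return rgb, top

cube = np.random.rand(64, 64, 31) ** np.linspace(1, 4, 31)  # toy 31-band image
rgb, bands = select_rgb_bands(cube)
print("selected bands:", bands, "RGB image shape:", rgb.shape)
```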
APA, Harvard, Vancouver, ISO, etc. styles
43

Jumah, Bander K. "Dimensionality-reduced estimation of primaries by sparse inversion". Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/40723.

Full source text
Abstract:
Data-driven methods, such as the estimation of primaries by sparse inversion, suffer from the 'curse of dimensionality', which leads to disproportionate growth in computational and storage demands when moving to realistic 3D field data. To remove this fundamental impediment, we propose a dimensionality-reduction technique where the 'data matrix' is approximated adaptively by a randomized low-rank factorization. Compared to conventional methods, which need passes through all the data, possibly including on-the-fly interpolations, for each iteration, our approach has the advantage that the number of passes is reduced to between one and three. In addition, the low-rank matrix factorization leads to considerable reductions in the storage and computational costs of the matrix multiplies required by the sparse inversion. Application of the proposed formalism to synthetic and real data shows that significant performance improvements in speed and memory use are achievable at the low computational overhead required by the low-rank factorization.
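The few-pass property claimed above is characteristic of randomized low-rank factorization. Below is a hedged sketch in the style of Halko et al.: one pass samples the range of the data matrix through a random test matrix, and a second pass projects onto it. The seismic workflow itself is not reproduced; the toy matrix and rank are assumptions.

```python
import numpy as np

def randomized_lowrank(A, rank, oversample=10, seed=0):
    """Randomized low-rank factorization A ~= Q @ B (Halko et al. style).

    One multiplication with a random test matrix captures the range of
    A, so the factorization needs only a couple of passes over the
    data -- the property exploited to avoid repeated passes through a
    huge 'data matrix'.
    """
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ omega)    # pass 1: sample the range of A
    B = Q.T @ A                       # pass 2: project A onto that range
    return Q, B

# toy check on a matrix with rapidly decaying spectrum
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))
Q, B = randomized_lowrank(A, rank=40)
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
print(f"relative factorization error: {err:.2e}")  # ~machine precision
```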
APA, Harvard, Vancouver, ISO, etc. styles
44

Kuksin, Nikita Sergei. "General equilibrium : dynamics and dimensionality of an economy". Thesis, Heriot-Watt University, 2007. http://hdl.handle.net/10399/2075.

Full source text
Abstract:
Traditional work on economic dynamics (such as growth theory and real business cycles) postulates as a starting point the existence of a set of phase variables, whose values fully characterise the economy at any point in time. Despite relying on a general equilibrium framework, such approaches do not justify this assumption in terms of the underlying theory, thereby failing to link economic dynamics with fundamental static principles. This thesis aims to suggest a remedy by introducing dynamics explicitly into Debreu's essentially static framework. This situation can be modelled in a certain well-defined sense. The suggested approach is novel to the economics literature, yet it preserves the fundamental notion of excess demand functions as the driving force behind trade, consumption and production processes. The formulated model yields a system of partial differential equations. For our purposes the most important aspect of this system is that, despite its infinite-dimensional phase space, we can show that conditions imposed by the economic nature of the underlying problem imply the existence of a finite-dimensional global attractor. In turn, the essential property of a finite-dimensional global attractor is the fact that it can be parameterised using a finite number of variables. These need not have been explicitly present in the original equations, and therefore are not directly related to goods produced, consumed, and traded. In other words, it is shown that the operation of free markets as postulated by Debreu implies the existence of a finite number of phase coordinates that characterise the economy at any point in time, as postulated by existing work on economic growth, business cycles, learning, etc.
APA, Harvard, Vancouver, ISO, etc. styles
45

Bitzer, Sebastian. "Nonlinear dimensionality reduction for motion synthesis and control". Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/4869.

Full source text
Abstract:
Synthesising motion of human character animations or humanoid robots is vastly complicated by the large number of degrees of freedom in their kinematics. Control spaces become so large that automated methods designed to adaptively generate movements become computationally infeasible or fail to find acceptable solutions. In this thesis we investigate how demonstrations of previously successful movements can be used to inform the production of new movements that are adapted to new situations. In particular, we evaluate the use of nonlinear dimensionality reduction techniques to find compact representations of demonstrations, and investigate how these can simplify the synthesis of new movements. Our focus lies on the Gaussian Process Latent Variable Model (GPLVM), because it has proven to capture the nonlinearities present in the kinematics of robots and humans. We present an in-depth analysis of the underlying theory which results in an alternative approach to initialise the GPLVM based on Multidimensional Scaling. We show that the new initialisation is better suited than PCA for nonlinear, synthetic data, but have to note that its advantage shrinks on motion data. Subsequently we show that the incorporation of additional structure constraints leads to low-dimensional representations which are sufficiently regular so that, once learned, dynamic movement primitives can be adapted to new situations without the need for relearning. Finally, we demonstrate in a number of experiments in which movements are generated for bimanual reaching that, through the use of nonlinear dimensionality reduction, reinforcement learning can be scaled up to optimise humanoid movements.
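The initialisation ingredient mentioned above, Multidimensional Scaling, can be sketched in a few lines of Python. This is classical MDS on synthetic pose data; the thesis builds its GPLVM initialisation on this idea, but its exact variant is not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def classical_mds(X, dim=2):
    """Classical MDS: embed points so Euclidean distances are preserved.

    A sketch of the kind of initialisation the thesis proposes for the
    GPLVM latent space in place of PCA; the thesis's construction is
    built on this idea but differs in detail.
    """
    D2 = squareform(pdist(X)) ** 2
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                  # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dim]
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# latent initialisation for, e.g., joint-angle data of shape (frames, dofs)
poses = np.random.default_rng(0).normal(size=(100, 30))
latent_init = classical_mds(poses, dim=3)
print(latent_init.shape)  # (100, 3)
```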
APA, Harvard, Vancouver, ISO, etc. styles
46

Ross, Ian. "Nonlinear dimensionality reduction methods in climate data analysis". Thesis, University of Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492479.

Full source text
Abstract:
Linear dimensionality reduction techniques, notably principal component analysis, are widely used in climate data analysis as a means to aid in the interpretation of datasets of high dimensionality. These linear methods may not be appropriate for the analysis of data arising from nonlinear processes occurring in the climate system. Numerous techniques for nonlinear dimensionality reduction have been developed recently that may provide a potentially useful tool for the identification of low-dimensional manifolds in climate data sets arising from nonlinear dynamics. In this thesis I apply three such techniques to the study of El Niño/Southern Oscillation variability in tropical Pacific sea surface temperatures and thermocline depth, comparing observational data with simulations from coupled atmosphere-ocean general circulation models from the CMIP3 multi-model ensemble.
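As one concrete example of the kind of technique meant here, the sketch below applies Isomap to a synthetic sea-surface-temperature anomaly field with an ENSO-like oscillatory mode; the data, its dimensions and the choice of Isomap are illustrative assumptions, not the thesis's setup.

```python
import numpy as np
from sklearn.manifold import Isomap

# toy stand-in for tropical Pacific SST anomalies:
# (months, gridpoints), dominated by an oscillatory ENSO-like mode
rng = np.random.default_rng(0)
months, gridpoints = 480, 600
t = np.arange(months)
enso = np.sin(2 * np.pi * t / 48.0)            # ~4-year cycle
pattern = rng.normal(size=gridpoints)           # fixed spatial pattern
sst = np.outer(enso, pattern) + 0.3 * rng.normal(size=(months, gridpoints))

# nonlinear dimensionality reduction of the anomaly field; the thesis
# compares several such methods, of which Isomap is one example
embedding = Isomap(n_neighbors=20, n_components=2).fit_transform(sst)
print(embedding.shape)  # (480, 2): a low-dimensional trajectory of the mode
```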
APA, Harvard, Vancouver, ISO, etc. styles
47

Winiger, Joakim. "Estimating the intrinsic dimensionality of high dimensional data". Thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-163170.

Full source text
Abstract:
This report presents a review of some methods for estimating what is known as intrinsic dimensionality (ID). The principle behind intrinsic dimensionality estimation is that it is frequently possible to find some structure in data which makes it possible to re-express it using a smaller number of coordinates (dimensions). The main objective of the report is to solve a common problem: given a (typically high-dimensional) dataset, determine whether some of the dimensions are redundant and, if so, find a lower-dimensional representation of it. We introduce different approaches for ID estimation, motivate them theoretically and compare them using both synthetic and real datasets. The first three methods estimate the ID of a dataset, while the fourth finds a low-dimensional version of the data. This is a useful order in which to organize the task: given an estimate of the ID of a dataset, one can construct a simpler version of the dataset using this number of dimensions. The results show that it is possible to obtain a remarkable reduction in the dimensionality of high-dimensional data. The different methods give similar results despite their different theoretical backgrounds, and behave as expected when used on synthetic datasets with known ID.
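One concrete estimator of the kind such a review covers is the Levina-Bickel maximum-likelihood ID estimator, sketched below. Whether it is among the report's chosen methods is not stated in the abstract, so treat this purely as an illustration of ID estimation from k-nearest-neighbour distances.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def intrinsic_dim_mle(X, k=10):
    """Levina-Bickel maximum-likelihood intrinsic dimension estimate.

    For each point, the log-ratios of its k-NN distances yield a local
    dimension estimate, which is averaged over the dataset.
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)
    dist = dist[:, 1:]                    # drop the zero self-distance
    local = (k - 1) / np.sum(np.log(dist[:, -1:] / dist[:, :-1]), axis=1)
    return local.mean()

# sanity check: a 3-D manifold nonlinearly embedded in 20-D ambient space
rng = np.random.default_rng(0)
latent = rng.normal(size=(2000, 3))
X = np.tanh(latent @ rng.normal(size=(3, 20)))
print(f"estimated ID: {intrinsic_dim_mle(X):.2f}")  # close to 3
```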
APA, Harvard, Vancouver, ISO, etc. styles
48

Bourrier, Anthony. "Compressed sensing and dimensionality reduction for unsupervised learning". PhD thesis, Université Rennes 1, 2014. http://tel.archives-ouvertes.fr/tel-01023030.

Full source text
Abstract:
This thesis is motivated by the prospect of bringing signal processing and statistical learning closer together, and more particularly by the use of compressed sensing techniques to reduce the cost of learning tasks. After recalling the basics of compressed sensing and mentioning a few data analysis techniques that rely on similar ideas, we propose a framework for estimating the parameters of probability density mixtures in which the training data are compressed into a fixed-size representation. We instantiate this framework on a mixture model of isotropic Gaussians. This proof of concept suggests the existence of theoretical guarantees for reconstructing a signal under models that go beyond the usual sparse-vector model. In a second step, we therefore study the generalisation of stability results for linear inverse problems to fully general signal models. We propose conditions under which reconstruction guarantees can be given in a general setting. Finally, we consider an approximate nearest-neighbour search problem in which signatures of the vectors are computed in order to reduce complexity. In the setting where the distance of interest derives from a Mercer kernel, we propose combining an explicit embedding of the data with a subsequent signature computation, which notably leads to more precise approximate search.
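The fixed-size compressed representation can be illustrated by sketching a dataset down to its empirical characteristic function sampled at random frequencies; the parameter-recovery stage, the heart of the method, is omitted here, and all sizes below are arbitrary choices for illustration.

```python
import numpy as np

def sketch_dataset(X, n_freqs=256, scale=1.0, seed=0):
    """Fixed-size sketch of a dataset: the empirical characteristic
    function sampled at random frequencies.

    A sketch (in both senses) of the compressive-learning idea: the
    whole training set is compressed into `n_freqs` complex numbers,
    from which mixture parameters would be estimated in a second
    stage (not implemented here).
    """
    rng = np.random.default_rng(seed)
    omega = rng.normal(scale=scale, size=(n_freqs, X.shape[1]))
    return np.exp(1j * X @ omega.T).mean(axis=0)   # shape: (n_freqs,)

# two datasets drawn from the same isotropic Gaussian mixture produce
# nearly identical sketches, so the sketch retains the mixture's identity
rng = np.random.default_rng(1)
means = np.array([[-3.0, 0.0], [3.0, 0.0]])
def draw(n):
    return means[rng.integers(0, 2, n)] + rng.normal(size=(n, 2))
s1, s2 = sketch_dataset(draw(5000)), sketch_dataset(draw(5000))
print(f"sketch size: {s1.size}, relative difference: "
      f"{np.linalg.norm(s1 - s2) / np.linalg.norm(s1):.3f}")
```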
APA, Harvard, Vancouver, ISO, etc. styles
49

Shekhar, Karthik. "Dimensionality reduction in immunology : from viruses to cells". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/98339.

Full source text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, February 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 301-318).
Developing successful prophylactic and therapeutic strategies against infections of RNA viruses like HIV requires a combined understanding of the evolutionary constraints of the virus, as well as of the immunologic determinants associated with effective viremic control. Recent technologies enable viral and immune parameters to be measured at an unprecedented scale and resolution across multiple patients, and the resulting data could be harnessed towards these goals. Such datasets typically involve a large number of parameters; the goal of analysis is to infer underlying biological relationships that connect these parameters by examining the data. This dissertation combines principles and techniques from the physical and the computational sciences to "reduce the dimensionality" of such data in order to reveal novel biological relationships of relevance to vaccination and therapeutic strategies. Much of our work is concerned with HIV. 1. How can collective evolutionary constraints be inferred from viral sequences derived from infected patients? Using principles of Random Matrix Theory, we derive a low dimensional representation of HIV proteins based on circulating sequence data and identify independent groups of residues within viral proteins that are coordinately linked. One such group of residues within the polyprotein Gag exhibits statistical signatures indicative of strong constraints that limit the viability of a higher proportion of strains bearing multiple mutations in this group. We validate these predictions from independent experimental data, and based on our results, propose candidate immunogens for the Caucasian American population that target these vulnerabilities. 2. To what extent do mutational patterns observed in circulating viral strains accurately reflect intrinsic fitness constraints of viral proteins? Each strain is the result of evolution against an immune background, which is highly diverse across patients. Spin models constructed to reproduce the prevalence of sequences have tested positively against intrinsic fitness assays (where immune selection is absent). Why "prevalence" should correlate with "replicative fitness" in the case of such complex evolutionary dynamics is conceptually puzzling. We combine computer simulations and analytical theory to show that the prevalence can correctly reflect the fitness rank order of mutant viral strains that are proximal in sequence space. Our analysis suggests that incorporating a "phylogenetic correction" in the parameters might improve the predictive power of these models. 3. Can cellular phenotypes be discovered in an unbiased way from high dimensional protein expression data in single cells? Mass cytometry, where > 40 protein parameters can be quantitated in single cells, affords a route, but analyzing such high dimensional data can be challenging. Traditional "gating approaches" are unscalable, and computational methods that account for multivariate relationships among different proteins are needed. High-dimensional clustering and principal component analysis, two approaches that have been explored so far, suffer from important limitations. We propose a computational tool rooted in nonlinear dimensionality reduction which overcomes these limitations, and automatically identifies phenotypes based on a two-dimensional distillation of the cellular data; the latter feature facilitates unbiased visualization of high dimensional relationships.
Our tool reveals a previously unappreciated phenotypic complexity within murine CD8+ T cells, and identifies a novel phenotype that is conflated by traditional approaches. 4. Antigen-specific immune cells that mediate efficacious antiviral responses in infections like HIV involve complex phenotypes and typically constitute a small fraction of the population. In such circumstances, seeking correlative features in bulk expression levels of key proteins can be misleading. Using the approach introduced in 3., we analyze multiparameter flow cytometry data of CD4+ T-cell samples from 20 patients representing diverse clinical groups, and identify cellular phenotypes whose proportion in patients is strongly correlated with quantitative clinical parameters. Many of these correlations are inconsistent with bulk signals. Furthermore, a number of correlative phenotypes are characterized by the expression of multiple proteins at individually modest levels; such subsets are likely to be missed by conventional gating strategies. Using the in-patient proportions of different phenotypes as predictors, a cross-validated, sparse linear regression model explains 87% of the variance in the viral load across the twenty patients. Our approach is scalable to datasets involving dozens of parameters.
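A minimal stand-in for the pipeline of point 3, two-dimensional distillation of >40-parameter single-cell data followed by unbiased phenotype discovery, might look as follows. t-SNE and DBSCAN are substitutes chosen for illustration, not the dissertation's tool, and the "cells" are synthetic.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

# toy stand-in for mass-cytometry data: 3 cell phenotypes in 40 markers
rng = np.random.default_rng(0)
centers = rng.normal(scale=3.0, size=(3, 40))
cells = np.vstack([c + rng.normal(size=(300, 40)) for c in centers])

# two-dimensional distillation of the 40-parameter data, then
# phenotype discovery in the reduced space (the dissertation's tool
# follows this pattern; the exact algorithms differ)
embedding = TSNE(n_components=2, random_state=0).fit_transform(cells)
labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(embedding)
print("phenotypes found:", len(set(labels) - {-1}))  # expect ~3
```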
by Karthik Shekhar.
Ph. D.
APA, Harvard, Vancouver, ISO, etc. styles
50

Payne, Terry R. "Dimensionality reduction and representation for nearest neighbour learning". Thesis, University of Aberdeen, 1999. https://eprints.soton.ac.uk/257788/.

Full source text
Abstract:
An increasing number of intelligent information agents employ Nearest Neighbour learning algorithms to provide personalised assistance to the user. This assistance may be in the form of recognising or locating documents that the user might find relevant or interesting. To achieve this, documents must be mapped into a representation that can be presented to the learning algorithm. Simple heuristic techniques are generally used to identify relevant terms from the documents. These terms are then used to construct large, sparse training vectors. The work presented here investigates an alternative representation based on sets of terms, called set-valued attributes, and proposes a new family of Nearest Neighbour learning algorithms that utilise this set-based representation. The importance of discarding irrelevant terms from the documents is then addressed, and this is generalised to examine the behaviour of the Nearest Neighbour learning algorithm with high dimensional data sets containing such values. A variety of selection techniques used by other machine learning and information retrieval systems are presented, and empirically evaluated within the context of a Nearest Neighbour framework. The thesis concludes with a discussion of ways in which attribute selection and dimensionality reduction techniques may be used to improve the selection of relevant attributes, and thus increase the reliability and predictive accuracy of the Nearest Neighbour learning algorithm.
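The set-valued representation lends itself to a very small sketch: documents as term sets, compared with Jaccard distance inside a k-nearest-neighbour vote. This minimal version, with made-up toy documents, omits the attribute weighting and selection that the thesis investigates.

```python
def jaccard_distance(a: set, b: set) -> float:
    """1 minus |a & b| / |a | b|: a distance between set-valued attributes."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def knn_predict(train, query_terms, k=3):
    """Nearest-neighbour vote over documents represented as term sets.

    A sketch of set-valued NN learning in the spirit of the thesis;
    the thesis's family of algorithms also weights and selects
    attributes, which this minimal version omits.
    """
    ranked = sorted(train, key=lambda doc: jaccard_distance(doc[0], query_terms))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

train = [
    ({"neural", "network", "training"}, "ml"),
    ({"gradient", "descent", "loss"}, "ml"),
    ({"parliament", "election", "vote"}, "politics"),
    ({"policy", "minister", "election"}, "politics"),
]
print(knn_predict(train, {"loss", "training", "network"}))  # -> "ml"
```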
APA, Harvard, Vancouver, ISO, etc. styles
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography