Dissertations / Theses on the topic 'Directional data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Directional data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

DEMNI, Houyem. "Depth-based classification approaches for directional data." Doctoral thesis, Università degli studi di Cassino, 2021. http://hdl.handle.net/11580/83781.

Full text
Abstract:
Supervised learning tasks aim to define a data-based rule by which new objects are assigned to one of the given classes. To this end, a training set containing objects with known memberships is exploited. Directional data are points lying on the surface of circles, spheres or hyper-spheres. Given that they lie on a non-linear manifold, directional observations require specific methods to be analyzed. In this thesis, the main interest is to present novel methodologies and to perform reliable inferences for directional data within the framework of supervised classification. First, a supervised classification procedure for directional data is introduced. The procedure is based on the cumulative distribution of the cosine depth, a directional distance-based depth function. The proposed method is compared with the max-depth classifier, a well-known depth-based classifier in the literature, through simulations and a real data example. Second, we study the optimality of the depth distribution and max-depth classifiers from a theoretical perspective. More specifically, we investigate the necessary conditions under which the classifiers are optimal in the sense of the optimal Bayes rule. Then, we study the robustness of some directional depth-based classifiers in the presence of contaminated data. The performance of the depth distribution classifier, the max-depth classifier and the DD-classifier is evaluated by means of simulations in the presence of both class and attribute noise. Finally, the last part of the thesis is devoted to evaluating the performance of depth-based classifiers on a real directional data set.
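To make the depth idea concrete: assuming the cosine depth of a unit vector x with respect to a distribution F is the distance-based quantity D(x; F) = 2 − E_F[1 − ⟨x, X⟩], a minimal sketch of the depth computation and the max-depth rule follows. The names and toy data are illustrative, not from the thesis, whose depth distribution classifier uses the cumulative distribution of this depth rather than the raw max-depth rule shown here.

```python
import numpy as np

def cosine_depth(x, sample):
    """Cosine-distance depth of unit vector x w.r.t. a sample of unit vectors.

    Uses D(x; F) = 2 - E_F[1 - <x, X>], a distance-based directional depth;
    larger values mean x is more central to the sample.
    """
    x = x / np.linalg.norm(x)
    return 2.0 - np.mean(1.0 - sample @ x)

def max_depth_classify(x, class_samples):
    """Assign x to the class in which it is deepest (max-depth rule)."""
    depths = [cosine_depth(x, s) for s in class_samples]
    return int(np.argmax(depths))

# Toy example: two noisy clouds of directions on the circle S^1.
rng = np.random.default_rng(0)
a = rng.normal([1, 0], 0.2, (100, 2)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal([0, 1], 0.2, (100, 2)); b /= np.linalg.norm(b, axis=1, keepdims=True)
print(max_depth_classify(np.array([0.9, 0.1]), [a, b]))  # -> 0
```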
APA, Harvard, Vancouver, ISO, and other styles
2

Correia, Arthur Endlein. "Methods and applications for geological directional data analysis." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/44/44141/tde-24082017-080342/.

Full text
Abstract:
OpenStereo was originally developed as free, open-source, cross-platform software to fill a gap among software packages for structural geology analysis. Over the years it has acquired a great number of users, with regular citations. This work aimed to restructure OpenStereo as a whole, porting it to a new graphical interface framework and rebuilding it from the ground up for speed, stability, and ease of maintenance and extension. Many new functionalities were also included, such as project management, conversion of structural attitude notations, small-circle fitting, extraction of attitudes from three-dimensional models, and conversion of line shapefiles to circular data. The research had two main byproducts: a new graphical method for small-circle data fitting and Auttitude, a directional data analysis library.
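As a rough illustration of what small-circle fitting involves (the thesis proposes a graphical method; the least-squares formulation and names below are generic assumptions, not OpenStereo's implementation):

```python
import numpy as np
from scipy.optimize import minimize

def fit_small_circle(X):
    """Least-squares small-circle fit to unit vectors X (n x 3): find an axis n
    and half-apical angle alpha minimising sum_i (angle(x_i, n) - alpha)^2.
    Multi-start Nelder-Mead over (theta, phi, alpha); a generic fit only.
    """
    def residuals(params):
        th, ph, alpha = params
        n = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
        ang = np.arccos(np.clip(X @ n, -1.0, 1.0))   # angular distances to axis
        return np.sum((ang - alpha) ** 2)

    starts = [(t, p, a) for t in (0.5, 1.5) for p in (0.0, 2.0) for a in (0.5, 1.0)]
    best = min((minimize(residuals, s, method='Nelder-Mead') for s in starts),
               key=lambda r: r.fun)
    th, ph, alpha = best.x
    axis = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
    return axis, alpha
```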
APA, Harvard, Vancouver, ISO, and other styles
3

Kittivoravitkul, Sasivimol. "A bi-directional transformation approach for semistructured data integration." Thesis, Imperial College London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.444093.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

黎文傑 and Man-kit Lai. "Some results on the statistical analysis of directional data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31211550.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lai, Man-kit. "Some results on the statistical analysis of directional data /." [Hong Kong : University of Hong Kong], 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13787950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Shaosong. "Analysis of WACSIS data using a directional hybrid wave model." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4744.

Full text
Abstract:
This study focuses on the analysis of measured directional seas using a nonlinear model, the Directional Hybrid Wave Model (DHWM). The model can decompose a directional wave field into its free-wave components, each with its own frequency, amplitude, direction and initial phase, based on three or more time series of measured wave properties. With the free-wave information, the DHWM can predict wave properties accurately up to second order in wave steepness. In this study, the DHWM is applied to the analysis of data from the Wave Crest Sensor Inter-comparison Study (WACSIS). The consistency between the measurements collected by different sensors in the WACSIS project was examined to ensure data quality. The wave characteristics at the locations of selected sensors were predicted in the time domain and compared with those recorded at the same locations; the degree of agreement between the predictions and the related measurements is an indicator of the consistency among different sensors. To analyze directional seas in the presence of strong current, the original DHWM was extended to consider the Doppler effects of steady, uniform currents on the directional wave field. The advantage of the extended DHWM originates from the use of the intrinsic frequency, instead of the apparent frequency, to determine the corresponding wavenumber and the transfer functions relating wave pressure and velocities to elevation. Furthermore, a new approach is proposed to render accurate and consistent estimates of the energy-spreading parameter and mean wave direction of directional seas based on a cosine-2s model. This approach employs a Maximum Likelihood Method (MLM). Because the MLM is more tolerant of errors in the estimated cross spectrum than the Directional Fourier Transform (DFT) used in the conventional approach, the proposed approach estimates the directional spreading parameters more accurately and consistently, which is confirmed by applying the proposed and conventional approaches, respectively, to time series generated by numerical simulation and recorded during the WACSIS project.
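For reference, the cosine-2s spreading model mentioned above has a standard closed form (Longuet-Higgins); a short sketch, with names chosen here for illustration:

```python
import numpy as np
from scipy.special import gamma

def cosine_2s_spreading(theta, theta_mean, s):
    """Longuet-Higgins cosine-2s directional spreading function:

    D(theta) = 2^(2s-1)/pi * Gamma(s+1)^2 / Gamma(2s+1) * cos^(2s)((theta - theta_mean)/2),

    which integrates to 1 over (-pi, pi]; s controls the spreading width.
    """
    norm = (2 ** (2 * s - 1) / np.pi) * gamma(s + 1) ** 2 / gamma(2 * s + 1)
    return norm * np.cos((theta - theta_mean) / 2.0) ** (2 * s)

theta = np.linspace(-np.pi, np.pi, 361)
D = cosine_2s_spreading(theta, 0.0, s=10)
print(np.trapz(D, theta))  # ~1.0: the function is a proper density
```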
APA, Harvard, Vancouver, ISO, and other styles
7

Tao, Ran. "Using directional change for information extraction in financial market data." Thesis, University of Essex, 2018. http://repository.essex.ac.uk/23341/.

Full text
Abstract:
Directional change (DC) is a new concept for summarizing market dynamics. Instead of sampling the financial market at fixed intervals, as in traditional time series analysis, DC is data-driven: the price change itself dictates when a price is recorded. DC thus provides a complementary way to extract information from data. Sampling at irregular, event-driven intervals allows us to observe features that may not be recognized under a time series representation. In this thesis we propose a new method for summarizing financial markets through the DC framework. First, we define the vocabulary needed for a DC market summary, which includes DC indicators and metrics. DC indicators are used to build a DC market summary for a single market; DC metrics help us quantitatively measure the differences between two markets under the directional change method. We demonstrate how such metrics can quantitatively measure the differences between different DC market summaries. Then, with real financial market data studied using DC, we demonstrate the practicability of DC market analysis, as a method complementary to time series analysis, in the study of financial markets.
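The DC sampling rule itself is compact enough to sketch: a trend is considered reversed once the price has moved a threshold fraction θ away from its running extreme. The function below is a generic illustration of that event-detection logic, not the thesis's code:

```python
def directional_changes(prices, theta):
    """Detect directional-change (DC) events: a trend reverses once the price
    moves by at least a fraction theta from its last extreme.

    Returns a list of (index, 'up' or 'down') confirmation points.
    """
    events = []
    ext = prices[0]          # running extreme of the current trend
    mode = 'down'            # assume a downtrend until the first upturn
    for i, p in enumerate(prices):
        if mode == 'down':
            if p < ext:
                ext = p                       # new low extends the downtrend
            elif p >= ext * (1 + theta):
                events.append((i, 'up'))      # upturn confirmed
                mode, ext = 'up', p
        else:
            if p > ext:
                ext = p                       # new high extends the uptrend
            elif p <= ext * (1 - theta):
                events.append((i, 'down'))    # downturn confirmed
                mode, ext = 'down', p
    return events

print(directional_changes([100, 99, 98, 100.5, 102, 99.9], theta=0.02))
# -> [(3, 'up'), (5, 'down')]
```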
APA, Harvard, Vancouver, ISO, and other styles
8

Ramler, Ivan Peter. "Improved statistical methods for k-means clustering of noisy and directional data." [Ames, Iowa : Iowa State University], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bhattacharya, Sumit. "A Real-Time Bi-Directional Global Positioning System Data Link Over Internet Protocol." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1121355433.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Stylianidis, Matthaios. "Instability of a bi-directional TiFGAN in unsupervised speech representation learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302026.

Full text
Abstract:
A major challenge in the application of machine learning in the speech domain is the unavailability of annotated data. Supervised machine learning techniques are highly dependent on the amount of labelled data and the quality of the labels. Unsupervised training methods, on the other hand, do not require labels and hence allow the use of much larger unlabelled datasets. In this thesis we investigate the use of an unsupervised training method for learning representations of speech data. More specifically, we extend an existing Wasserstein Generative Adversarial Network (WGAN) architecture called the Time-Frequency GAN (TiFGAN), originally designed for unconditional speech generation, into a bi-directional architecture capable of learning representations. We investigate the ability of our proposed bi-directional architecture (BiTiFGAN) to learn speech representations by evaluating them on the supervised task of keyword detection using the Speech Commands dataset. We observe that the training of our model is characterized by instability, and in an attempt to stabilize training we try several different configurations of our architecture and training parameters. Mode collapse in the encoder is a common problem across our experiments, decreasing the performance obtained with the learned representations and making training unstable. Nonetheless, by increasing the capacity of our BiTiFGAN discriminator we successfully learn representations that are competitive with baseline representations such as Mel-frequency cepstral coefficients (MFCC) and filter-bank energy (FBANK) features.
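A hedged sketch of the bi-directional pairing presumably involved in such an extension (BiGAN/ALI-style, with a Wasserstein-type objective): the discriminator judges joint pairs (x, E(x)) against (G(z), z). The module and function names are illustrative; the thesis's actual TiFGAN extension will differ in detail.

```python
import torch
import torch.nn as nn

class JointDiscriminator(nn.Module):
    """Critic over joint (data, code) pairs, as in BiGAN/ALI."""
    def __init__(self, x_dim, z_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1))

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

def wgan_bidirectional_losses(D, G, E, x_real, z_prior):
    """Wasserstein-style critic and encoder/generator losses on joint pairs.

    Gradient bookkeeping (detaching, clipping or gradient penalty) is omitted
    for brevity; this only shows the pairing that makes the GAN bi-directional.
    """
    d_real = D(x_real, E(x_real))               # real data with its inferred code
    d_fake = D(G(z_prior), z_prior)             # generated data with its code
    d_loss = -(d_real.mean() - d_fake.mean())   # critic maximises the gap
    eg_loss = d_real.mean() - d_fake.mean()     # E and G jointly minimise it
    return d_loss, eg_loss
```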
APA, Harvard, Vancouver, ISO, and other styles
11

Draycott, Samuel Thomas. "On the re-creation of site-specific directional wave conditions." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/31472.

Full text
Abstract:
Wave tank tests facilitate the understanding of how complex sea conditions influence the dynamics of man-made structures. If a potential deployment location is known, site data can be used to improve the relevance and realism of the test conditions, thus helping de-risk device development. Generally such data is difficult to obtain, and even when available it is used simplistically due to established practices and the limitations of test facilities. In this work, four years of buoy data from the European Marine Energy Centre are characterised and simulated at the FloWave Ocean Energy Research Facility, a circular combined wave-current test tank. Particular emphasis is placed on the characterisation and validation processes, aiming to preserve the spectral and directional complexity of the site whilst proving that the defined representative conditions can be effectively created. When creating representative site-specific sea states, particular focus is given to the application of clustering algorithms, which enable the entire spectral (frequency or directional) form to be considered in the characterisation process, so that the true complex nature of the site enters the data reduction. Prior to generating and measuring the resulting sea states, issues with scaling are explored, the facility itself is characterised, and emphasis is placed on developing measurement strategies for the validation of directional spectra. Wave gauge arrays are designed and used to characterise various aspects of the FloWave tank, including reflections, spatio-temporal variability and wave shape. A new method for directional spectrum reconstruction (SPAIR) is also developed, enabling more effective measurement and validation of the resulting directional sea states. Through comparison with other characterisation methods, the inherent method-induced trade-offs are identified, and it is found that there is no single favourable approach, necessitating an application-specific procedure. Despite this, a useful set of 'generic' sea states is created for the simulation of both production and extreme conditions. For sea state measurement, the SPAIR method proves significantly more effective than current approaches, reducing errors and introducing additional capability. The method is used in combination with a directional wave gauge array to measure, correct, and validate the resulting directional wave conditions. It is also demonstrated that site-specific wave-current scenarios can be effectively re-created, showing that truly complex ocean conditions can be simulated at FloWave. This ability, along with the considered characterisation approach, means that representative site-specific sea states can be simulated with confidence, increasing the realism of the test environment and helping de-risk device development.
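As background for what "re-creating" a sea state means at first order, a minimal sketch of linear synthesis from a one-sided variance spectrum (amplitudes a_i = sqrt(2 S_i Δω), random phases); the spectrum shape and names below are illustrative only, not the facility's generation method:

```python
import numpy as np

def synthesize_wave(S, omega, duration, dt, rng=None):
    """Linear (first-order) synthesis of an irregular wave elevation record
    from a one-sided variance spectrum S(omega): a sum of cosine components
    with amplitudes a_i = sqrt(2 * S_i * d_omega) and uniform random phases.
    """
    rng = rng or np.random.default_rng()
    d_omega = omega[1] - omega[0]
    amps = np.sqrt(2.0 * S * d_omega)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=omega.size)
    t = np.arange(0.0, duration, dt)
    eta = (amps[None, :] * np.cos(t[:, None] * omega[None, :] + phases)).sum(axis=1)
    return t, eta

# Crude Pierson-Moskowitz-like spectral shape, for illustration only.
omega = np.linspace(0.3, 3.0, 200)
S = 0.5 * omega ** -5 * np.exp(-1.25 * (1.0 / omega) ** 4)
t, eta = synthesize_wave(S, omega, duration=600.0, dt=0.5)
```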
APA, Harvard, Vancouver, ISO, and other styles
12

Vural, Serdar. "Information propagation in wireless sensor networks using directional antennas." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1188006033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Liu, Junwei. "Estimating directional migration flows indirectly using age-specific net migration data: A case study of Mexico." Diss., Connect to online resource, 2005. http://wwwlib.umi.com/cr/colorado/fullcit?p1430183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Merrifield, Alistair James. "An Investigation Of Mathematical Models For Animal Group Movement, Using Classical And Statistical Approaches." Thesis, The University of Sydney, 2006. http://hdl.handle.net/2123/1132.

Full text
Abstract:
Collective actions of large animal groups result in elaborate behaviour whose complexity can be breathtaking. Social organisation is the key to the origin of this behaviour, and the mechanisms by which this organisation occurs are of particular interest. In this thesis, these mechanisms of social interaction and their consequences for group-level behaviour are explored. Social interactions amongst individuals are based on simple rules of attraction, alignment and orientation amongst neighbouring individuals. As part of this study, we will be interested in data that take the form of a set of directions in space. In Chapter 2, we discuss the relevant statistical measures and theory that allow us to analyse directional data. These statistical tools are employed on the results of simulations of the mathematical models formulated in the course of the thesis. The first mathematical model for collective group behaviour is a Lagrangian self-organising model, formulated in Chapter 3 and based on basic social interactions between group members; the resulting collective behaviours and other related issues are examined in that chapter. In Chapter 4 we use this model to investigate the guidance of large groups by a select number of individuals who are privy to information regarding the location of a specific goal. This is used to explore a mechanism proposed for honeybee (Apis mellifera) swarm migrations. The spherical theory introduced in Chapter 2 proves particularly useful in analysing the results of the modelling. In Chapter 5, we introduce a second mathematical model for aggregative behaviour. The model uses ideas from electromagnetic forces and particle physics, reinterpreting them in the context of social forces. While attraction and repulsion terms have been included in similar models in past literature, we introduce an orientation force to our model and show the requirement of a dissipative force to prevent individuals from escaping from the confines of the group.
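A minimal sketch of the kind of attraction/alignment/repulsion update rule described above (the zone radii and names are illustrative assumptions, not the thesis's exact model):

```python
import numpy as np

def flock_step(pos, vel, r_rep=1.0, r_align=5.0, speed=1.0, dt=0.1):
    """One update of a minimal zonal rule set: individuals repel very close
    neighbours, align with headings of nearby ones, and are attracted toward
    more distant group members. pos and vel are (n, 2) arrays.
    """
    n = len(pos)
    new_dir = np.empty_like(vel)
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        rep = (d > 0) & (d < r_rep)
        ali = (d >= r_rep) & (d < r_align)
        att = d >= r_align
        if rep.any():                              # repulsion overrides all else
            v = -(pos[rep] - pos[i]).sum(axis=0)
        else:
            v = vel[ali].sum(axis=0) + (pos[att] - pos[i]).sum(axis=0)
        norm = np.linalg.norm(v)
        new_dir[i] = v / norm if norm > 0 else vel[i] / np.linalg.norm(vel[i])
    vel = speed * new_dir
    return pos + vel * dt, vel
```

The resulting headings form exactly the kind of directional data set that the spherical statistics of Chapter 2 are designed to summarise.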
APA, Harvard, Vancouver, ISO, and other styles
15

Merrifield, Alistair James. "An Investigation Of Mathematical Models For Animal Group Movement, Using Classical And Statistical Approaches." University of Sydney, 2006. http://hdl.handle.net/2123/1132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Margaronis, Zannis N. P. "The significance of mapping data sets when considering commodity time series and their use in algorithmically-traded portfolios." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/12575.

Full text
Abstract:
Many econometric analyses of commodity futures over the years have been performed using spot or front-month contract prices. Using such daily prices without considering the associated contract traded volumes is slightly erroneous because, in reality, traders will typically trade the 'most liquid' contract, that is, the contract with the largest average daily volume (ADV), in order to obtain the best price when buying or selling. If this 'true' time series is to be considered, a mapping procedure is required to account for the price jumps, sometimes referred to as basis or roll, that occur when a trader trades out of the expiring contract and enters the new front-month contract. A key finding was that this effect was significant irrespective of the size of each price jump, and also because the roll accumulated over a number of years corresponds to multiple contracts. The mapping procedure has a significant effect on the time series and should hence always be employed if the realistic traded time series is to be considered. Given this phenomenon, analyses of algorithmically-traded commodity futures must employ such mapped time series when creating metrics or conducting econometric analysis. Key findings include the importance of diversification in algorithmically-traded portfolios, utilising the AOM and PSI metrics. The mapping of data sets to create realistic 'live-traded' time series was found to be significant, while the optimal day of rollover prior to contract expiry was found to be related to trading volumes for certain commodities. Other key findings include the causalities and spillovers within the metals sector, where various relationships are evident once the results are processed and analysed, both pre- and post-mapping. Interestingly, key relationships, including bidirectional volatility and shock spillovers between the four key metals, existed when the unmapped data was used; however, many of the feedbacks within these relationships were lost when the mapped data sets were considered. A significant finding was therefore the consistent difference between mapped and unmapped data sets, attributed to the models (whether econometric or algorithmic) being fitted to the roll or basis present in unmapped data and exploiting it when finding relationships between data sets. In the mapped data set (the time series seen by traders) the roll or basis is accounted for, and hence the relationships found stand in real-time trading situations. The differences in the results show how significant the effect of mapping can be, with unmapped data sets displaying results that will not exist in a real-time traded series.
APA, Harvard, Vancouver, ISO, and other styles
17

Bukowski, Edward F., T. Gordon Brown, Tim Brosseau, and Fred J. Brandon. "In-Bore Acceleration Measurements of an Electromagnetic Gun Launcher." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606161.

Full text
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
The US Army Research Laboratory has been involved in the design and implementation of electromagnetic gun technology for the past several years. One of the primary factors of this research is an accurate assessment of in-bore structural loads on the launch projectiles. This assessment is essential for the design of mass-efficient launch packages for electromagnetic guns. If not properly accounted for, projectile failure can result. In order to better understand the magnitude of the in-bore loads, a data-recorder was integrated with an armature and on-board payload that included tri-directional accelerometers and magnetic field sensors. Several packages were launched from an electromagnetic railgun located at Aberdeen Proving Ground, MD. Substantial effort was placed on soft-catching the rounds in order to facilitate data recovery. Analysis of the recovered data provided acceleration and magnetic field data acquired during the launch event.
APA, Harvard, Vancouver, ISO, and other styles
18

Keul, Kevin [Verfasser], Stefan [Akademischer Betreuer] Müller, Stefan [Gutachter] Müller, and Thorsten [Gutachter] Grosch. "The Line Space - a Directional Data Structure for Ray Tracing Acceleration / Kevin Keul ; Gutachter: Stefan Müller, Thorsten Grosch ; Betreuer: Stefan Müller." Koblenz, 2021. http://d-nb.info/1229919589/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Cutting, Christine. "Testing uniformity against rotationally symmetric alternatives on high-dimensional spheres." Doctoral thesis, Universite Libre de Bruxelles, 2020. https://dipot.ulb.ac.be/dspace/bitstream/2013/306900/4/Main.pdf.

Full text
Abstract:
In this thesis we are interested in testing uniformity in high dimensions on the unit sphere $S^{p_n-1}$ (the dimension of the observations, $p_n$, depends on their number, $n$; high-dimensional data are such that $p_n$ diverges to infinity with $n$). We consider first ``monotone'' alternatives whose density increases along an axis ${\pmb \theta}_n\in S^{p_n-1}$ and depends on a concentration parameter $\kappa_n>0$. We start by identifying the rate at which these alternatives are contiguous to uniformity; then we show, thanks to local asymptotic normality results, that the most classical test of uniformity, the Rayleigh test, is not optimal when ${\pmb \theta}_n$ is specified but becomes optimal, for fixed $p$ and in the high-dimensional FvML case, when ${\pmb \theta}_n$ is unspecified. We consider next ``axial'' alternatives, assigning the same probability to antipodal points. They also depend on a location parameter ${\pmb \theta}_n\in S^{p_n-1}$ and a concentration parameter $\kappa_n\in\mathbb{R}$. The contiguity rate proves to be higher in that case and implies that the problem is more difficult than in the monotone case. Indeed, the Bingham test, the classical test when dealing with axial data, is not optimal when $p$ is fixed and ${\pmb \theta}_n$ is not specified, and is blind to the contiguous alternatives in high dimensions. This is why we turn to tests based on the extreme eigenvalues of the covariance matrix and establish their fixed-$p$ asymptotic distributions under contiguous alternatives. Finally, thanks to a martingale central limit theorem, we show that, under some assumptions and after standardisation, the Rayleigh and Bingham test statistics are asymptotically normal under general rotationally symmetric distributions. This enables us to identify the rate at which the Bingham test detects axial alternatives, and also the rate at which it detects monotone alternatives.
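For orientation, the two classical statistics discussed here have simple sample forms with standard fixed-$p$ asymptotics (see, e.g., Mardia and Jupp); a sketch with illustrative names:

```python
import numpy as np
from scipy.stats import chi2

def rayleigh_test(X):
    """Rayleigh test of uniformity on S^{p-1} from an (n, p) array of unit rows.
    R = n * p * ||mean vector||^2 is asymptotically chi^2 with p d.f.
    """
    n, p = X.shape
    R = n * p * np.sum(X.mean(axis=0) ** 2)
    return R, chi2.sf(R, df=p)

def bingham_test(X):
    """Bingham test of uniformity (sensitive to axial alternatives).
    T = n*p*(p+2)/2 * (trace(S^2) - 1/p), with S the scatter matrix,
    is asymptotically chi^2 with (p-1)(p+2)/2 d.f.
    """
    n, p = X.shape
    S = X.T @ X / n
    T = n * p * (p + 2) / 2.0 * (np.trace(S @ S) - 1.0 / p)
    return T, chi2.sf(T, df=(p - 1) * (p + 2) // 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)); X /= np.linalg.norm(X, axis=1, keepdims=True)
print(rayleigh_test(X), bingham_test(X))  # uniform sample: both fail to reject
```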
APA, Harvard, Vancouver, ISO, and other styles
20

Salah, Aghiles. "Von Mises-Fisher based (co-)clustering for high-dimensional sparse data : application to text and collaborative filtering data." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB093/document.

Full text
Abstract:
Cluster analysis, or clustering, which aims to group together similar objects, is undoubtedly a very powerful unsupervised learning technique. With the growing amount of available data, clustering is increasingly gaining in importance in various areas of data science, for purposes such as automatic summarization, dimensionality reduction, visualization, outlier detection, speeding up search engines, and the organization of huge data sets. Existing clustering approaches are, however, severely challenged by the high dimensionality and extreme sparsity of the data sets arising in some current areas of interest, such as Collaborative Filtering (CF) and text mining. Such data often consist of thousands of features with more than 95% zero entries. In addition to being high-dimensional and sparse, the data sets encountered in these domains are also directional in nature. In fact, several previous studies have empirically demonstrated that directional measures, which assess the distance between objects relative to the angle between them, such as the cosine similarity, are substantially superior to measures such as Euclidean distortions for clustering text documents or assessing the similarities between users/items in CF. This suggests that in such contexts only the direction of a data vector (e.g., a text document) is relevant, not its magnitude. It is worth noting that the cosine similarity is exactly the scalar product between unit-length data vectors, i.e., L2-normalized vectors. Thus, from a probabilistic perspective, using the cosine similarity is equivalent to assuming that the data are directional data distributed on the surface of a unit hypersphere. Despite the substantial empirical evidence that certain high-dimensional sparse data sets, such as those encountered in the above domains, are better modeled as directional data, most existing models in text mining and CF are based on popular assumptions such as Gaussian, Multinomial or Bernoulli distributions, which are inadequate for L2-normalized data. In this thesis, we focus on the two challenging tasks of text document clustering and item recommendation, which are still attracting a lot of attention in the domains of text mining and CF, respectively. In order to address the above limitations, we propose a suite of new models and algorithms that rely on the von Mises-Fisher (vMF) assumption, which arises naturally for directional data lying on a unit hypersphere.
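A concrete anchor for the vMF connection: spherical k-means, which is the hard-assignment limit of a vMF mixture with shared concentration, clusters L2-normalized vectors by cosine similarity. A minimal sketch (illustrative, not the thesis's algorithms):

```python
import numpy as np

def spherical_kmeans(X, k, iters=50, rng=None):
    """Spherical k-means: cosine-similarity clustering of L2-normalised rows.

    Assigning each row to the centroid with the largest dot product and
    re-normalising the cluster means is the hard-assignment (large-kappa)
    limit of EM for a von Mises-Fisher mixture with a shared concentration.
    """
    rng = rng or np.random.default_rng()
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = X[rng.choice(len(X), k, replace=False)]        # initial centroids
    for _ in range(iters):
        labels = np.argmax(X @ C.T, axis=1)            # nearest by cosine
        for j in range(k):
            m = X[labels == j].sum(axis=0)
            if np.linalg.norm(m) > 0:
                C[j] = m / np.linalg.norm(m)           # re-normalised mean
    return labels, C
```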
APA, Harvard, Vancouver, ISO, and other styles
21

Salah, Aghiles. "Von Mises-Fisher based (co-)clustering for high-dimensional sparse data : application to text and collaborative filtering data." Electronic Thesis or Diss., Sorbonne Paris Cité, 2016. https://wo.app.u-paris.fr/cgi-bin/WebObjects/TheseWeb.woa/wa/show?t=1858&f=11557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kos, Cristoffer, and Kristoffer Hermansson. "BUILDING AND SIMULATING DYNAMIC MODELS OF DISTRICT HEATING NETWORKS WITH MODELICA : Using Matlab to process data and automate modelling and simulation." Thesis, Mälardalens högskola, Akademin för ekonomi, samhälle och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-36107.

Full text
Abstract:
District heating systems are common in the Nordic countries today and account for a great portion of the heat demand. In Sweden, total district heating end use in recent years has been around 50 TWh, and district heating covers roughly 50% of the total heat demand. Suppliers of district heating must balance demand and supply, often in large and complex networks. Heat propagation times can be in the range of hours, and it is not known in detail how heat propagates during transient conditions. A dynamic model has been developed in OpenModelica, and a method for modelling, handling data, simulating and visualizing the results of a district heating network was developed using Matlab as its core. Data from Mälarenergi AB, a district heating producer and grid operator, was used for validation of the model. Validation shows that the model works well in predicting heat propagation and temperature distribution in the network and that the model can be scaled up to a large number of heat exchangers and pipes. The model is robust and can handle bi-directional and reversing flows in complex ring structures. It was concluded that OpenModelica together with Matlab is a good combination for creating models of district heating networks, as a high degree of standardization and automation can be achieved. This, together with visualization of the heat propagation, makes it useful for understanding the district heating network during transient conditions.
APA, Harvard, Vancouver, ISO, and other styles
23

Widener, Scott D. "Measuring Airport Efficiency with Fixed Asset Utilization to Minimize Airport Delays." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/485.

Full text
Abstract:
Deregulation of the airlines in the United States spawned a free-for-all system in which a variety of agents within the aviation system each sought to optimize their own piece of it, and the net result was that the aviation system itself was not optimized in aggregate, frequently resulting in delays. Research on the efficiency of the system has likewise focused on the individual agents, primarily the municipalities in an economic context, and has largely ignored the consumer. This paper develops the case for a systemic efficiency measurement which incorporates the interests of the airlines and the consumers with those of the airport-operating municipalities in three different Data Envelopment Analysis (DEA) models: traditional Charnes-Cooper-Rhodes and Banker-Charnes-Cooper models, and a Directional Output Distance Function model, devised and interpreted using quality management principles. These models were combined so that the efficiencies of an airport's operating configurations predict the efficiency of the associated airport. Based upon regression models, these efficiency measurements can be used as a diagnostic for improving the efficiency of the entire United States airspace, on a systemic basis, at the individual airport-configuration level. An example analysis using this diagnostic is derived in the course of developing and describing the diagnostic, and two additional case studies are presented.
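To make the DEA machinery tangible, here is a sketch of the classical input-oriented Charnes-Cooper-Rhodes envelopment LP solved per decision-making unit (DMU); the data and names are illustrative, and the thesis's combined models go beyond this baseline:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (constant returns to scale) DEA efficiency of DMU o.

    X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Solves
    min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0,
    over decision variables z = [theta, lam_1, ..., lam_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # minimise theta
    A_in = np.c_[-X[:, o], X]                # X @ lam - theta * x_o <= 0
    A_out = np.c_[np.zeros(s), -Y]           # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]                          # 1.0 means on the efficient frontier

# Tiny example: 2 inputs, 1 output, 4 DMUs (illustrative numbers only).
X = np.array([[2.0, 4.0, 3.0, 5.0], [3.0, 1.0, 4.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```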
APA, Harvard, Vancouver, ISO, and other styles
24

Walsh, David Leonard. "Directional statistics, Bayesian methods of earthquake focal mechanism estimation, and their application to New Zealand seismicity data : a thesis submitted to the Victoria University of Wellington in fulfilment of the requirements for the degree of Master of Science in Statistics /." ResearchArchive@Victoria e-Thesis, 2008. http://hdl.handle.net/10063/350.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Sener, Emre. "Automatic Bayesian Segmentation of Human Facial Tissue Using 3D MR-CT Fusion by Incorporating Models of Measurement Blurring, Noise and Partial Volume." PhD thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615091/index.pdf.

Full text
Abstract:
Segmentation of the human head in medical images is an important process in a wide array of applications such as diagnosis, facial surgery planning, prosthesis design, and forensic identification. In this study, a new Bayesian method for the segmentation of facial tissues is presented. Segmentation classes include muscle, bone, fat, air and skin. The method incorporates a model to account for image blurring during data acquisition, a prior that helps to reduce noise, as well as a partial volume model. Regularization based on isotropic and directional Markov Random Field priors is integrated into the algorithm, and their effects on segmentation accuracy are investigated. The Bayesian model is solved iteratively, yielding tissue class labels at every voxel of an image. Sub-methods, as variations of the main method, are generated by switching a combination of the models on and off. Testing of the sub-methods is performed on two patients using single-modality three-dimensional (3D) images as well as registered multi-modal 3D images (Magnetic Resonance and Computerized Tomography). Numerical, visual and statistical analyses of the methods are conducted. Improved segmentation accuracy is obtained through the use of the proposed image models and multi-modal data. The methods are also compared with the Level Set method and an adaptive Bayesian segmentation method proposed in a previous study.
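A minimal sketch of how an MRF prior enters such a segmentation, using iterated conditional modes (ICM) with an isotropic Potts-style agreement bonus; this is a generic illustration, not the thesis's Bayesian solver (a directional prior would weight the neighbour offsets anisotropically):

```python
import numpy as np

def icm_segment(loglik, beta=1.0, iters=5):
    """Iterated Conditional Modes with an isotropic Potts MRF prior.

    loglik: (H, W, K) per-pixel class log-likelihoods (e.g., from a Gaussian
    intensity model). The prior rewards each neighbour sharing a pixel's label,
    so labels are smoothed where the likelihood is ambiguous.
    """
    H, W, K = loglik.shape
    labels = loglik.argmax(axis=2)                      # ML initialisation
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                score = loglik[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        score[labels[ny, nx]] += beta   # agreement bonus
                labels[y, x] = score.argmax()
    return labels
```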
APA, Harvard, Vancouver, ISO, and other styles
26

Vuollo, V. (Ville). "3D imaging and nonparametric function estimation methods for analysis of infant cranial shape and detection of twin zygosity." Doctoral thesis, Oulun yliopisto, 2018. http://urn.fi/urn:isbn:9789526218557.

Full text
Abstract:
The use of 3D imaging of craniofacial soft tissue has increased in medical science, and imaging technology has developed greatly in recent years. 3D models are quite accurate, and with imaging devices based on stereophotogrammetry, capturing the data is a quick and easy procedure for the subject. However, analyzing 3D models of the face or head can be challenging, and there is a growing need for efficient quantitative methods. In this thesis, new mathematical methods and tools for measuring craniofacial structures are developed. The thesis is divided into three parts. In the first part, facial 3D data of Lithuanian twins are used for the determination of zygosity. Statistical pattern recognition methodology is used for classification, and the results are compared with DNA testing. In the second part of the thesis, the distribution of surface normal vector directions of a 3D infant head model is used to analyze skull deformation. The levels of flatness and asymmetry are quantified by functionals of the kernel density estimate of the normal vector directions. Using 3D models from infants at the age of three months and clinical ratings made by experts, this novel method is compared with some previously suggested approaches. The method is also applied to clinical longitudinal research in which 3D images from three different time points are analyzed to find the course of positional cranial deformation and associated risk factors. The final part of the thesis introduces a novel statistical scale space method, SphereSiZer, for exploring the structures of a probability density function defined on the unit sphere. The tools developed in the second part are used for the implementation of SphereSiZer. In SphereSiZer, the scale-dependent features of the density are visualized by projecting the statistically significant gradients onto a planar contour plot of the density function. The method is tested by analyzing samples of surface unit normal vector data of an infant head as well as data from generated simulated spherical densities. The results and examples of the study show that the proposed novel methods perform well. The methods can be extended and developed in further studies. Cranial and facial 3D models will offer many opportunities for the development of new and sophisticated analytical methods in the future.
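The kernel density estimate of normal-vector directions mentioned above can be sketched with a von Mises-Fisher kernel on the sphere S²; the normalizing constant below is the standard vMF constant in three dimensions, and the names are illustrative:

```python
import numpy as np

def vmf_kde(x, sample, kappa):
    """Kernel density estimate on S^2 with a von Mises-Fisher kernel:

    f_hat(x) = (1/n) * sum_i C3(kappa) * exp(kappa * <x, x_i>),

    where C3(kappa) = kappa / (4*pi*sinh(kappa)) normalises the kernel on S^2
    and kappa plays the role of an inverse bandwidth (moderate kappa assumed;
    very large kappa would need a log-domain implementation).
    """
    c3 = kappa / (4.0 * np.pi * np.sinh(kappa))
    return c3 * np.mean(np.exp(kappa * sample @ x))
```

Functionals of such an estimate (for example its peak height or its mass near the vertex direction) are exactly the kind of flatness and asymmetry summaries the abstract describes.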
APA, Harvard, Vancouver, ISO, and other styles
27

d'Orso, Julien. "New Directions in Symbolic Model Checking." Doctoral thesis, Uppsala University, Department of Information Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3753.

Full text
Abstract:

In today's computer engineering, requirements for generally high reliability have pushed the notion of testing to its limits. Many disciplines are moving, or have already moved, to more formal methods to ensure correctness. This is done by comparing the behavior of the system as it is implemented against a set of requirements. The ultimate goal is to create methods and tools that are able to perform this kind of verification automatically: this is called Model Checking.

Although the notion of model checking has existed for two decades, adoption by the industry has been hampered by its poor applicability to complex systems. During the 90's, researchers introduced an approach to cope with large (even infinite) state spaces: Symbolic Model Checking. The key notion is to represent large (possibly infinite) sets of states by a small formula (as opposed to enumerating all members). In this thesis, we investigate applying symbolic methods to different types of systems:

Parameterized systems. We work within the framework of Regular Model Checking. In regular model checking, we represent a global state as a word over a finite alphabet. A transition relation is represented by a regular length-preserving transducer. An important operation is the so-called transitive closure, which characterizes composing a transition relation with itself an arbitrary number of times. Since completeness cannot be achieved, we propose methods of computing closures that work as often as possible.

Games on infinite structures. Infinite-state systems for which the transition relation is monotonic with respect to a well quasi-ordering on states can be analyzed. We lift the framework of well quasi-ordered domains toward games. We show that monotonic games are in general undecidable. We identify a subclass of monotonic games: downward-closed games. We propose an algorithm to analyze such games with a winning condition expressed as a safety property.

Probabilistic systems. We present a framework for the quantitative analysis of probabilistic systems with an infinite state-space: given an initial state sinit, a set F of final states, and a rational Θ > 0, compute a rational ρ such that the probability of reaching F from sinit is between ρ and ρ + Θ. We present a generic algorithm and sufficient conditions for termination.

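A sketch of the quantitative-analysis idea in the last paragraph, under the assumption of a finite-branching Markov chain: explore paths breadth-first, bank the probability mass of paths that reach F as a lower bound ρ, and stop once the still-undecided mass is at most Θ. Termination needs side conditions such as those the thesis establishes, and all names here are illustrative:

```python
from collections import defaultdict

def reach_prob_bounds(succ, s_init, F, theta):
    """Bracket P(reach F from s_init) within width theta.

    succ(s) -> list of (probability, next_state); a state with no successors
    is treated as absorbing, so its undecided mass is correctly discarded.
    The loop terminates when the undecided mass drains below theta (this is
    where the thesis's sufficient conditions come in).
    """
    lo = 0.0
    frontier = {s_init: 1.0}             # undecided probability mass per state
    while sum(frontier.values()) > theta:
        nxt = defaultdict(float)
        for s, mass in frontier.items():
            for p, t in succ(s):
                if t in F:
                    lo += mass * p       # decided: this mass reached F
                else:
                    nxt[t] += mass * p   # still undecided
        frontier = nxt
    return lo, lo + sum(frontier.values())   # [rho, rho + theta] bracket

# s0 -0.5-> goal, s0 -0.5-> s1, s1 -1.0-> s0, so P(reach goal) = 1.
chain = {'s0': [(0.5, 'goal'), (0.5, 's1')], 's1': [(1.0, 's0')], 'goal': []}
print(reach_prob_bounds(lambda s: chain[s], 's0', {'goal'}, 0.01))
```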
APA, Harvard, Vancouver, ISO, and other styles
28

Hamsici, Onur C. "Bayes Optimality in Classification, Feature Extraction and Shape Analysis." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218513562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Otieno, Bennett Sango. "An Alternative Estimate of Preferred Direction for Circular Data." Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/28401.

Full text
Abstract:
Circular or angular data occur in many fields of applied statistics. A common problem of interest in circular data is estimating a preferred direction and its corresponding distribution. This problem is complicated by the so-called wrap-around effect, which exists because there is no minimum or maximum on the circle. The usual statistics employed for linear data are inappropriate for directional data, as they do not account for its circular nature. Common choices for summarizing the preferred direction are the sample circular mean and the sample circular median. A circular analog of the Hodges-Lehmann estimator is proposed as an alternative estimate of preferred direction. The new measure of preferred direction is a robust compromise between the circular mean and the circular median. Theoretical results show that the new measure of preferred direction is asymptotically more efficient than the circular median and that its asymptotic efficiency relative to the circular mean is quite comparable. Descriptions of how to use the methods for constructing confidence intervals and testing hypotheses are provided. Simulation results demonstrate the relative strengths and weaknesses of the new approach for a variety of distributions.
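For concreteness, a sketch of the three estimates being compared, taking as one plausible construction of the circular Hodges-Lehmann analog the circular median of pairwise circular means (the thesis's exact definition may differ):

```python
import numpy as np
from itertools import combinations

def circular_mean(angles):
    """Direction of the resultant vector: atan2(mean sin, mean cos)."""
    return np.arctan2(np.mean(np.sin(angles)), np.mean(np.cos(angles)))

def circular_median(angles):
    """Sample angle minimising the mean arc distance to the sample."""
    angles = np.asarray(angles)
    arc = lambda a, b: np.pi - np.abs(np.pi - np.abs(a - b) % (2 * np.pi))
    costs = [np.mean(arc(a, angles)) for a in angles]
    return angles[int(np.argmin(costs))]

def circular_hodges_lehmann(angles):
    """Circular median of all pairwise circular means: a robust compromise
    between circular mean and circular median (naive O(n^2) pairs)."""
    pair_means = [circular_mean([a, b]) for a, b in combinations(angles, 2)]
    return circular_median(pair_means)
```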
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
30

Strand, Matthias. "The Business Value of Data Warehouses : Opportunities, Pitfalls and Future Directions." Thesis, University of Skövde, Department of Computer Science, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-490.

Full text
Abstract:

Organisations have spent billions of dollars (USD) on investments in data warehouses. Many have succeeded, but many have also failed. These failures are considered to be mostly organisational in nature rather than technological, as one might have expected. Because of these failures, organisations struggle to derive business value from their data warehouse investments. Obtaining business value from data warehouses is necessary, since the investment is of such a magnitude that it is clearly visible in the balance sheet. In order to investigate how the business value may be increased, we have conducted an extensive literature study aimed at identifying opportunities and future directions which may alleviate the problem of low return on investment. To balance the work, we have also identified pitfalls which may hinder organisations from deriving business value from their data warehouses.

Based on the literature survey, we have identified and motivated possible research areas, which we consider relevant if organisations are to derive real business value from their data warehouses. These areas are:

* Integrating data warehouses in knowledge management.

* Data warehouses as a foundation for information data super stores.

* Using data warehouses to predict the need for business change.

* Aligning data warehouses and business processes.

As the areas are rather broad, we have also included examples of more specific research problems, within each possible research area. Furthermore, we have given initial ideas regarding how to investigate those specific research problems.

APA, Harvard, Vancouver, ISO, and other styles
31

ARRUDA, MARCELO MEDEIROS. "VISUALIZATION OF SEISMIC VOLUMETRIC DATA USING A DIRECTIONAL OCCLUSION SHADING MODEL." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=21391@1.

Full text
Abstract:
The interpretation of seismic volumetric data is of major importance to the oil and gas industry. Since these data are volumetric in character, identifying and selecting attributes present in their 3D structure is not a simple task. Furthermore, the high-frequency noise and depth information typically found in this type of data increase the complexity of manipulation and visualization. Due to these characteristics the geometry of 3D seismic data is very complex, and a more realistic lighting model is needed to illuminate the seismic volume. This work performs volumetric visualization of seismic data based on a ray-tracing algorithm, using a directional occlusion illumination model that computes the ambient light attenuated by the elements along the light trajectory for every element in the volume. In this way we emphasize the geometry of the seismic data, especially depth cues and spatial relationships. The proposed algorithm was fully implemented on the graphics card, allowing manipulation at interactive rates without any pre-processing.
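(A toy, CPU-side sketch of the occlusion idea, not the GPU ray-tracing implementation described above: for a single vertical light direction, ambient light entering from the top slice is attenuated slice by slice by the volume's opacity, giving each voxel an occlusion term. Real directional occlusion models sample a cone of directions per voxel.)

    import numpy as np

    def directional_occlusion(opacity):
        # opacity: volume of per-voxel opacities in [0, 1], shape (Z, Y, X),
        # with ambient light arriving from the top slice (z = 0) only.
        occlusion = np.zeros_like(opacity, dtype=float)
        transmittance = np.ones(opacity.shape[1:])
        for z in range(opacity.shape[0]):
            occlusion[z] = 1.0 - transmittance   # light blocked above this slice
            transmittance = transmittance * (1.0 - opacity[z])
        return occlusion

    volume = np.random.default_rng(0).random((32, 64, 64)) * 0.1  # toy data
    print(directional_occlusion(volume).mean(axis=(1, 2))[:4])    # grows with depth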
APA, Harvard, Vancouver, ISO, and other styles
32

Lee, Myung Hee Marron James Stephen. "Continuum direction vectors in high dimensional low sample size data." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2007. http://dc.lib.unc.edu/u?/etd,1132.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2007.
Title from electronic title page (viewed Mar. 27, 2008). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Statistics and Operations Research Statistics." Discipline: Statistics and Operations Research; Department/School: Statistics and Operations Research.
APA, Harvard, Vancouver, ISO, and other styles
33

彭運佳 and Wan-kai Pang. "Time series analysis of meteorological data: wind speed and direction." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B30425979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Pang, Wan-kai. "Time series analysis of meteorological data : wind speed and direction /." [Hong Kong] : University of Hong Kong, 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13456933.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Huiyong. "Enhancing Students' Self-Direction Skill with Learning and Physical Activity Data." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Schintler, Laurie A., and Manfred M. Fischer. "Big Data and Regional Science: Opportunities, Challenges, and Directions for Future Research." WU Vienna University of Economics and Business, 2018. http://epub.wu.ac.at/6122/1/Fischer_etal_2018_Big%2Ddata.pdf.

Full text
Abstract:
Recent technological, social, and economic trends and transformations are contributing to the production of what is usually referred to as Big Data. Big Data, which is typically defined by four dimensions -- Volume, Velocity, Veracity, and Variety -- changes the methods and tactics for using, analyzing, and interpreting data, requiring new approaches for data provenance, data processing, data analysis and modeling, and knowledge representation. The use and analysis of Big Data involves several distinct stages, from "data acquisition and recording" through "information extraction" and "data integration" to "data modeling and analysis" and "interpretation", each of which introduces challenges that need to be addressed. There are also cross-cutting challenges that underlie many, sometimes all, of the stages of the data analysis pipeline. These relate to "heterogeneity", "uncertainty", "scale", "timeliness", "privacy" and "human interaction". Using the Big Data analysis pipeline as a guiding framework, this paper examines the challenges arising in the use of Big Data in regional science. The paper concludes with some suggestions for future activities to realize the possibilities and potential for Big Data in regional science.
Series: Working Papers in Regional Science
APA, Harvard, Vancouver, ISO, and other styles
37

Parr, Bouberima Wafia. "Modèles de mélange de von Mises-Fisher." Phd thesis, Université René Descartes - Paris V, 2013. http://tel.archives-ouvertes.fr/tel-00987196.

Full text
Abstract:
In contemporary life, directional data are present in most fields, in several forms, with different aspects and in large sizes/dimensions; hence the need for effective methods of studying the problems arising in this area. To address the clustering problem, the probabilistic approach has become classical, resting on a simple idea: since the g classes differ from one another, each is assumed to follow a known probability distribution whose parameters generally differ from one class to another; this is mixture modelling. Under this hypothesis, the initial data are regarded as a sample of a d-dimensional random variable whose density is a mixture of g probability distributions, each specific to a class. This thesis addresses the clustering of directional data using the classification methods best suited to it, under two approaches, geometric and probabilistic. In the first, kmeans-type algorithms are explored and compared; in the second, the parameters from which a partition is deduced are estimated directly by maximizing the log-likelihood, via the EM algorithm. For the latter approach, we take up the mixture model of von Mises-Fisher distributions and propose variants of the EMvMF algorithm: CEMvMF, SEMvMF and SAEMvMF. In the same context, we treat the problem of finding the number of components and choosing the mixture model, using several information criteria: Bic, Aic, Aic3, Aic4, Aicc, Aicu, Caic, Clc, Icl-Bic, Ll, Icl, Awe. The study concludes with a comparison of the vMF model with a simpler exponential model, which assumes that the data are distributed on a hypersphere of predefined radius ρ greater than or equal to one. We propose an improvement of the exponential model based on estimating the radius ρ during the NEM algorithm, which in most of our applications yielded better results; new variants of the NEM algorithm are proposed: NEMρ, NCEMρ and NSEMρ. The algorithms proposed in this work were tested on a variety of textual data, genetic data and data simulated from the von Mises-Fisher (vMF) model. These applications gave us a better understanding of the different approaches studied throughout this thesis.
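(For concreteness, a compact sketch of the EMvMF idea under standard assumptions: EM for a mixture of von Mises-Fisher distributions, with the usual approximate concentration update of Banerjee et al.; the CEM/SEM/SAEM variants in the thesis modify the E- and M-steps.)

    import numpy as np
    from scipy.special import ive  # exponentially scaled Bessel function I_v

    def log_vmf(X, mu, kappa):
        # Log-density of a von Mises-Fisher distribution on the unit sphere
        # in R^d; log I_v(kappa) = log ive(v, kappa) + kappa for kappa > 0.
        d = X.shape[1]
        v = d / 2 - 1
        log_c = v * np.log(kappa) - (d / 2) * np.log(2 * np.pi) \
                - (np.log(ive(v, kappa)) + kappa)
        return log_c + kappa * (X @ mu)

    def em_vmf(X, g, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        mu = X[rng.choice(n, g, replace=False)]   # initial mean directions
        kappa = np.full(g, 10.0)
        weights = np.full(g, 1.0 / g)
        for _ in range(n_iter):
            # E-step: posterior responsibilities, computed in log scale
            logp = np.stack([np.log(weights[k]) + log_vmf(X, mu[k], kappa[k])
                             for k in range(g)], axis=1)
            logp -= logp.max(axis=1, keepdims=True)
            r = np.exp(logp)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: mixing weights, mean directions, concentrations
            weights = r.mean(axis=0)
            for k in range(g):
                s = r[:, k] @ X
                rbar = np.linalg.norm(s) / r[:, k].sum()
                mu[k] = s / np.linalg.norm(s)
                kappa[k] = rbar * (d - rbar**2) / (1 - rbar**2)  # approx. update
        return weights, mu, kappa

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal([3, 0, 0], 1.0, (100, 3)),
                   rng.normal([0, 3, 0], 1.0, (100, 3))])
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # project onto the sphere
    print(em_vmf(X, g=2)[1])                        # two recovered directions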
APA, Harvard, Vancouver, ISO, and other styles
38

Parr, Bouberima Wafia. "Modèles de mélange de von Mises-Fisher." Electronic Thesis or Diss., Paris 5, 2013. http://www.theses.fr/2013PA05S028.

Full text
Abstract:
In contemporary life, directional data are present in most areas, in several forms and aspects and in large sizes/dimensions; hence the need for effective methods of studying the problems arising in these fields. To solve the clustering problem, the probabilistic approach has become a classical one, based on a simple idea: since the g classes are different from each other, each class is assumed to follow a probability distribution whose parameters generally differ from one class to another; this is mixture modelling. Under this assumption, the initial data are considered as a sample of a d-dimensional random variable whose density is a mixture of g probability distributions, each specific to a class. This thesis is concerned with the clustering of directional data, treated with the classification methods best suited to this case, under both the geometric and the probabilistic approach. In the first, some kmeans-like algorithms are explored; in the second, the partition maximizing the log-likelihood is deduced directly from the estimated parameters via the EM algorithm. For the latter approach, mixture models of von Mises-Fisher distributions are used, and variants of the EM algorithm are proposed: EMvMF, CEMvMF, SEMvMF and SAEMvMF. In the same context, the problem of finding the number of components in the mixture and of choosing the model, using the information criteria {Bic, Aic, Aic3, Aic4, Aicc, Aicu, Caic, Clc, Icl-Bic, Ll, Icl, Awe}, is discussed. The study concludes with a comparison of the vMF model with a simpler exponential model, in which all data are assumed to lie on a hypersphere of a predetermined radius greater than one, instead of the unit hypersphere of the vMF model. An improvement of this method based on estimating the radius within the NEMρ algorithm is proposed; this allowed us to find the best partitions in most of our applications, and the NCEMρ and NSEMρ algorithms were also developed. The algorithms proposed in this work were tested on a variety of textual data, genetic data and data simulated according to the vMF model; these applications gave us a better understanding of the different approaches studied throughout this thesis.
APA, Harvard, Vancouver, ISO, and other styles
39

Chornopyska, N. V., A. I. Popovych, Н. В. Чорнописька, and А. І. Попович. "Logistics & supply chain management: up-to-date research directions." Thesis, National Aviation University, 2022. https://er.nau.edu.ua/handle/NAU/54833.

Full text
Abstract:
Examples of events that led to the Ripple effect under volatile, uncertain, complex, and ambiguous circumstances show how rapidly the environment is changing and why innovative research with predictive value is needed to help supply chains respond to change.
APA, Harvard, Vancouver, ISO, and other styles
40

Burintramart, Santana. "Methods for direction of arrival estimation using a single snapshot of the data." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2009. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Keche, Mokhtar. "Data association and adaptive filtering in multiple target tracking using phased arrays." Thesis, University of Nottingham, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Suteu, C. A., Catherine M. Batt, and I. Zananiri. "New developments in archaeomagnetic dating for Romania - A progress report on recent directional studies." Elsevier, 2008. http://hdl.handle.net/10454/4668.

Full text
Abstract:
This project seeks to address the lack of geomagnetic field data for the territory of Romania by sampling and analysing burnt archaeological features and sediments. The aim of this paper is to present the initial directional results and some magnetic mineralogical determinations from five features sampled during the first field season. Representative examples of directional and magnetic mineralogical analyses are presented, and dates are obtained using the REN-DATE software [Lanos, P., Kovacheva, M., Chauvin, A., 1999. Archaeomagnetism, methodology and applications: implementation and practice of the archaeomagnetic method in France and Bulgaria. Journal of European Archaeology, 2, 365-392] and the published moving window averaged data from Hungary [Márton, P., 2003. Recent achievements in archaeomagnetism in Hungary. Geophysical Journal International 153(3), 675-690]. A comparison is made of the data obtained in this study with the published directional data from Bulgaria, Hungary and Ukraine.
APA, Harvard, Vancouver, ISO, and other styles
43

XIA, QI. "Sufficient Dimension Reduction with Missing Data." Diss., Temple University Libraries, 2017. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/469880.

Full text
Abstract:
Statistics
Ph.D.
Existing sufficient dimension reduction (SDR) methods typically consider cases with no missing data. This dissertation proposes methods that make SDR feasible when the response can be missing. The first part focuses on the seminal sliced inverse regression (SIR) approach proposed by Li (1991). We show that missing responses generally affect the validity of the inverse regressions under the missing-at-random mechanism. We then propose a simple and effective adjustment with inverse probability weighting that guarantees the validity of SIR, together with a marginal coordinate test for the adjusted estimator. The proposed method shares the simplicity of SIR and requires the linear conditional mean assumption. The second part proposes two new estimating equation procedures: the complete-case estimating equation approach and the inverse-probability-weighted estimating equation approach. The two approaches are applied to a family of dimension reduction methods that includes ordinary least squares, principal Hessian directions, and SIR. By solving the estimating equations, the two approaches avoid the common assumptions in the SDR literature, the linear conditional mean assumption and the constant conditional variance assumption. For all the aforementioned methods, the asymptotic properties are established, and their strong finite sample performance is demonstrated through extensive numerical studies as well as a real data analysis. In addition, existing estimators of the central mean space have uneven performance across different types of link functions. To address this limitation, a new hybrid SDR estimator is proposed that successfully recovers the central mean space for a wide range of link functions. Based on the new hybrid estimator, we further study the order determination procedure and the marginal coordinate test. The superior performance of the hybrid estimator over existing methods is demonstrated in simulation studies. The proposed procedures for responses missing at random can be readily adapted to this hybrid method.
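(A sketch of the inverse-probability-weighted SIR adjustment on simulated data; for simplicity the observation probability is estimated by a constant, which covers the missing-completely-at-random case, whereas the missing-at-random version would model that probability as a function of the predictors. Function and variable names are illustrative.)

    import numpy as np

    def ipw_sir(X, y, observed, n_slices=5, n_dir=1):
        # Sliced inverse regression with inverse-probability weights on the
        # observed responses; a sketch of the adjustment idea, not the
        # dissertation's exact estimator.
        n, p = X.shape
        obs = observed.astype(bool)
        w = np.zeros(n)
        w[obs] = 1.0 / obs.mean()                 # IPW weights: 1 / P(observed)
        Xc = X - X.mean(axis=0)                   # standardize the predictors
        Sigma = Xc.T @ Xc / n
        L = np.linalg.cholesky(np.linalg.inv(Sigma))
        Z = Xc @ L                                # Z has (near-)identity covariance
        idx = np.where(obs)[0]
        edges = np.quantile(y[idx], np.linspace(0, 1, n_slices + 1)[1:-1])
        bins = np.digitize(y[idx], edges)         # slice the observed responses
        M = np.zeros((p, p))
        for s in range(n_slices):
            sl = idx[bins == s]
            if sl.size == 0:
                continue
            ws = w[sl]
            m = (ws[:, None] * Z[sl]).sum(axis=0) / ws.sum()  # weighted slice mean
            M += (ws.sum() / w.sum()) * np.outer(m, m)
        vals, vecs = np.linalg.eigh(M)            # leading eigenvectors span the
        return L @ vecs[:, -n_dir:]               # estimated space, on the X scale

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 4))
    y = np.sin(X[:, 0] + 0.5 * X[:, 1]) + 0.1 * rng.standard_normal(500)
    observed = rng.random(500) < 0.7              # ~30% of responses missing
    beta = ipw_sir(X, y, observed)
    print(beta.ravel() / np.abs(beta).max())      # ~ (1, 0.5, 0, 0) up to sign/scale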
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
44

Hibbert, Michael Patrick. "The development of a solid state wind velocity and direction indicator, suitable for data logging." Thesis, Cape Technikon, 1992. http://hdl.handle.net/20.500.11838/1117.

Full text
Abstract:
Thesis (Masters Diploma (Electrical Engineering)) -- Cape Technikon, Cape Town,1992
This thesis describes the development of a free-standing, maintenance-free anemometer which has no rotating parts. The principle of operation is based on the wind drag/force around a hollow P.V.C. pipe. The aim is to demonstrate how the strain occurring in the P.V.C. pipe, due to the wind drag/force acting on it, can generate an electrical signal which can be mathematically manipulated to determine wind velocity and wind bearing.
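(A hypothetical reading of the signal-processing step: with two orthogonal strain-gauge signals on the pipe, and drag force growing with the square of wind speed, speed and bearing could be recovered as below. The constant k is purely illustrative.)

    import numpy as np

    def wind_from_strain(sx, sy, k=1.0):
        # sx, sy: signals from two orthogonal strain gauges on the pipe.
        # Drag force scales with the square of wind speed, so speed goes as
        # the square root of the force magnitude; k (hypothetical) lumps
        # together drag coefficient, projected area and air density.
        force = np.hypot(sx, sy)
        speed = np.sqrt(force / k)
        bearing = np.degrees(np.arctan2(sy, sx)) % 360.0
        return speed, bearing

    print(wind_from_strain(0.8, 0.6))  # toy strain readings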
APA, Harvard, Vancouver, ISO, and other styles
45

Mattsson, Johansson Elna. "Design Directions for Supporting Implicit Interactions in a Market Surveillance System." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300188.

Full text
Abstract:
Enterprise systems are built for companies and used by their employees to complete work tasks. The focus on user-driven design in consumer technology has led to expectations of user-friendly designs. Enterprise technology, however, tends to be technology-driven rather than user-driven, creating unmet expectations and a mismatch between end-user and company objectives. This is why it is necessary to also consider enterprise systems from a user-driven perspective. This study therefore addresses user-driven enterprise design through the Implicit Interaction Framework, using a market surveillance system (MSS) as a case study. Practical design implementations and insights were gained through Research through Design (RtD): a survey to validate potential problems, mapping activities using the framework to gain design insights, and prototyped wireframes expressed through narrative video scenarios and evaluated with UX professionals to identify design directions. Three design directions were identified: Recall: Actions for Reminding, Collaboration: Anticipation of Intention, and Disruption: Supporting Ongoing State-Shifting. Control comes at the cost of disruption or of risking wrongful actions; the context of implicitness creates a trade-off between cognitive load and risk of errors; and UX professionals might have to balance competing objectives where they collide. The Implicit Interaction Framework can guide enterprise UX designers and researchers in understanding the interplay and interactions occurring between system and end-user. It is, however, a translation in which the complexity of enterprise systems is in some respects difficult to demonstrate, and better end-user experiences through implicit interactions should not be assumed.
APA, Harvard, Vancouver, ISO, and other styles
46

Atemkeng, Marcellin T. "Data compression, field of interest shaping and fast algorithms for direction-dependent deconvolution in radio interferometry." Thesis, Rhodes University, 2017. http://hdl.handle.net/10962/6324.

Full text
Abstract:
In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities “decorrelate”, and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as “smearing”, which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. Averaging also results in baseline length and a position-dependent point spread function (PSF). In this work, we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be understood as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more optimal interferometer smearing response may be induced. Specifically, we can improve amplitude response over a chosen field of interest and attenuate sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Jansky Very Large Array and the European Very Long Baseline Interferometry Network. Furthermore, we show that the position-dependent PSF shape induced by averaging can be approximated using linear algebraic properties to effectively reduce the computational complexity for evaluating the PSF at each sky position. We conclude by implementing a position-dependent PSF deconvolution in an imaging and deconvolution framework. Using the Low-Frequency Array radio interferometer, we show that deconvolution with position-dependent PSFs results in higher image fidelity compared to a simple CLEAN algorithm and its derivatives.
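(A toy numerical illustration of the convolution view of averaging, with made-up numbers: an off-centre source shows up as a rotating fringe across the averaging interval, plain averaging applies a boxcar window with a sinc-shaped amplitude response, and an alternative taper trades nominal sensitivity for a different smearing response. The thesis designs baseline-dependent windows; only the windowing mechanism is sketched here.)

    import numpy as np

    T, N = 10.0, 256                      # averaging interval (s), samples in it
    t = np.linspace(-T / 2, T / 2, N)

    def response(window, fringe_rate):
        # Amplitude of the averaged visibility of a source whose phase
        # rotates at `fringe_rate` turns per second across the interval.
        w = window / window.sum()
        return abs(np.sum(w * np.exp(2j * np.pi * fringe_rate * t)))

    boxcar = np.ones(N)                              # plain averaging
    taper = np.exp(-0.5 * (t / (0.25 * T)) ** 2)     # one alternative window

    for f in [0.0, 0.05, 0.1, 0.2]:                  # rate grows off-centre
        print(f"{f:4.2f}  boxcar={response(boxcar, f):.3f}"
              f"  tapered={response(taper, f):.3f}")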
APA, Harvard, Vancouver, ISO, and other styles
47

Gauthier, Jérôme. "Analyse de signaux et d'images par bancs de filtres : applications aux géosciences." Phd thesis, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00331238.

Full text
Abstract:
In order to perform local processing on data of various kinds (volumes, images or signals) containing informative elements in certain frequency bands, this thesis studies filter banks (FBs). More precisely, we study the existence and the synthesis of finite impulse response (FIR) filter banks that are inverse to a given redundant complex FIR analysis bank. In particular, we propose methods for testing the invertibility of the analysis matrix and for building an explicit inverse using the polyphase formulation. From the latter, we propose a reduced parameterization of the set of synthesis filter banks, which allows their responses to be optimized according to various criteria. This study is extended to the multidimensional case, notably through the use of the notion of resultant. Since these tools allow certain structured information in the data to be represented efficiently, it becomes possible to preserve that information while rejecting possible perturbations. The first setting considered is that of Gaussian noise. We used Stein's principle to propose two denoising methods, FB-SURELET-E and FB-SURELET-C; they are compared with recent denoising methods and yield good results, in particular for textured images. Another type of application is then considered: the separation of oriented structures. To address this problem, we developed an anisotropic filtering method. The resulting algorithms are finally tested on data from several application domains (seismic, microscopy, vibrations).
APA, Harvard, Vancouver, ISO, and other styles
48

Hogler, Marcus. "Comparing head- and eye direction and accuracy during smooth pursuit in an augmented reality environment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254999.

Full text
Abstract:
Smooth pursuit is the movement that occurs when the eyes meticulously follow an object in motion. While smooth pursuit can be achieved with a stationary head, it generally relies on the head following the visual target as well. During smooth pursuit, a coordinating vestibular mechanism, shared by both the head and the eyes, is used. Therefore, smooth pursuit can reveal much about where a person is looking based on only the direction of the head. To investigate the interplay between the eyes and the head, an application was made for the augmented reality head-mounted display Magic Leap. The application gathered data of the head and eyes respective movements. The data was analyzed using visualizations to find relationships within the eye-head coordination. User studies were conducted and the eyes proved to be incredibly accurate and the head direction was close to the target at all times. The results point towards the possibility of using head direction as a model for visual attention in the shape of a cone. The users’ head direction was a good indicator of where they put their attention, making it a valuable tool for developing augmented reality applications for head-mounted displays and smart glasses. By only using head direction, a software developer can measure where most of the users’ attention is put and hence optimize the application according to this information.
APA, Harvard, Vancouver, ISO, and other styles
49

Wolf, Jordan Taylor. "Trending in the Right Direction: Using Google Trends Data as a Measure of Public Opinion During a Presidential Election." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83571.

Full text
Abstract:
During the 2016 presidential election, public opinion polls consistently showed a lead in the popular vote and Electoral College for Hillary Clinton over Donald Trump. Following Trump's surprise victory, political pundits and the public at large began to question the accuracy of modern public opinion polling. Difficulty fielding a representative sample, convoluted and opaque methodologies, the sheer number of polls, and both the media's and the general public's inability to interpret poll results are among the flaws of the polling industry. An alternative or supplement to traditional polling practices is necessary. This thesis investigates whether Google Trends can be effectively used as a measure of public opinion during presidential elections. The study gathers polling data from states that were considered swing states in the 2016 presidential election. Specifically, it examines six polls: three from states that swung in the way the polls predicted they would – Nevada and Virginia – and three from states that swung against the prediction – Michigan, Wisconsin, and Pennsylvania. Answers to the "Most Important Issue" question in each poll are compared to their corresponding topics in Google Trends by calculating Pearson product-moment correlations for each pair. Results indicated that in states that swung as predicted, Google Trends was an effective supplement to traditional public opinion polls; in states that did not swing as predicted, it was not. Implications of these results and future considerations for the polling industry and Google are discussed.
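(The correlation computation itself is standard; a sketch with hypothetical paired series for one state:)

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical paired series for one state: share of respondents naming an
    # issue "most important" across six poll waves, and the matching Google
    # Trends index for that issue's topic over the same periods.
    poll_share = np.array([31.0, 28.0, 35.0, 40.0, 38.0, 33.0])
    trends_index = np.array([45.0, 41.0, 52.0, 63.0, 58.0, 49.0])

    r, p_value = pearsonr(poll_share, trends_index)
    print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")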
Master of Arts
APA, Harvard, Vancouver, ISO, and other styles
50

Babu, Prabhu. "Spectral Analysis of Nonuniformly Sampled Data and Applications." Doctoral thesis, Uppsala universitet, Avdelningen för systemteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-180391.

Full text
Abstract:
Signal acquisition, signal reconstruction and analysis of the spectrum of a signal are the three most important steps in signal processing, and they are found in almost all modern-day hardware. In most signal processing hardware, the signal of interest is sampled at uniform intervals satisfying conditions such as the Nyquist rate. In some cases, however, the privilege of having uniformly sampled data is lost due to constraints on the hardware resources. This thesis addresses the important problem of signal reconstruction and spectral analysis from nonuniformly sampled data and presents a variety of methods, which are tested via numerical experiments on both artificial and real-life data sets. The thesis starts with a brief review of methods available in the literature for signal reconstruction and spectral analysis from nonuniformly sampled data. The methods discussed are classified into two broad categories - dense and sparse methods - based on the kind of spectra for which they are applicable. Under dense spectral methods, the main contribution of the thesis is a non-parametric approach named LIMES, which recovers a smooth spectrum from nonuniformly sampled data and also gives an estimate of the covariance matrix. Under sparse methods, the two main contributions are SPICE and LIKES, both of which are user-parameter-free sparse estimation methods applicable to line spectral estimation. Other important contributions are extensions of SPICE and LIKES to multivariate time series and array processing models, and a solution to the grid selection problem in sparse estimation of spectral-line parameters. The third and final part of the thesis applies the methods discussed to radial velocity data analysis for exoplanet detection. Apart from the exoplanet application, an application based on Sudoku, which is related to sparse parameter estimation, is also discussed.
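(SPICE, LIKES and LIMES are beyond a short sketch; as context, the classical least-squares baseline for line-spectral estimation from nonuniformly sampled data is the Lomb-Scargle periodogram, shown here on simulated data:)

    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 100.0, 300))      # irregular sampling times
    y = np.sin(2 * np.pi * 0.17 * t) + 0.3 * rng.standard_normal(t.size)

    freqs = np.linspace(0.01, 0.5, 2000)           # candidate frequencies (Hz)
    pgram = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)
    print(f"estimated spectral line at {freqs[pgram.argmax()]:.3f} Hz")  # ~0.17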
APA, Harvard, Vancouver, ISO, and other styles