Theses on the topic "Methods of representation"

To see other types of publications on this topic, follow the link: Methods of representation.

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the 50 best theses for your research on the topic "Methods of representation".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Chang, William. « Representation Theoretical Methods in Image Processing ». Scholarship @ Claremont, 2004. https://scholarship.claremont.edu/hmc_theses/160.

Full text
Abstract:
Image processing refers to the various operations performed on pictures that are digitally stored as an aggregate of pixels. One can enhance or degrade the quality of an image, artistically transform the image, or even find or recognize objects within the image. This paper is concerned with image processing from a very mathematical perspective, involving representation theory. The approach traces back to Cooley and Tukey’s seminal paper on the Fast Fourier Transform (FFT) algorithm (1965). Recently, there has been a resurgence of interest in algebraic generalizations of this original algorithm with respect to different symmetry groups. My approach in the following chapters is as follows. First, I will give the necessary tools from representation theory to explain how to generalize the Discrete Fourier Transform (DFT). Second, I will introduce wreath products and their application to images. Third, I will show some results from applying elementary filters and compression methods to the spectra of images. Fourth, I will attempt to generalize my method to noncyclic wreath product transforms and apply it to images and three-dimensional geometries.
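The group-theoretic view of the DFT that this abstract builds on can be sketched concretely. The snippet below is a standard illustration of the character-matrix construction on the cyclic group, not code from the thesis; all names are ours.

```python
import numpy as np

# The DFT on the cyclic group Z_n is multiplication by the character
# matrix F[j, k] = exp(-2*pi*i*j*k/n); the FFT computes the same
# transform in O(n log n). The thesis generalizes this to other groups.

def dft_matrix(n):
    """Character table of Z_n, i.e. the usual n x n DFT matrix."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n)

signal = np.array([1.0, 2.0, 3.0, 4.0])
spectrum = dft_matrix(4) @ signal

assert np.allclose(spectrum, np.fft.fft(signal))  # same transform
```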
APA, Harvard, Vancouver, ISO and other styles
2

Henrysson, Anders. « Procedural Media Representation ». Thesis, Linköping University, Department of Science and Technology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1220.

Full text
Abstract:

We present a concept for using procedural techniques to represent media. Procedural methods allow us to represent digital media (2D images, 3D environments, etc.) with very little information and to render it photorealistically. Since not all kinds of content can be created procedurally, traditional media representations (bitmaps, polygons, etc.) must be used as well. We have adopted an object-based media representation where an object can be represented either with a procedure or with its traditional representation. Since the objects are created on the client, the procedures can be adapted to its properties, such as screen resolution and rendering performance. To keep the application as small and flexible as possible, each procedure is implemented as a library which is only loaded when needed. The media representation is written in XML to make it human-readable and easily editable. The application is document-driven: the content of the XML document determines which libraries are to be loaded. The media objects resulting from the procedures are composited, together with the non-procedural objects, into the media representation preferred by the renderer. The parameters in the XML document are relative to parameters determined by the system properties (resolution, performance, etc.) and hence adapt the procedures to the client. By mapping objects to individual libraries, the architecture is easy to make multi-threaded and/or distributed.
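The document-driven dispatch described above can be sketched minimally. The element and attribute names below (`object`, `kind`, `procedure`) are hypothetical, and `gradient` is a toy stand-in for a procedural media library; the real system loads each procedure lazily as a separate library.

```python
import xml.etree.ElementTree as ET

# An XML document names media objects; procedural ones are generated on
# the client from parameters, others fall back to a stored asset.

doc = """
<media>
  <object kind="procedural" procedure="gradient" width="4" height="2"/>
  <object kind="bitmap" src="logo.png"/>
</media>
"""

def gradient(width, height):
    """Tiny procedural texture: a horizontal ramp of grey values in [0, 1]."""
    return [[x / (width - 1) for x in range(width)] for _ in range(height)]

PROCEDURES = {"gradient": gradient}  # the real design loads these on demand

objects = []
for el in ET.fromstring(doc):
    if el.get("kind") == "procedural":
        proc = PROCEDURES[el.get("procedure")]
        objects.append(proc(int(el.get("width")), int(el.get("height"))))
    else:
        objects.append(el.get("src"))  # non-procedural: keep the stored asset
```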

3

Tveit, Amund. « Customizing Cyberspace : Methods for User Representation and Prediction ». Doctoral thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1605.

Full text
Abstract:

Cyberspace plays an increasingly important role in people’s lives due to its plentiful offering of services and information, e.g. the World Wide Web, the Mobile Web and Online Games. However, the usability of cyberspace services is frequently reduced by their lack of customization according to individual needs and preferences.

In this thesis we address the cyberspace customization issue by focusing on methods for user representation and prediction. Examples of cyberspace customization include delegation of user data and tasks to software agents, automatic pre-fetching, or pre-processing of service content based on predictions. The cyberspace service types primarily investigated are Mobile Commerce (e.g. news, finance and games) and Massively Multiplayer Online Games (MMOGs).

First, a conceptual software agent architecture for supporting users of mobile commerce services will be presented, including a peer-to-peer based collaborative filtering extension to support product and service recommendations.

In order to examine the scalability of the proposed conceptual software agent architecture a simulator for MMOGs is developed. Due to their size and complexity, MMOGs can provide an estimated “upper bound” for the performance requirements of other cyberspace services using similar agent architectures.

Prediction of cyberspace user behaviour is considered to be a classification problem, and because of the large and continuously changing nature of cyberspace services there is a need for scalable classifiers. This is handled by proposed classifiers that are incrementally trainable, support a large number of classes, support efficient decremental untraining of outdated classification knowledge, and are efficiently parallelized in order to scale well.

Finally, the incremental classifier is empirically compared with existing classifiers on: 1) general classification data sets, 2) user clickstreams from an actual web usage log, and 3) a synthetic game usage log from the developed MMOG simulator. The proposed incremental classifier is shown to be an order of magnitude faster than the other classifiers, significantly more accurate than the naive Bayes classifier on the selected data sets, and not significantly different in accuracy from the other classifiers.
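As an illustration of the classifier properties listed above (incremental training plus decremental untraining of outdated knowledge), here is a minimal nearest-centroid sketch. It is our own toy stand-in, not the thesis's classifier.

```python
class IncrementalCentroid:
    """Toy nearest-centroid classifier with incremental train/untrain."""

    def __init__(self):
        self.sums = {}    # label -> per-feature running sum
        self.counts = {}  # label -> number of live examples

    def train(self, x, label):
        s = self.sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        self.counts[label] = self.counts.get(label, 0) + 1

    def untrain(self, x, label):
        # Decremental step: remove one outdated example in O(features).
        for i, v in enumerate(x):
            self.sums[label][i] -= v
        self.counts[label] -= 1

    def predict(self, x):
        def sq_dist(label):
            c = self.counts[label]
            return sum((v - s / c) ** 2 for v, s in zip(x, self.sums[label]))
        live = [l for l in self.counts if self.counts[l] > 0]
        return min(live, key=sq_dist)

clf = IncrementalCentroid()
clf.train([0.0, 0.0], "a")
clf.train([1.0, 1.0], "b")
clf.train([9.0, 9.0], "a")    # drags the "a" centroid away...
clf.untrain([9.0, 9.0], "a")  # ...until the outdated example is untrained
```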

The papers leading to this thesis have, combined, been cited more than 50 times in book, journal, magazine, conference, workshop, thesis, whitepaper and technical report publications at research events and universities in 20 countries. Two of the papers have been applied in educational settings for university courses in Canada, Finland, France, Germany, Norway, Sweden and the USA.

4

Jackson, Todd Robert. « Analysis of functionally graded material object representation methods ». Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9032.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering, 2000.
Includes bibliographical references (leaves 218-224).
Solid Freeform Fabrication (SFF) processes have demonstrated the ability to produce parts with locally controlled composition. To exploit this potential, methods to represent and exchange parts with varying local composition need to be proposed and evaluated. In modeling such parts efficiently, any such method should provide a concise and accurate description of all of the relevant information about the part with minimal cost in terms of storage. To address these issues, several approaches to modeling Functionally Graded Material (FGM) objects are evaluated based on their memory requirements. Through this research, an information pathway for processing FGM objects based on image processing is proposed. This pathway establishes a clear separation between design of FGM objects, their processing, and their fabrication. Similar to how an image is represented by a continuous vector-valued function of the intensity of the primary colors over a two-dimensional space, an FGM object is represented by a vector-valued function spanning a Material Space, defined over the three-dimensional Build Space. Therefore, the Model Space for FGM objects consists of a Build Space and a Material Space. The task of modeling and designing an FGM object, therefore, is simply to accurately represent the function m(x), where x ∈ Build Space. Data structures for representing FGM objects are then described and analyzed, including a voxel-based structure, the finite element method, and the extension of the Radial-Edge and Cell-Tuple-Graph data structures in order to represent spatially varying properties. All of the methods are capable of defining the function m(x), but each does so in a different way. Along with introducing each data structure, the storage cost for each is derived in terms of the number of instances of each of its fundamental classes required to represent an object.
In order to determine the optimal data structure to model FGM objects, the storage cost associated with each data structure for representing several hypothetical models is calculated. Although these models are simple in nature, their curved geometries and regions of both piece-wise constant and non-linearly graded compositions reflect the features expected to be found in real applications. In each case, the generalized cellular methods are found to be optimal, accurately representing the intended design.
by Todd Robert Jackson.
Ph.D.
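The voxel-based representation evaluated in the thesis can be sketched as sampling m(x) on a regular grid. The code below is a hedged illustration with a hypothetical two-material composition, not the thesis's data structure.

```python
# The material function m(x) over Build Space is sampled on a regular
# nx * ny * nz grid, each voxel holding a composition vector in Material
# Space. Storage is O(nx * ny * nz) regardless of how simple the grading
# is, which is the kind of cost the memory analysis above quantifies.

def voxel_fgm(nx, ny, nz, m):
    """Sample m(x, y, z) -> composition tuple on a grid over [0, 1]^3."""
    return [[[m(i / (nx - 1), j / (ny - 1), k / (nz - 1))
              for k in range(nz)]
             for j in range(ny)]
            for i in range(nx)]

# Example: composition graded linearly along x between materials A and B.
grid = voxel_fgm(5, 2, 2, lambda x, y, z: (1.0 - x, x))
```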
5

Harutunian, Vigain. « Representation methods for an axiomatic design process software ». Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/39768.

Full text
6

Cureton, Paul. « Drawing in landscape architecture : fieldwork, poetics, methods, translation and representation ». Thesis, Manchester Metropolitan University, 2014. http://e-space.mmu.ac.uk/580030/.

Full text
Abstract:
By analysing landscape architectural representation, particularly drawing, the thesis contribution will develop the mode and process of making - poiesis: between production and representation. Extending the work of James Corner on drawing within landscape architecture (1992), the thesis will develop a positive hermeneutics from the novelist Italo Calvino (1997) through which this agency of drawing can be understood and conceived. From this framework of operation, a number of drawing methods are developed - particularly heuristics and scoring - which create a positive valence for landscape architectural production. The focus lies in the process of translation of drawing into landscape, or its process of ‘becoming’ (Vesely 2006, Evans 1996, 2000, Deleuze 1992). This focus is contextualised, amongst others, by the work of Paolo Soleri (1919-2013), Wolf Hilbertz (1938-2007) and Lawrence Halprin (1916-2009). The agency of drawing is situated in broader theories of space and ‘everyday life’, particularly by extracting critical neo-Marxist notions and readings of social productions of space as found in Henri Lefebvre (1901-1991) (De Certeau 1984, 1998, Lefebvre 1991, 1996, 2003, Soja 1996, 2000 and Harvey 1989, 1996). The thesis contribution to knowledge will thus chart drawing use, communication, alternative strategies, and new concepts of urban environments; a ‘poetic meditation on existence’ (Kundera 1987). This very movement and ‘becoming’, whilst analysed in each separate component, has yet to be collectively discussed in a constructive and meaningful way. This in turn reflects back on the role of representation in the shaping and conception of space - this is the role of drawing in landscape architecture.
This knowledge is enabled using methods of interdisciplinary exhibition, educational modules, oral history interviews and the history of professional landscape architecture practices, as well as by deploying a visual literacy method within the thesis (Dee 2001, 2004).
7

Zhou, Ying Fu. « A study for orbit representation and simplified orbit determination methods ». Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15895/1/Ying_Fu_Zhou_Thesis.pdf.

Full text
Abstract:
This research effort is concerned with the methods of simplified orbit determination and orbit representation and their applications for Low Earth Orbit (LEO) satellite missions, particularly addressing the operational needs of the FedSat mission. FedSat is the first Australian-built satellite in over thirty years. The microsatellite is approximately 50 cm cubed with a mass of 58 kg. The satellite was successfully placed into a low-earth near-polar orbit at an altitude of 780 km by the Japanese National Space Development Agency (NASDA) H-IIA launch vehicle on 14 December 2002. Since then, it has been streaming scientific data to its ground station in Adelaide almost daily. This information is used by Australian and international researchers to study space weather, to help improve the design of space computers, communication systems and other satellite technology, and for research into navigation and satellite tracking. This research effort addresses four practical issues regarding the FedSat mission and operations. First, unlike most satellite missions, the GPS receiver onboard FedSat operates in a duty-cycle mode due to the limitations of the FedSat power supply. This causes significant difficulties for orbit tracking, Precise Orbit Determination and scientific applications. A covariance analysis was performed before the mission launch to assess the orbit performance under different operational modes. The thesis presents the analysis methods and results. Second, FedSat supports Ka-band tracking experiments that require a pointing accuracy of 0.03 degree. The QUT GPS group is obligated to provide the GPS precise orbit solution to meet this requirement. Ka-band tracking requires the satellite's orbital position at any instant in time with respect to any of the observation stations.
Because orbit determination and prediction software only provide satellite orbital data at discrete time points, it is necessary to find a way to represent the satellite orbit as a continuous trajectory from discrete observation data, able to yield the position of the satellite at the time of interest. For this purpose, an orbit interpolation algorithm using the Chebyshev polynomial was developed and applied to Ka-band tracking applications. The thesis describes the software and results. Third, since the launch of FedSat, investigators have received much flight GPS data. Some research was invested in the analysis of FedSat orbit performance, GPS data quality and the quality of the onboard navigation solutions. Studies have revealed that there are many gross errors in the FedSat onboard navigation solution (ONS). Although the 1-sigma accuracy of each component is about 20 m, more than 11% of positioning errors fall outside ±50 m, and 5% of the errors fall outside the 100 m bound. The 3D RMS values are 35 m, 87 m, and 173 m for the above three cases respectively. The FedSat ONS uncertainties are believed to be approximately three times greater than those from other satellite missions. Due to the high percentage of outlier solutions, it would be dangerous to use these without first applying data detection and exclusion procedures. Therefore, this thesis presents two simplified orbit determination methods that can improve the ONS. One is the "geometric method", which makes use of delta-position solutions derived from carrier phase differences between two epochs to smooth the code-based navigation solutions. The algorithms were tested using SAC-C GPS data and showed some improvement. The second method is the "dynamic method", which uses orbit dynamics information for orbit improvements. Fourth, the FedSat ground tracking team at Adelaide uses the NORAD TLE orbit for daily FedSat tracking.
Research was undertaken to convert an orbit trajectory into these Two Line Elements (TLE). Algorithms for the estimation of TLE solutions from the FedSat onboard GPS navigation solutions are outlined. Numerical results have shown that the effects of the unmodelled forces/perturbations in the SGP4 models for FedSat orbit determination would reach a level of ±1000 m. This only includes the orbit representation errors with TLE data sets. The total FedSat orbit propagation error should include both the orbit propagation and orbit representation terms. The analysis also demonstrates that the orbit representation error can be reduced to the ±200 m and ±100 m levels with the EGM4x4 and EGM10x10 gravity field models respectively. This can meet the requirements for Ka-band tracking. However, a simplified tracking program based on numerical integration has to be developed to replace the SGP4 models.
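The Chebyshev orbit-interpolation idea above can be sketched in a few lines with synthetic positions rather than FedSat data; `numpy.polynomial.chebyshev` provides the fit and evaluation routines.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Fit a Chebyshev polynomial to discrete orbit positions so the position
# at an arbitrary epoch can be evaluated from a continuous representation.
# The data here are a mock orbit coordinate, not FedSat values.

t = np.linspace(0.0, 100.0, 20)              # sample epochs (s)
x = 7000.0 * np.cos(2 * np.pi * t / 6000.0)  # mock orbit coordinate (km)

coeffs = C.chebfit(t, x, deg=8)     # least-squares Chebyshev fit on the arc
x_interp = C.chebval(42.0, coeffs)  # position at an epoch between samples

# For a smooth arc the representation error at the samples stays tiny.
assert abs(C.chebval(t[3], coeffs) - x[3]) < 1e-6
```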
8

Zhou, Ying Fu. « A Study For Orbit Representation And Simplified Orbit Determination Methods ». Queensland University of Technology, 2003. http://eprints.qut.edu.au/15895/.

Full text
9

Karmakar, Priyabrata. « Effective and efficient kernel-based image representations for classification and retrieval ». Thesis, Federation University Australia, 2018. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/165515.

Full text
Abstract:
Image representation is a challenging task. In particular, in order to obtain better performances in different image processing applications such as video surveillance, autonomous driving, crime scene detection and automatic inspection, effective and efficient image representation is a fundamental need. The performance of these applications usually depends on how accurately images are classified into their corresponding groups or how precisely relevant images are retrieved from a database based on a query. Accuracy in image classification and precision in image retrieval depend on the effectiveness of image representation. Existing image representation methods have some limitations. For example, spatial pyramid matching, which is a popular method incorporating spatial information in image-level representation, has not been fully studied to date. In addition, the strengths of pyramid match kernel and spatial pyramid matching are not combined for better image matching. Kernel descriptors based on gradient, colour and shape overcome the limitations of histogram-based descriptors, but suffer from information loss, noise effects and high computational complexity. Furthermore, the combined performance of kernel descriptors has limitations related to computational complexity, higher dimensionality and lower effectiveness. Moreover, the potential of a global texture descriptor which is based on human visual perception has not been fully explored to date. Therefore, in this research project, kernel-based effective and efficient image representation methods are proposed to address the above limitations. An enhancement is made to spatial pyramid matching in terms of improved rotation invariance. This is done by investigating different partitioning schemes suitable to achieve rotation-invariant image representation and the proposal of a weight function for appropriate level contribution in image matching. 
In addition, the strengths of the pyramid match kernel and spatial pyramid matching are combined to enhance matching accuracy between images. The existing kernel descriptors are modified and improved to achieve greater effectiveness, minimal noise effects, lower dimensionality and lower computational complexity. A novel fusion approach is also proposed to combine the information related to all pixel attributes before the descriptor extraction stage. Existing kernel descriptors are based only on gradient, colour and shape information. In this research project, a texture-based kernel descriptor is proposed by modifying an existing popular global texture descriptor. Finally, all the contributions are evaluated in an integrated system. The performance of the proposed methods is qualitatively and quantitatively evaluated on two to four different publicly available image databases. The experimental results show that the proposed methods are more effective and efficient in image representation than existing benchmark methods.
Doctor of Philosophy
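The pyramid-matching idea underlying this thesis can be sketched with a weighted histogram-intersection kernel. The grid sizes and weights below follow the common 1x1 / 2x2 textbook convention, not the thesis's rotation-invariant partitioning scheme.

```python
# Per-level histograms of quantized features are compared with a
# weighted histogram-intersection kernel, as in standard spatial
# pyramid matching.

def intersection(h1, h2):
    """Histogram intersection: mass shared by two histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def pyramid_kernel(levels1, levels2, weights):
    # Coarser levels contribute with smaller weight, as in standard SPM.
    return sum(w * intersection(h1, h2)
               for w, h1, h2 in zip(weights, levels1, levels2))

# Two images, each as per-level histograms over a 2-word vocabulary
# (level 0: whole image; level 1: four cells, concatenated).
img_a = [[4, 2], [1, 1, 2, 0, 1, 1, 0, 0]]
img_b = [[3, 3], [1, 0, 2, 1, 0, 2, 0, 1]]
score = pyramid_kernel(img_a, img_b, weights=[0.5, 1.0])
```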
10

Ceci, Marcello <1983>. « Interpreting Judgements using Knowledge Representation Methods and Computational Models of Argument ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/6106/1/Marcello_Ceci_tesi.pdf.

Full text
Abstract:
The goal of the present research is to define a Semantic Web framework for precedent modelling, by using knowledge extracted from text, metadata, and rules, while maintaining a strong text-to-knowledge morphism between legal text and legal concepts, in order to fill the gap between a legal document and its semantics. The framework is composed of four different models that make use of standard languages from the Semantic Web stack of technologies: a document metadata structure, modelling the main parts of a judgement, and creating a bridge between a text and its semantic annotations of legal concepts; a legal core ontology, modelling abstract legal concepts and institutions contained in a rule of law; a legal domain ontology, modelling the main legal concepts in a specific domain concerned by case-law; an argumentation system, modelling the structure of argumentation. The input to the framework includes metadata associated with judicial concepts, and an ontology library representing the structure of case-law. The research relies on the previous efforts of the community in the field of legal knowledge representation and rule interchange for applications in the legal domain, in order to apply the theory to a set of real legal documents, stressing OWL axiom definitions as much as possible in order to enable them to provide a semantically powerful representation of the legal document and a solid ground for an argumentation system using a defeasible subset of predicate logics. It appears that some new features of OWL2 unlock useful reasoning features for legal knowledge, especially if combined with defeasible rules and argumentation schemes. The main task is thus to formalize legal concepts and argumentation patterns contained in a judgement, with the following requirement: to check, validate and reuse the discourse of a judge - and the argumentation he produces - as expressed by the judicial text.
11

Ceci, Marcello <1983>. « Interpreting Judgements using Knowledge Representation Methods and Computational Models of Argument ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/6106/.

Full text
12

Generazio, Hòa. « Consistency of representation for disaggregation from constructive to virtual combat simulations ». Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/30773.

Full text
13

Nygaard, Ranveig. « Shortest path methods in representation and compression of signals and image contours ». Doctoral thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2000. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1182.

Full text
Abstract:

Signal compression is an important problem encountered in many applications. Various techniques have been proposed over the years for adressing the problem. The focus of the dissertation is on signal representation and compression by the use of optimization theory, more shortest path methods.

Several new signal compression algorithms are presented. They are based on the coding of line segments which are used to spproximate, and thereby represent, the signal. These segments are fit in a way that is optimal given some constraints on the solution. By formulating the compession problem as a graph theory problem, shortest path methods can be applied in order to yeild optimal compresson with respect to the given constraints.

The approaches focused on in this dissertaion mainly have their origin in ECG comression and is often referred to as time domain compression methods. Coding by time domain methods is based on the idea of extracting a subset of significant signals samples to represent the signal. The key to a successful algoritm is a good rule for determining the most significant samples. Between any two succeeding samples in the extracted smaple set, different functions are applied in reconstruction of the signal. These functions are fitted in a wy that guaratees minimal reconstruction error under the gien constraints. Two main categories of compression schemes are developed:

1. Interpolating methods, in which it is insisted on equality between the original and reconstructed signal at the points of extraction.

2. Non-interpolating methods, where the interpolation restriction is relaxed.

Both first and second order polynomials are used in reconstruction of the signal. An approach is also developed where multiple error measures are applied within one compression algorithm.

The approach of extracting the most significant samples is further developed by measuring the samples in terms of the number of bits needed to encode them. In this way we develop an approach which is optimal in the rate-distortion sense.

Although the approaches developed are applicable to any type of signal, the focus of this dissertation is on the compression of electrocardiogram (ECG) signals and image contours. ECG signal compression has traditionally been
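The graph formulation described above can be sketched concretely (a minimal illustration of the general idea, not the thesis's actual algorithms): nodes are sample indices, an edge joins two samples whenever the connecting line segment stays within a tolerance of the intermediate samples, and a shortest path then retains the fewest segments meeting the constraint.

```python
from collections import deque

def max_seg_error(x, i, j):
    # Max absolute deviation of samples i..j from the line
    # interpolating (i, x[i]) and (j, x[j]).
    err = 0.0
    for k in range(i + 1, j):
        interp = x[i] + (x[j] - x[i]) * (k - i) / (j - i)
        err = max(err, abs(x[k] - interp))
    return err

def compress(x, eps):
    """Smallest set of sample indices whose linear interpolation
    stays within eps of x everywhere (interpolating method)."""
    n = len(x)
    # BFS on the DAG of admissible segments: unit edge costs, so a
    # breadth-first shortest path minimises the number of segments.
    prev = {0: None}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        if i == n - 1:
            break
        for j in range(i + 1, n):
            if j not in prev and max_seg_error(x, i, j) <= eps:
                prev[j] = i
                queue.append(j)
    # Walk back from the last sample to recover the kept indices.
    path, k = [], n - 1
    while k is not None:
        path.append(k)
        k = prev[k]
    return path[::-1]
```

With a non-zero tolerance or non-unit edge costs (e.g. bits per segment, as in the rate-distortion variant) the same construction applies with a weighted shortest-path algorithm in place of BFS.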

Styles APA, Harvard, Vancouver, ISO, etc.
14

Ezzat, Shannon. « Representation Growth of Finitely Generated Torsion-Free Nilpotent Groups : Methods and Examples ». Thesis, University of Canterbury. Mathematics and Statistics, 2012. http://hdl.handle.net/10092/7235.

Texte intégral
Résumé :
This thesis concerns representation growth of finitely generated torsion-free nilpotent groups. This involves counting equivalence classes of irreducible representations and embedding this counting into a zeta function. We call this the representation zeta function. We use a new, constructive method to calculate the representation zeta functions of two families of groups, namely the Heisenberg group over rings of quadratic integers and the maximal class groups. The advantage of this method is that it can be used to calculate the p-local representation zeta function for all primes p. The other commonly used method, known as the Kirillov orbit method, cannot be applied to certain exceptional cases. Specifically, we calculate some exceptional p-local representation zeta functions of the maximal class groups for some well-behaved exceptional primes. Also, we describe the Kirillov orbit method and use it to calculate various examples of p-local representation zeta functions for almost all primes p.
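The counting described here is standardly packaged as follows (a general formulation from the representation growth literature, not a result specific to this thesis): writing $r_n(G)$ for the number of twist-isoclasses of $n$-dimensional irreducible representations of $G$, the representation zeta function and its factorization into $p$-local factors are

```latex
\zeta_G(s) = \sum_{n=1}^{\infty} r_n(G)\, n^{-s}
           = \prod_{p\ \mathrm{prime}} \zeta_{G,p}(s),
\qquad
\zeta_{G,p}(s) = \sum_{k=0}^{\infty} r_{p^k}(G)\, p^{-ks},
```

so computing the $p$-local factors for all primes $p$, as the constructive method above does, determines the full zeta function.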
Styles APA, Harvard, Vancouver, ISO, etc.
15

Tråsdahl, Øystein. « High order methods for partial differential equations : geometry representation and coordinate transformations ». Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-17077.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
16

Botes, Danniëll. « Few group cross section representation based on sparse grid methods / Danniëll Botes ». Thesis, North-West University, 2012. http://hdl.handle.net/10394/8845.

Texte intégral
Résumé :
This thesis addresses the problem of representing few group, homogenised neutron cross sections as a function of state parameters (e.g. burn-up, fuel and moderator temperature, etc.) that describe the conditions in the reactor. The problem is multi-dimensional and the cross section samples, required for building the representation, are the result of expensive transport calculations. At the same time, practical applications require high accuracy. The representation method must therefore be efficient in terms of the number of samples needed for constructing the representation, storage requirements and cross section reconstruction time. Sparse grid methods are proposed for constructing such an efficient representation. Approximation through quasi-regression as well as polynomial interpolation, both based on sparse grids, were investigated. These methods have built-in error estimation capabilities and methods for optimising the representation, and scale well with the number of state parameters. An anisotropic sparse grid integrator based on Clenshaw-Curtis quadrature was implemented, verified and coupled to a pre-existing cross section representation system. Some ways to improve the integrator’s performance were also explored. The sparse grid methods were used to construct cross section representations for various Light Water Reactor fuel assemblies. These reactors have different operating conditions, enrichments and state parameters and therefore pose different challenges to a representation method. Additionally, an example where the cross sections have a different group structure, and were calculated using a different transport code, was used to test the representation method. The built-in error measures were tested on independent, uniformly distributed, quasi-random sample points. In all the cases studied, interpolation proved to be more accurate than approximation for the same number of samples. 
The primary source of error was found to be the Xenon transient at the beginning of an element’s life (BOL). To address this, the domain was split along the burn-up dimension into “start-up” and “operating” representations. As an alternative, the Xenon concentration was set to its equilibrium value for the whole burn-up range. The representations were also improved by applying anisotropic sampling. It was concluded that interpolation on a sparse grid shows promise as a method for building a cross section representation of sufficient accuracy to be used for practical reactor calculations with a reasonable number of samples.
Thesis (MSc Engineering Sciences (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2013.
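As background on the quadrature rule mentioned above, the Clenshaw-Curtis points are nested across levels, which is what makes them attractive building blocks for sparse grid constructions (a generic sketch of the standard rule, not code from the thesis):

```python
import math

def cc_nodes(level):
    """Clenshaw-Curtis points on [-1, 1] at a given level:
    n = 2**level + 1 points for level >= 1, and the single
    point 0 at level 0. The levels are nested, so samples
    from coarse levels are reused by finer ones."""
    if level == 0:
        return [0.0]
    n = 2 ** level + 1
    return [-math.cos(math.pi * j / (n - 1)) for j in range(n)]
```

A sparse grid in d dimensions then combines tensor products of these one-dimensional rules over level multi-indices of bounded total level, which keeps the number of transport-calculation samples far below that of a full grid.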
Styles APA, Harvard, Vancouver, ISO, etc.
17

Jerrelind, Jakob. « Tracking of Pedestrians Using Multi-Target Tracking Methods with a Group Representation ». Thesis, Linköpings universitet, Reglerteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-172579.

Texte intégral
Résumé :
Multi-target tracking (MTT) methods estimate the trajectories of targets from noisy measurements; therefore, they can be used to handle the pedestrian-vehicle interaction for a moving vehicle. MTT plays an important part in assisting the Automated Driving System (ADS) and the Advanced Driving Assistance System (ADAS) to avoid pedestrian-vehicle collisions. ADAS and ADS rely on correct estimates of the pedestrians' position and velocity to avoid collisions or unnecessary emergency braking of the vehicle. Therefore, to help the risk evaluation in these systems, the MTT needs to provide accurate and robust information about the trajectories (in terms of position and velocity) of the pedestrians in different environments. Several factors can make this problem difficult to handle; for instance, in crowded environments the pedestrians can suffer from occlusion or missed detections. Classical MTT methods, such as the global nearest neighbour filter, can in crowded environments fail to provide robust and accurate estimates. Therefore, more sophisticated MTT methods should be used to increase the accuracy and robustness and, in general, to improve the tracking of targets close to each other. The aim of this master's thesis is to improve the situational awareness with respect to pedestrians and pedestrian-vehicle interactions. In particular, the task is to investigate whether the GM-PHD and the GM-CPHD filter improve pedestrian tracking in urban environments, compared to other methods presented in the literature. The proposed task can be divided into three parts that deal with different issues. The first part regards the significance of different clustering methods and how the pedestrians are grouped together. The implemented algorithms are the distance partitioning algorithm and the Gaussian mean shift clustering algorithm.
The second part regards how modifications of the measurement noise levels and the survival of targets based on the target location, with respect to the vehicle's position, can improve the tracking performance and remove unwanted estimates. Finally, the last part regards the impact the filter estimates have on the tracking performance and how important accurate detections of the pedestrians are to improve the overall tracking. From the results, the distance partitioning algorithm is the preferable one, since it does not split larger groups. It is also seen that the proposed filters provide correct estimates of pedestrians in events of occlusion or missed detections but suffer from false estimates close to the ego vehicle due to uncertain detections. For the comparison regarding the improvements, a classic standard MTT filter applying the global nearest neighbour method for the data association is used as the baseline. To conclude, the GM-CPHD filter proved to be the best of the two proposed filters in this thesis work and performed better also compared to other methods known in the literature. In particular, its estimates survived for a longer period of time in the presence of missed detections or occlusion. The conclusion of this thesis work is that the GM-CPHD filter improves the tracking performance and the situational awareness of the pedestrians.
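The distance partitioning favoured in the conclusion can be sketched as follows (a simplified 2-D illustration of the general idea, not the thesis's implementation): detections are linked whenever their distance is below a threshold, and the transitive closure of these links defines the cells, which is why large groups are never split.

```python
def distance_partition(points, d_max):
    """Group 2-D detections into cells: two points share a cell if
    they are linked by a chain of pairwise distances <= d_max."""
    n = len(points)
    parent = list(range(n))  # union-find forest over detections

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    # Union every pair closer than the gating distance.
    for i in range(n):
        for j in range(i + 1, n):
            if dist(points[i], points[j]) <= d_max:
                parent[find(i)] = find(j)

    # Collect the connected components as cells.
    cells = {}
    for i in range(n):
        cells.setdefault(find(i), []).append(i)
    return list(cells.values())
```

A mean-shift clustering, by contrast, can assign the ends of an elongated group to different modes, which is consistent with the observation above that it may split larger groups.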
Styles APA, Harvard, Vancouver, ISO, etc.
18

Csige, Tamás. « K-theoretic methods in the representation theory of p-adic analytic groups ». Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17697.

Texte intégral
Résumé :
Let G be a compact p-adic analytic group with no element of order p such that it is the direct sum of a torsion free compact p-adic analytic group H whose Lie algebra is split semisimple and an abelian p-adic analytic group Z of dimension n. In chapter 3, we show that if M is a finitely generated torsion module over the Iwasawa algebra of G with no non-zero pseudo-null submodule, then the image q(M) of M via the quotient functor q is completely faithful if and only if M is torsion free over the Iwasawa algebra of Z. Here the quotient functor q is the unique functor from the category of modules over the Iwasawa algebra of G to the quotient category with respect to the Serre subcategory of pseudo-null modules. In chapter 4, we show the following: Let M, N be two finitely generated modules over the Iwasawa algebra of G such that they are objects of the category Q of those finitely generated modules over the Iwasawa algebra of G which are also finitely generated as modules over the Iwasawa algebra of H. Assume that q(M) is completely faithful and [M] = [N] in the Grothendieck group of Q. Then q(N) is also completely faithful. In chapter 6, we show that if G is any compact p-adic analytic group with no element of order p, then the Grothendieck groups of the algebras of continuous distributions and of bounded distributions are isomorphic to c copies of the ring of integers, where c denotes the number of p-regular conjugacy classes in the quotient of G by an open normal uniform pro-p subgroup H of G.
Styles APA, Harvard, Vancouver, ISO, etc.
19

Haines, Cory. « Race, Gender, and Sexuality Representation in Contemporary Triple-A Video Game Narratives ». Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/94573.

Texte intégral
Résumé :
By conducting both qualitative and quantitative analysis of data from interviews and game content, I examine representations of race, gender, and sexuality in contemporary video-game narratives. I use data from interviews to show how people view their representations in this medium and to set categorical criteria for an interpretive content analysis. I analyze a sample of top-selling narrative-driven video games in the United States released from 2016-2019. My content coding incorporates the aforementioned interview data as well as theoretical-based and intersectional concepts on video game characters and their narratives. The content analysis includes measures of narrative importance, narrative role, positivity of representation, and demographic categories of characters, though the scale of this study may not allow for a full test of intersectional theory of links between demographics and roles. Interview and content analysis results suggest an overrepresentation of white characters and extreme under-representation of non-white women.
Styles APA, Harvard, Vancouver, ISO, etc.
20

Levy, Joseph Patrick. « Methods of modelling the mental representation of individuals derived from descriptions in text ». Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/19049.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
21

Moorhouse, Michael John. « New methods of protein representation and identification of their meaning in a structural context ». Thesis, University of Birmingham, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399033.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
22

Shanmugam, Divy. « A tale of two time series methods : representation learning for improved distance and risk metrics ». Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119575.

Texte intégral
Résumé :
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 43-49).
In this thesis, we present methods in representation learning for time series in two areas: metric learning and risk stratification. We focus on metric learning due to the importance of computing distances between examples in learning algorithms and present Jiffy, a simple and scalable distance metric learning method for multivariate time series. Our approach is to reframe the task as a representation learning problem -- rather than design an elaborate distance function, we use a CNN to learn an embedding such that the Euclidean distance is effective. Experiments on a diverse set of multivariate time series datasets show that our approach consistently outperforms existing methods. We then focus on risk stratification because of its clinical importance in identifying patients at high risk for an adverse outcome. We use segments of a patient's ECG signal to predict that patient's risk of cardiovascular death within 90 days. In contrast to other work, we work directly with the raw ECG signal to learn a representation with predictive power. Our method produces a risk metric for cardiovascular death with state-of-the-art performance when compared to methods that rely on expert-designed representations.
by Divya Shanmugam.
M. Eng.
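The core idea behind Jiffy, learning an embedding so that plain Euclidean distance becomes an effective metric, can be shown schematically (the `embed` function below is a hypothetical hand-crafted stand-in for the learned CNN, for illustration only):

```python
import math

def embed(series):
    """Stand-in for the learned embedding: map a time series to a
    fixed-length vector (simple summary features here; the thesis
    learns this mapping with a CNN instead of hand-crafting it)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    slope = (series[-1] - series[0]) / (n - 1)
    return (mean, math.sqrt(var), slope)

def embedded_distance(a, b):
    """Once the embedding is fixed, the metric is just the
    Euclidean distance between embedded series."""
    return math.dist(embed(a), embed(b))
```

The point of the reframing is that all the modelling effort goes into `embed`; downstream algorithms (nearest neighbour search, clustering) keep using the ordinary Euclidean distance unchanged.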
Styles APA, Harvard, Vancouver, ISO, etc.
23

Handschuh, Stefan. « Numerical methods in Tensor Networks ». Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-159672.

Texte intégral
Résumé :
In many applications that deal with high dimensional data, it is important to not store the high dimensional object itself, but its representation in a data sparse way. This aims to reduce the storage and computational complexity. There is a general scheme for representing tensors with the help of sums of elementary tensors, where the summation structure is defined by a graph/network. This scheme allows to generalize commonly used approaches in representing a large amount of numerical data (that can be interpreted as a high dimensional object) using sums of elementary tensors. The classification does not only distinguish between elementary tensors and non-elementary tensors, but also describes the number of terms that is needed to represent an object of the tensor space. This classification is referred to as tensor network (format). This work uses the tensor network based approach and describes non-linear block Gauss-Seidel methods (ALS and DMRG) in the context of the general tensor network framework. Another contribution of the thesis is the general conversion of different tensor formats. We are able to efficiently change the underlying graph topology of a given tensor representation while using the similarities (if present) of both the original and the desired structure. This is an important feature in case only minor structural changes are required. In all approximation cases involving iterative methods, it is crucial to find and use a proper initial guess. For linear iteration schemes, a good initial guess helps to decrease the number of iteration steps that are needed to reach a certain accuracy, but it does not change the approximation result. For non-linear iteration schemes, the approximation result may depend on the initial guess. This work introduces a method to successively create an initial guess that improves some approximation results. This algorithm is based on successive rank 1 increments for the r-term format. 
There are still open questions about how to find the optimal tensor format for a given general problem (e.g. storage, operations, etc.). For instance, in the case where a physical background is given, it might be efficient to use this knowledge to create a good network structure. There is, however, no guarantee that a better (with respect to the problem) representation structure does not exist.
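The successive rank 1 increments mentioned above can be sketched in the simplest tensor network, a matrix (a generic illustration under that simplification, not the thesis's implementation): an ALS sweep alternately solves the two linear least-squares problems for one rank-1 term, and the greedy r-term initial guess peels such terms off the residual one at a time.

```python
def rank1_als(A, iters=50):
    """One rank-1 term by alternating least squares: fix v and solve
    min ||A - u v^T||_F for u, then fix u and solve for v."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    u = [0.0] * m
    for _ in range(iters):
        vv = sum(x * x for x in v)
        u = [sum(A[i][j] * v[j] for j in range(n)) / vv for i in range(m)]
        uu = sum(x * x for x in u)
        v = [sum(A[i][j] * u[i] for i in range(m)) / uu for j in range(n)]
    return u, v

def greedy_r_term(A, r):
    """Successive rank-1 increments: greedily subtract each fitted
    rank-1 term from the residual, building an r-term initial guess."""
    terms, R = [], [row[:] for row in A]
    for _ in range(r):
        u, v = rank1_als(R)
        terms.append((u, v))
        R = [[R[i][j] - u[i] * v[j] for j in range(len(v))]
             for i in range(len(u))]
    return terms
```

The same alternating pattern, freezing all factors but one and solving a linear problem, is what the block Gauss-Seidel (ALS/DMRG) methods do on general tensor networks.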
Styles APA, Harvard, Vancouver, ISO, etc.
24

MARCHISIO, VALERIO. « Malliavin representation formulas for Asian options in a jump-diffusion model ». Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2009. http://hdl.handle.net/2108/1001.

Texte intégral
Résumé :
We first develop a unifying Malliavin calculus in a jump-diffusion context, by taking into account all the randomness involved (Brownian motion, jump times and jump amplitudes) and by stating an integration by parts formula which gives the starting point of our work. The results are then applied to study representation formulas both for sensitivities (delta) and for conditional expectations (in terms of non-conditional ones) for a two-dimensional process Z_t=(X_t,Y_t), in which X stands for a jump diffusion and Y_t=\int_0^t X_r dr. Finally, the link with problems arising from Finance (price/delta of Asian options) is studied. Several examples are analyzed in detail and equipped with numerical studies.
Styles APA, Harvard, Vancouver, ISO, etc.
25

Karagevrekis, Mersini. « Linguistic study of methods of representation of speech and thought in selected Modern Greek literary texts ». Thesis, University of Leeds, 1992. http://etheses.whiterose.ac.uk/21073/.

Texte intégral
Résumé :
This thesis attempts a systematic analysis of the stylistic devices used in Modern Greek fictional writing for the representation of a character's speech and thought. It specifically focuses on the study of those techniques in which the narrator's overtness is kept to a minimum, i.e. Free Indirect Discourse, also known as "style indirect libre", and Quoted Monologue. The present study consists of four chapters. Chapter 1 discusses areas of narrative structure of which many contradictory accounts are offered, and critics' attempts to define modes of consciousness. It also briefly outlines the Modern Greek tense system as a basis for the subsequent analysis of Modern Greek fictional devices. Chapters 2, 3 and 4 are the analytical chapters where speech and thought presentation techniques, ranging from the more diegetic to the more mimetic, are investigated. In the analysis speech and thought presentation modes are treated separately not only for reasons of clarity but because their effects are different. My examples are taken from selected nineteenth and twentieth century Modern Greek literary texts. The passages are given in the original Greek but a translation in English is also included. Chapter 2 deals with speech. All five speech categories (i.e. Narrative Report of Speech Acts, Indirect Speech, Free Indirect Speech, Direct Speech and Free Direct Speech) are examined but special emphasis is placed on the analysis of Free Indirect Speech and on the identification of its markers in first and third-person narratives. Chapter 3 specifically concentrates on the analysis of Free Indirect Thought in first and third-person narratives and on the isolation of its indices. Its effects are also examined. Chapter 4 studies the technique of Quoted Monologue in first and third-person narratives. It also includes a discussion of its effects. My conclusion summarizes the results of the research and underlines the necessity for further investigation in this area.
Styles APA, Harvard, Vancouver, ISO, etc.
26

Feng, Yunyi. « Identification of Medical Coding Errors and Evaluation of Representation Methods for Clinical Notes Using Machine Learning ». Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1555421482252775.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
27

Eason, Christina. « Their Lordships divided ? : the representation of women in the transitional House of Lords 1999-2009 ». Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/their-lordships-divided-the-representation-of-women-in-the-transitional-house-of-lords-19992009(77d08d32-d7e2-4b77-afc8-dedd1987d4b7).html.

Texte intégral
Résumé :
This thesis set out to discern how women's representation, as a multi-faceted concept and process, plays out in the context of the House of Lords. The primary motivation of this inquiry concerned the reality that women are persistently under-represented in political chambers worldwide. Beyond this, scholarship has overlooked the site of the House of Lords despite significant advances made in women's presence that facilitate closer analysis. This is also compounded by the status of the chamber itself: in its 'transitional' phase post the passing of the House of Lords Act 1999 the chamber is suggested to act with greater legitimacy and effectiveness. Finally concentration upon the representation of women in the transitional House of Lords is pertinent as the chamber remains in a state of flux and there is an opportunity to prioritise women's representation as a key plank of the reform agenda. Normative feminist interpretations of representation are the primary frameworks of analysis. Methodologically, this research inquiry synthesised and triangulated the use of quantitative and qualitative research techniques in order to unpack the processes and influences upon all dimensions of women's political representation in the House of Lords. This helped to present a sufficiently nuanced analysis. There have been obvious attempts to numerically feminise the chamber, although there are systemic de facto and de jure reserved seats for men in the chamber which guard against radical improvements in women's descriptive presence. Women peers undertake important roles and the House of Lords maintains a culture and institutional norms that are befitting for women and feminised styles of politics which is positive for the symbolic representation of women. Finally, women peers actively seek to represent women through the agenda-setting features of the Lords, although the way this is manifested is mediated by political affiliation. 
The opportunities to substantively represent women through the legislative features of the House of Lords are narrower, although both male and female peers have successfully influenced legislative output to act for women.
Styles APA, Harvard, Vancouver, ISO, etc.
28

Tissier, Julien. « Improving methods to learn word representations for efficient semantic similarites computations ». Thesis, Lyon, 2020. http://www.theses.fr/2020LYSES008.

Texte intégral
Résumé :
Many natural language processing applications rely on word representations (also called word embeddings) to achieve state-of-the-art results. These numerical representations of the language should encode both syntactic and semantic information to perform well in downstream tasks. However, common models (word2vec, GloVe) use a generic corpus like Wikipedia to learn them and they therefore lack specific semantic information. Moreover, it requires a large memory space to store them because the number of representations to save can be in the order of a million. The topic of my thesis is to develop new learning algorithms that both improve the semantic information encoded within the representations and make them require less memory space for storage and use in NLP tasks. The first part of my work is to improve the semantic information contained in word embeddings. I developed dict2vec, a model that uses additional information from online lexical dictionaries when learning word representations. The dict2vec word embeddings perform about 15% better than the embeddings learned by other models on word semantic similarity tasks. The second part of my work is to reduce the memory size of the embeddings. I developed an architecture based on an autoencoder to transform commonly used real-valued embeddings into binary embeddings, reducing their size in memory by 97% with only a loss of about 2% in accuracy in downstream NLP tasks.
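The storage arithmetic behind the 97% figure can be illustrated with a toy binarization (a simple sign threshold standing in for the learned autoencoder; `binarize` and `hamming_sim` are hypothetical names): a 300-dimensional float32 embedding takes 1200 bytes, while 300 bits fit in about 38 bytes, a reduction of roughly 97%.

```python
def binarize(vec):
    """Pack a real-valued embedding into one integer, one bit per
    dimension. Here the bit is the sign of the coordinate; the
    thesis instead learns the mapping with an autoencoder."""
    bits = 0
    for i, x in enumerate(vec):
        if x > 0:
            bits |= 1 << i
    return bits

def hamming_sim(a, b, dim):
    """Similarity between two binary codes = fraction of matching
    bits (1 minus the normalised Hamming distance)."""
    return 1 - bin(a ^ b).count("1") / dim
```

Binary codes also make similarity computations cheap: a Hamming distance is one XOR and a popcount, instead of a floating-point dot product.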
Styles APA, Harvard, Vancouver, ISO, etc.
29

Waswa, Anne, et Mitchelle Wambua. « Teaching and Learning of Mathematics in Sweden : Methods, Resources and Assessment in Mathematics ». Thesis, Linnéuniversitetet, Institutionen för utbildningsvetenskap (UV), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-45007.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
30

Dragiev, Stanimir [Verfasser], et Marc [Akademischer Betreuer] Toussaint. « An object representation and methods for uncertainty-aware shape estimation and grasping / Stanimir Dragiev. Betreuer : Marc Toussaint ». Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2015. http://d-nb.info/1071089145/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
31

Li, Bowei. « Implementation of full permeability tensor representation in a dual porosity reservoir simulator ». Access restricted to users with UT Austin EID Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3034930.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
32

Schiltz, Gary. « Representation of knowledge using Sowa's conceptual graphs : an implementation of a set of tools ». Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9951.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
33

Yus, Diego. « Long-term vehicle movement prediction using Machine Learning methods ». Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233556.

Texte intégral
Résumé :
The problem of location or movement prediction can be described as the task of predicting the future location of an item using the past locations of that item. It is a problem of increasing interest with the arrival of location-based services and autonomous vehicles. Even if short-term prediction is more commonly studied, especially in the case of vehicles, long-term prediction can be useful in many applications like scheduling, resource managing or traffic prediction. In this master thesis project, I present a feature representation of movement that can be used for learning of long-term movement patterns and for long-term movement prediction both in space and time. The representation relies on periodicity in data and is based on weighted n-grams of windowed trajectories. The algorithm is evaluated on heavy transport vehicle movement data to assess its ability to retrieve, from a search index, vehicles that with high probability will move along a route that matches a desired transport mission. Experimental results show the algorithm is able to achieve a consistently low prediction distance error rate across different transport lengths in a limited geographical area under business operation conditions. The results also indicate that the total population of vehicles in the index is a critical factor in the algorithm's performance and therefore in its real-world applicability.
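The core of the representation described in this abstract, weighted n-grams over windowed trajectories, can be illustrated with a minimal sketch; the function name, the fixed-size window scheme and the use of raw counts as weights are illustrative assumptions, not the thesis implementation:

```python
from collections import Counter

def windowed_ngrams(trajectory, window, n):
    """Split a trajectory (a sequence of discrete locations, e.g. grid
    cells) into fixed-size windows, then count the n-grams inside each
    window.  Returns a Counter mapping n-gram -> weight (raw frequency)."""
    counts = Counter()
    for start in range(0, len(trajectory), window):
        chunk = trajectory[start:start + window]
        for i in range(len(chunk) - n + 1):
            counts[tuple(chunk[i:i + n])] += 1
    return counts

# Toy example: a vehicle repeatedly driving the cell sequence A-B-C.
traj = ["A", "B", "C"] * 4
grams = windowed_ngrams(traj, window=6, n=2)
```

A real system would first discretize GPS traces into cells and then index each vehicle's n-gram weights so that vehicles matching a desired route can be retrieved.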
34

Tomczak, Jakub. « Algorithms for knowledge discovery using relation identification methods ». Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2563.

Full text
Abstract:
In this work, a coherent survey of problems connected with relational knowledge representation and of methods for achieving it is presented. The proposed approach is demonstrated in three applications: an economic case, a biomedical case and a benchmark dataset. All crucial definitions are formulated and three main methods for the relation identification problem are described. Moreover, different identification methods are presented for specific relational models and observation types.
Double Diploma Programme; Polish supervisor: Prof. Jerzy Świątek, Wrocław University of Technology
35

Hannu, Louise. « Representation inom barnens litterära värld : Normalisera icke-normativt innehåll ». Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79712.

Full text
Abstract:
A lot of children's books in today's society are meant to create diversity with more inclusive content, but this often leads to content full of stereotypes. The books become stigmatized rather than representative of the diversity among different people. This is an issue because children learn a great deal from books, even in the technological world we live in right now. That is why the book Pim & Purpurfolket was made. This master thesis by Louise Hannu, at Luleå University of Technology, is about analyzing and defying the ongoing trend of stereotypical content in children's literature by using a norm-creative approach.
36

Fukumori, Ichiro. « Efficient representation of the hydrographic structure of the north Atlantic Ocean and aspects of the circulation from objective methods ». Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/52903.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 1989.
Includes bibliographical references (leaves 232-236).
by Ichiro Fukumori.
Ph.D.
37

Khac, Do Nguyen. « Situated cognition and Agile software development : A comparison of three methods ». Thesis, Linköpings universitet, Institutionen för datavetenskap, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-62824.

Full text
Abstract:
Agile programming methods have become popular in software development projects. These methods increase productivity and support teamwork processes. In this thesis, we have analyzed three well-known Agile methods - Scrum, Extreme Programming and Crystal Orange - from the perspective of situated cognition to investigate how well the methods support cognition. Specifically, we looked at how the methods aid memory and attention through the use of external representations. The study suggests that the methods support different aspects of situated cognition reasonably well. However, among the investigated methods, Scrum stands out due to aspects of task representation (progress charts), its approaches to externalize what-to-do (memory), and the means to focus on the important programming tasks for the day (attention).
38

Lundberg, Jacob. « Resource Efficient Representation of Machine Learning Models : investigating optimization options for decision trees in embedded systems ». Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162013.

Full text
Abstract:
Combining embedded systems and machine learning models is an exciting prospect. However, to target even the embedded systems with the most stringent resource requirements, the models have to be designed with care so as not to overwhelm them. Decision tree ensembles are targeted in this thesis. A benchmark model is created with LightGBM, a popular framework for gradient boosted decision trees. This model is first transformed and regularized with RuleFit, a LASSO regression framework. It is then further optimized with quantization and weight sharing, techniques used when compressing neural networks. The entire process is combined into a novel framework, called ESRule. The data used comes from the domain of frequency measurements in cellular networks. There is a clear use case where embedded systems can use the produced resource-optimized models. Compared with LightGBM, ESRule uses 72× less internal memory on average while simultaneously increasing predictive performance. The models use 4 kilobytes on average. The serialized variant of ESRule uses 104× less hard disk space than LightGBM. ESRule is also clearly faster at predicting a single sample.
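Quantization and weight sharing, as mentioned in the abstract, amount to replacing the many distinct leaf values of a tree ensemble with a small shared codebook of values plus compact integer indices. A rough sketch under the assumption of plain float leaves and a uniform codebook (this is not the ESRule implementation):

```python
def quantize_leaves(leaf_values, n_levels=16):
    """Map each leaf value to the nearest of n_levels shared values
    (a uniform codebook), so the ensemble only needs to store small
    integer indices plus one short table of floats."""
    lo, hi = min(leaf_values), max(leaf_values)
    step = (hi - lo) / (n_levels - 1) if hi > lo else 1.0
    codebook = [lo + i * step for i in range(n_levels)]
    indices = [round((v - lo) / step) for v in leaf_values]
    return codebook, indices

# Five leaf values compressed to a 4-entry codebook.
leaves = [0.12, 0.13, 0.87, 0.91, 0.5]
codebook, idx = quantize_leaves(leaves, n_levels=4)
restored = [codebook[i] for i in idx]
```

With 16 levels, each leaf needs only 4 bits plus its share of one 16-float table, instead of a full float per leaf; the reconstruction error is bounded by half the codebook step.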
39

Distel, Felix. « Learning Description Logic Knowledge Bases from Data Using Methods from Formal Concept Analysis ». Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-70199.

Full text
Abstract:
Description Logics (DLs) are a class of knowledge representation formalisms that can represent terminological and assertional knowledge using a well-defined semantics. Often, knowledge engineers are experts in their own fields, but not in logics, and require assistance in the process of ontology design. This thesis presents three methods that can extract terminological knowledge from existing data and thereby assist in the design process. They are based on similar formalisms from Formal Concept Analysis (FCA), in particular the Next-Closure Algorithm and Attribute-Exploration. The first of the three methods computes terminological knowledge from the data, without any expert interaction. The two other methods use expert interaction where a human expert can confirm each terminological axiom or refute it by providing a counterexample. These two methods differ only in the way counterexamples are provided.
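The Next-Closure algorithm referenced in this abstract enumerates all closed sets of a closure operator in lectic order. A compact, self-contained sketch for an arbitrary closure operator over a finite, ordered attribute set (the DL-specific closures of the thesis are abstracted away here):

```python
def next_closure(A, base, closure):
    """Return the lectically next closed set after A, or None when A is
    the last one.  base: list fixing the attribute order; closure: a
    function from sets of attributes to their closure."""
    for i in range(len(base) - 1, -1, -1):
        m = base[i]
        if m in A:
            A = A - {m}
        else:
            B = closure(A | {m})
            # Lectic test: B may not add any attribute smaller than m.
            if not any(x in B and x not in A for x in base[:i]):
                return B
    return None

def all_closed_sets(base, closure):
    """Enumerate every closed set, starting from the closure of the
    empty set and stepping with next_closure."""
    sets, A = [], closure(set())
    while A is not None:
        sets.append(A)
        A = next_closure(A, base, closure)
    return sets

# Toy closure operator: any set containing 'a' must also contain 'b'.
def cl(s):
    return s | {"b"} if "a" in s else set(s)

closed = all_closed_sets(["a", "b", "c"], cl)
```

For this toy operator the closed sets are exactly the six subsets of {a, b, c} that do not contain 'a' without 'b'.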
40

Wang, Yuancheng. « Performance of supertree methods for estimating species trees ». Thesis, University of Canterbury. Mathematics and Statistics, 2010. http://hdl.handle.net/10092/4644.

Full text
Abstract:
Phylogenetics is the study of ancestor-descendant relationships among different groups of organisms, for example species or populations of interest. The datasets involved are usually sequence alignments of various subsets of taxa for various genes. A major task in phylogenetics is to combine estimated gene trees from many sampled loci into an overall estimate of the species tree topology. Eventually, one can construct the tree of life that depicts the ancestor-descendant relationships for all known species around the world. In this study, we assume that gene tree discordance is solely due to incomplete lineage sorting under the multispecies coalescent model (Degnan and Rosenberg, 2009). If there is missing data or incomplete sampling in the datasets, then supertree methods can be used to assemble gene trees with different subsets of taxa into an estimated species tree topology. We examine the performance of the most commonly used supertree method (Wilkinson et al., 2009), namely matrix representation with parsimony (MRP), to explore its statistical properties in this setting. In particular, we show that MRP is not statistically consistent: an estimated species tree topology other than the true one is more likely to be returned by MRP as the number of gene trees increases. In some situations, using longer branch lengths, randomly deleting taxa or even introducing mutation can improve the performance of MRP so that the matching species tree topology is recovered more often. In conclusion, MRP is a supertree method able to handle large amounts of conflict in the input gene trees. However, MRP is not statistically consistent when gene trees arising from the multispecies coalescent model are used to estimate species trees.
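The matrix-representation step of MRP (the Baum-Ragan coding) recodes each clade of each gene tree as a binary character, with '?' for taxa missing from that tree; a parsimony search is then run on the resulting matrix. A hedged sketch of the recoding step only, assuming clades are given as plain taxon sets:

```python
def mrp_matrix(gene_trees, all_taxa):
    """Baum-Ragan coding: one column per (tree, clade) pair.
    gene_trees: list of (taxa_in_tree, clades) where each clade is a
    subset of taxa_in_tree.  Returns {taxon: row string over 0/1/?}."""
    rows = {t: [] for t in all_taxa}
    for taxa_in_tree, clades in gene_trees:
        for clade in clades:
            for t in all_taxa:
                if t not in taxa_in_tree:
                    rows[t].append("?")   # taxon absent from this gene tree
                elif t in clade:
                    rows[t].append("1")   # taxon inside the clade
                else:
                    rows[t].append("0")   # taxon in the tree, outside the clade
    return {t: "".join(r) for t, r in rows.items()}

# Two gene trees over overlapping taxon sets, one informative clade each.
trees = [({"a", "b", "c"}, [{"a", "b"}]),
         ({"b", "c", "d"}, [{"c", "d"}])]
matrix = mrp_matrix(trees, ["a", "b", "c", "d"])
```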
41

Csige, Tamás [Verfasser], Elmar [Gutachter] Große-Klönne, Peter [Gutachter] Schneider et Gergely [Gutachter] Zábrádi. « K-theoretic methods in the representation theory of p-adic analytic groups / Tamás Csige ; Gutachter : Elmar Große-Klönne, Peter Schneider, Gergely Zábrádi ». Berlin : Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://d-nb.info/1126004200/34.

Full text
42

Budinich, Renato [Verfasser], Gerlind [Akademischer Betreuer] Plonka-Hoch, Gerlind [Gutachter] Plonka-Hoch et Armin [Gutachter] Iske. « Adaptive Multiscale Methods for Sparse Image Representation and Dictionary Learning / Renato Budinich ; Gutachter : Gerlind Plonka-Hoch, Armin Iske ; Betreuer : Gerlind Plonka-Hoch ». Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2019. http://d-nb.info/1175625396/34.

Full text
43

Zokka, Herman Kankara. « Presentation and representation of environmental problems and problem-solving methods and processes in the Grade 10 Geography syllabus : a Namibian case study ». Thesis, Rhodes University, 2016. http://hdl.handle.net/10962/d1021253.

Full text
Abstract:
Environmental issues in Namibia are considered to be one of the major threats to the lives of the Namibian people (Namibia. Ministry of National Planning Commission [MNPC], 2004). This study explored problem solving as one of the teaching methods used in Grade 10 Geography syllabuses as a response to such environmental issues/risks. Geography provides learners with an understanding of the issues and risks in their world that need to be addressed in order to improve the quality of their lives and health of their environment. This study focused on how environmental problems and problem-solving methods are presented in the Namibian Grade 10 Geography syllabus and how these are represented and implemented through teacher intentionality and practice. The theoretical framework for this study was informed by two theories namely risk society and social constructivism. This study was conducted at three schools in the Rundu circuit in the Kavango region and one teacher was involved in the study at each school. This study was conducted within an interpretive research tradition and was qualitative in nature. The study used document analysis, focus group discussion and classroom observation as data generation methods. The findings of the study reveal that the complexity of environmental issues is highlighted in the syllabus and in teachers’ intentionality and practice. The findings also show that a limited variety of teaching methods were used in problem solving strategies. The study also found that problem solving was influenced by different constructivist learning principles. The study further found that limited numbers of problem-solving steps were used in the process of problem solving. The study concludes by calling for further research into problem solving strategies. This can be done to empower Geography teachers to use more complex problem solving strategies to deepen problem solving and to engage problems in more depth.
44

Echeverri, Daniel Ricardo. « Application of the Deconstructive Discourse as a Generative Thinking Framework ». Kent State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=kent1399283791.

Full text
45

Bodin, Camilla. « Automatic Flight Maneuver Identification Using Machine Learning Methods ». Thesis, Linköpings universitet, Reglerteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165844.

Full text
Abstract:
This thesis proposes a general approach to solve the offline flight-maneuver identification problem using machine learning methods. The purpose of the study was to provide means for the aircraft professionals at the flight test and verification department of Saab Aeronautics to automate the procedure of analyzing flight test data. The suggested approach succeeded in generating binary classifiers and multiclass classifiers that identified six flight maneuvers of different complexity from real flight test data. The binary classifiers solved the problem of identifying one maneuver from flight test data at a time, while the multiclass classifiers solved the problem of identifying several maneuvers from flight test data simultaneously. To achieve these results, the difficulties that this time series classification problem entailed were simplified by using different strategies. One strategy was to develop a maneuver extraction algorithm that used handcrafted rules. Another strategy was to represent the time series data by statistical measures. There was also an issue of an imbalanced dataset, where one class far outweighed others in number of samples. This was solved by using a modified oversampling method on the dataset that was used for training. Logistic Regression, Support Vector Machines with both linear and nonlinear kernels, and Artificial Neural Networks were explored, where the hyperparameters for each machine learning algorithm were chosen during model estimation by 4-fold cross-validation and solving an optimization problem based on important performance metrics. A feature selection algorithm was also used during model estimation to evaluate how the performance changes depending on how many features were used. The machine learning models were then evaluated on test data consisting of 24 flight tests. The results given by the test data set showed that the simplifications done were reasonable, but the maneuver extraction algorithm could sometimes fail.
Some maneuvers were easier to identify than others and the linear machine learning models resulted in a poor fit to the more complex classes. In conclusion, both binary classifiers and multiclass classifiers could be used to solve the flight maneuver identification problem, and solving a hyperparameter optimization problem boosted the performance of the finalized models. Nonlinear classifiers performed the best on average across all explored maneuvers.
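The abstract does not spell out the modified oversampling method; plain random oversampling of minority classes, the baseline that such methods typically modify, can be sketched as follows (class labels and data are illustrative):

```python
import random

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples at random until every class
    matches the size of the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picked = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picked)
        out_y.extend([y] * target)
    return out_x, out_y

# One rare maneuver class against five "level flight" samples.
X = [[0], [1], [2], [3], [4], [5]]
y = ["maneuver", "level", "level", "level", "level", "level"]
Xb, yb = random_oversample(X, y)
```

Oversampling is applied only to the training split, never to the test data, so that evaluation still reflects the true class distribution.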
46

Jia, Yue [Verfasser], Timon [Akademischer Betreuer] Rabczuk, Klaus [Gutachter] Gürlebeck et Alessandro [Gutachter] Reali. « Methods based on B-splines for model representation, numerical analysis and image registration / Yue Jia ; Gutachter : Klaus Gürlebeck, Alessandro Reali ; Betreuer : Timon Rabczuk ». Weimar : Institut für Strukturmechanik, 2015. http://nbn-resolving.de/urn:nbn:de:gbv:wim2-20151210-24849.

Full text
47

Jia, Yue [Verfasser], Timon [Akademischer Betreuer] Rabczuk, Klaus [Gutachter] Gürlebeck et Alessandro [Gutachter] Reali. « Methods based on B-splines for model representation, numerical analysis and image registration / Yue Jia ; Gutachter : Klaus Gürlebeck, Alessandro Reali ; Betreuer : Timon Rabczuk ». Weimar : Institut für Strukturmechanik, 2015. http://d-nb.info/1116366770/34.

Full text
48

Ding, Hui. « Level of detail for granular audio-graphic rendering : representation, implementation, and user-based evaluation ». Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00913624.

Full text
Abstract:
Real-time simulation of complex audio-visual scenes remains challenging due to the technically independent but perceptually related rendering processes in each modality. Because of the potential crossmodal dependency of auditory and visual perception, the optimization of graphics and sound rendering, such as Level of Detail (LOD), should be considered in a combined manner and not as separate issues. For instance, in both audition and vision, people have perceptual limits on observation quality. Techniques of perceptually driven LOD for graphics have advanced greatly over decades. However, the concept of LOD is rarely considered in crossmodal evaluation and rendering. This thesis concentrates on the crossmodal evaluation of the perception of audio-visual LOD rendering using psychophysical methods, on the basis of which one may apply a functional and general method to eventually optimize the rendering. The first part of the thesis is an overview of our research. In this part, we review various LOD approaches and discuss the issues concerned, especially from a crossmodal perceptual perspective. We also discuss the main results on the design, rendering and applications of highly detailed interactive audio and graphical scenes of the ANR Topophonie project, in which the thesis took place. A study of psychophysical methods for the evaluation of audio-visual perception is also presented to provide a solid basis for experimental design. In the second part, we focus on studying the perception of image artifacts in audio-visual LOD rendering. A series of experiments was designed to investigate how the additional audio modality can impact the visual detection of artifacts produced by impostor-based LOD. The third part of the thesis focuses on the novel extended X3D that we designed for audio-visual LOD modeling. In the fourth part, we present the design and evaluation of the refined crossmodal LOD system. The evaluation of audio-visual perception on the crossmodal LOD system was achieved through a series of psychophysical experiments. Our main contribution is a further understanding of crossmodal LOD, with some new observations, explored through perceptual experiments and analysis. The results of our work can eventually be used as empirical evidence and guidelines for a perceptually driven crossmodal LOD system.
49

Greenberg, Jane. « A Comparison of Web Resource Access Experiments:Planning for the New Millennium ». the Library of Congress, 2000. http://hdl.handle.net/10150/105784.

Full text
Abstract:
Over the last few years the bibliographic control community has initiated a series of experiments that aim to improve access to the growing number of valuable information resources increasingly being placed on the World Wide Web (hereafter referred to as Web resources). Much has been written about these experiments, mainly describing their implementation and features, and there has been some evaluative reporting, but there has been little comparison among these initiatives. The research reported in this paper addresses this limitation by comparing five leading experiments in this area. The objective was to identify characteristics of success and considerations for improvement in experiments providing access to Web resources via bibliographic control methods. The experiments examined include: OCLC's CORC project; UKOLN's BIBLINK, ROADS, and DESIRE projects; and the NORDIC project. The research used a multi-case study methodology and a framework comprising five evaluation criteria: the experiment's organizational structure, reception, duration, application of computing technology, and use of human resources. This paper defines the Web resource access experimentation environment, reviews the study's research methodology, and highlights key findings. The paper concludes by initiating a strategic plan and by inviting conference participants to contribute their ideas and expertise to an effort that will improve experimental initiatives which ultimately aim to improve access to Web resources in the new Millennium.
50

GARBARINO, DAVIDE. « Acknowledging the structured nature of real-world data with graphs embeddings and probabilistic inference methods ». Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1092453.

Full text
Abstract:
In the artificial intelligence community there is a growing consensus that real-world data is naturally represented as graphs, because graphs can easily incorporate complexity at several levels, e.g. hierarchies or time dependencies. In this context, this thesis studies two main branches for structured data. In the first part we explore how state-of-the-art machine learning methods can be extended to graph-modeled data, provided that one is able to represent graphs in vector spaces. Such extensions can be applied to analyze several kinds of real-world data and tackle different problems. Here we study the following problems: a) understanding the relational nature and evolution of websites which belong to different categories (e-commerce, academic (p.a.) and encyclopedic (forum)); b) modeling tennis players' scores based on different game surfaces and tournaments in order to predict match results; c) analyzing preterm infants' motion patterns to characterize possible neurodegenerative disorders; and d) building an academic collaboration recommender system able to model academic groups and individual research interests while suggesting possible researchers to connect with, topics of interest and representative publications to external users. In the second part we focus on graph inference methods from data, which present two main challenges: missing data and non-stationary time dependency. In particular, we study the problem of inferring Gaussian Graphical Models in the following settings: a) inference of Gaussian Graphical Models when data are missing or latent, in the context of multiclass or temporal network inference; and b) inference of time-varying Gaussian Graphical Models when data is multivariate and non-stationary. Such methods have a natural application in the composition of an optimized stock market portfolio. Overall, this work sheds light on how to acknowledge the intrinsic structure of data with the aim of building statistical models that are able to capture the actual complexity of the real world.