Dissertations / Theses on the topic 'Ceramics, multidimensional data analysis'

Consult the top 50 dissertations / theses for your research on the topic 'Ceramics, multidimensional data analysis.'


1

Passuti, Sara. "Electron crystallography of nanodomains in functional materials." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC230.

Abstract:
The investigation of functional materials has increasingly focused on samples characterized by nanodomains (ranging from submicron sizes down to tens of nanometers) because of their interesting physical properties, such as those observed in thin films and ceramic materials. When unknown phases need to be determined or detailed information on the crystal structure of these materials is required, both X-ray diffraction and transmission electron microscopy (TEM) face difficulties. To address this, a novel electron diffraction (ED) technique, Scanning Precession Electron Tomography (SPET), has been employed. SPET combines the established precession-assisted 3D ED data acquisition method (also known as Precession Electron Diffraction Tomography, PEDT) with a scan of the electron beam over a region of interest (ROI) of the specimen at each tilt step. This procedure allows 3D ED data to be collected from multiple ROIs in a single acquisition, facilitating structure solution and accurate structure refinement of multiple nanodomains, or of distinct areas within a single domain, at once. In this thesis, the potential of SPET is explored on both oxide thin films and ceramic thermoelectric materials prepared as TEM lamellae. Additionally, a novel methodology was developed to efficiently analyze the large amount of data collected. This method involves sorting the diffraction patterns according to their region of origin, reconstructing the diffraction tilt series of each ROI, and automatically processing the resulting tilt series for structure solution and accurate refinement. This work demonstrates the potential of SPET for the fine crystallographic characterization of complex nanostructured materials. The approach is complementary to (S)TEM imaging and spectroscopy and, in diffraction, to the so-called 4D-STEM and ACOM approaches.
2

Westerlund, Per. "Business Intelligence: Multidimensional Data Analysis." Thesis, Umeå universitet, Institutionen för datavetenskap, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-138758.

Abstract:
The relational database model is probably the most frequently used database model today. It has its strengths, but it does not perform very well with complex queries and analysis of very large sets of data. As computers have grown more powerful, making it possible to store very large data volumes, the need for efficient analysis and processing of such data sets has emerged. The concept of Online Analytical Processing (OLAP) was developed to meet this need. The main OLAP component is the data cube, a multidimensional database model that, through various techniques, has achieved an impressive speed-up in analysing and processing large data sets. A concept that is advancing in the modern computing industry is Business Intelligence (BI), which is fully dependent upon OLAP cubes. The term refers to a set of tools used for multidimensional data analysis, with the main purpose of facilitating decision making. This thesis looks into the concept of BI, focusing on the OLAP technology and data cubes. Two different approaches to cubes are examined and compared: Multidimensional Online Analytical Processing (MOLAP) and Relational Online Analytical Processing (ROLAP). As a practical part of the thesis, a BI project was implemented for the consulting company Sogeti Sverige AB. The aim of the project was to implement a prototype for easy access to, and visualisation of, their internal economic data. There was no easy way for the consultants to view their reported data, such as how many hours they had been working every week, so the prototype was intended to propose a possible method. Finally, a performance study was conducted, including a small-scale experiment comparing the performance of ROLAP, MOLAP and querying directly against the data warehouse. The results of the experiment indicate that ROLAP is generally the better choice for data cubing.
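
The cube materialization such theses build on can be sketched outside any BI stack: a cube is simply a precomputed aggregate for every subset of the chosen dimensions (the cuboid lattice). Below is a minimal illustration in Python with pandas; the fact table and column names are hypothetical, not Sogeti's actual schema.

```python
from itertools import combinations

import pandas as pd

# Toy fact table of reported consulting hours (hypothetical schema).
facts = pd.DataFrame({
    "consultant": ["A", "A", "B", "B"],
    "week":       [1, 2, 1, 2],
    "office":     ["Umea", "Umea", "Lund", "Lund"],
    "hours":      [38, 40, 35, 42],
})

dimensions = ["consultant", "week", "office"]

# Materialize the cube: one aggregate per subset of the dimensions.
# The empty subset is the grand total (the fully rolled-up cuboid).
cube = {}
for k in range(len(dimensions) + 1):
    for dims in combinations(dimensions, k):
        if dims:
            cube[dims] = facts.groupby(list(dims))["hours"].sum()
        else:
            cube[dims] = facts["hours"].sum()

print(cube[("consultant",)])         # hours per consultant
print(cube[("consultant", "week")])  # hours per consultant and week
```

A MOLAP engine stores and indexes these precomputed aggregates directly, while a ROLAP engine answers the same queries with SQL against the relational warehouse.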
3

Duch, Brown Amàlia. "Design and Analysis of Multidimensional Data Structures." Doctoral thesis, Universitat Politècnica de Catalunya, 2004. http://hdl.handle.net/10803/6647.

Abstract:
This thesis is about the design and analysis of point multidimensional data structures: data structures that store $K$-dimensional keys which we may abstract as points in $[0,1]^K$. These data structures are present in many applications of geographical information systems, image processing or robotics, among others. They are also frequently used as indexes of more complex data structures, possibly stored in external memory.

Point multidimensional data structures must support operations such as insertion, deletion and (exact) search of items, but in addition they must support so-called associative queries. Examples of such queries are orthogonal range queries (which items fall inside a given hyper-rectangle?) and nearest neighbour queries (which item is closest to a given point?).

The contributions of this thesis are two-fold:

Contributions to the design of point multidimensional data structures: the design of randomized $K$-d trees, the design of randomized quad trees and the design of fingered multidimensional search trees;
Contributions to the analysis of the performance of point multidimensional data structures: the average-case analysis of partial match queries in relaxed $K$-d trees and the average-case analysis of orthogonal range queries in various multidimensional data structures.


Concerning the design of randomized point multidimensional data structures, we propose randomized insertion and deletion algorithms for $K$-d trees and quad trees that produce random $K$-d trees and quad trees independently of the order in which items are inserted into them and after any sequence of interleaved insertions and deletions. The use of randomization provides expected performance guarantees, irrespective of any assumption on the data distribution, while retaining the simplicity and flexibility of standard $K$-d trees and quad trees.

Also related to the design of point multidimensional data structures is the proposal of fingered multidimensional search trees, a new technique that enhances point multidimensional data structures to exploit locality of reference in associative queries.

With regard to performance analysis, we start by giving a precise analysis of the cost of partial match queries in randomized $K$-d trees. We use these results as a building block in our analysis of orthogonal range queries, together with combinatorial and geometric arguments, and we provide a tight asymptotic estimate of the cost of orthogonal range search in randomized $K$-d trees. We finally show that the techniques used apply easily to other data structures, so we can provide an analysis of the average cost of orthogonal range search in other data structures such as standard $K$-d trees, quad trees, quad tries, and $K$-d tries.
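
For readers unfamiliar with the structures being analyzed, here is a minimal, non-randomized K-d tree with insertion and orthogonal range search, in Python. This is a sketch of the standard structure the thesis starts from, not of the randomized variants it proposes.

```python
class KdNode:
    def __init__(self, point):
        self.point = point            # a K-dimensional tuple in [0,1]^K
        self.left = self.right = None

def insert(node, point, depth=0):
    """Standard K-d tree insertion: split on coordinate (depth mod K)."""
    if node is None:
        return KdNode(point)
    axis = depth % len(point)
    if point[axis] < node.point[axis]:
        node.left = insert(node.left, point, depth + 1)
    else:
        node.right = insert(node.right, point, depth + 1)
    return node

def range_search(node, lo, hi, depth=0, out=None):
    """Report all points inside the hyper-rectangle [lo, hi]."""
    if out is None:
        out = []
    if node is None:
        return out
    axis = depth % len(lo)
    if all(l <= c <= h for l, c, h in zip(lo, node.point, hi)):
        out.append(node.point)
    # Visit a subtree only if the query rectangle can overlap it.
    if lo[axis] < node.point[axis]:
        range_search(node.left, lo, hi, depth + 1, out)
    if hi[axis] >= node.point[axis]:
        range_search(node.right, lo, hi, depth + 1, out)
    return out

root = None
for p in [(0.2, 0.7), (0.5, 0.4), (0.9, 0.1), (0.4, 0.8)]:
    root = insert(root, p)
print(range_search(root, (0.1, 0.3), (0.6, 0.9)))
# [(0.2, 0.7), (0.5, 0.4), (0.4, 0.8)]
```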
4

Schroeder, Michael Philipp 1986. "Analysis and visualization of multidimensional cancer genomics data." Doctoral thesis, Universitat Pompeu Fabra, 2014. http://hdl.handle.net/10803/301436.

Abstract:
Cancer is a complex disease caused by somatic alterations of the genome and epigenome in tumor cells. Increased investment and cheaper access to various technologies have built momentum for the generation of cancer genomics data. The availability of such large datasets offers many new possibilities to gain insight into the molecular properties of cancer. Within this scope I present two methods that exploit the broad availability of cancer genomics data: OncodriveROLE, an approach to classify mutational cancer driver genes into activating and loss-of-function modes of action, and MutEx, a statistical measure to assess the tendency of somatic alterations in a set of genes to be mutually exclusive across tumor samples. Nevertheless, the unprecedented dimension of the available data raises new complications for its accessibility and exploration, which we address with new visualization solutions: i) Gitools interactive heatmaps with prepared large-scale cancer genomics datasets ready to be explored; ii) jHeatmap, an interactive heatmap browser for the web, capable of displaying multidimensional cancer genomics data and designed for inclusion in web portals; and iii) SVGMap, a web server to project data onto customized SVG figures, useful for mapping experimental measurements onto the model.
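
The mutual-exclusivity idea behind a measure such as MutEx can be illustrated generically; the sketch below is not the actual MutEx statistic (whose definition is not given here), just a standard permutation test: compare the observed number of tumor samples in which exactly one gene of the set is altered against the counts obtained after shuffling each gene's alterations across samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage(m):
    """Samples (columns) altered in exactly one gene of the set (rows)."""
    return int(np.sum(m.sum(axis=0) == 1))

def mutual_exclusivity_pvalue(m, n_perm=10_000):
    """Permutation p-value for exclusivity: shuffle each gene's
    alterations across samples, preserving per-gene mutation rates."""
    observed = coverage(m)
    hits = 0
    for _ in range(n_perm):
        perm = np.array([rng.permutation(row) for row in m])
        if coverage(perm) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Toy binary alteration matrix: genes x tumor samples (hypothetical data).
m = np.array([
    [1, 0, 0, 1, 0, 0, 0, 1],
    [0, 1, 0, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0, 1, 0, 0],
])
print(mutual_exclusivity_pvalue(m))
```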
5

Nam, Beomseok. "Distributed multidimensional indexing for scientific data analysis applications." College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/6795.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
6

Palmas, Gregorio. "Visual Analysis of Multidimensional Data for Biomechanics and HCI." Doctoral thesis, KTH, Beräkningsvetenskap och beräkningsteknik (CST), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-193713.

Abstract:
Multidimensional analysis is performed in many scientific fields. Its main tasks involve the identification of correlations between data dimensions, the investigation of data clusters, and the identification of outliers. Visualization techniques often help in getting a better understanding. In this thesis, we present our work on improving visual multidimensional analysis by exploiting the semantics of the data and enhancing the perception of existing visualizations. Firstly, we exploit the semantics of the data by creating new visualizations which present visual encodings specifically tailored to the analyzed dimensions. We consider the resulting visual analysis to be more intuitive for the user as it provides a more easily understandable idea of the data. In this thesis we concentrate on the visual analysis of multidimensional biomechanical data for Human-Computer Interaction (HCI). To this end, we present new visualizations tackling the specific features of different aspects of biomechanical data such as movement ergonomics, leading to a more intuitive analysis. Moreover, by integrating drawings or sketches of the physical setup of a case study as new visualizations, we allow for a fast and effective case-specific analysis. The creation of additional visualizations for communicating trends of clusters of movements enables a cluster-specific analysis which improves our understanding of postures and muscular co-activation. Moreover, we create a new visualization which addresses the specificity of the multidimensional data related to permutation-based optimization problems. Each permutation of a given set of n elements represents a point defined in an n-dimensional space. Our method approximates the topology of the problem-related optimization landscape, inferring the minima basins and their properties and visualizing them organized in a quasi-landscape. We show the variability of the solutions in a basin using heat maps generated from permutation matrices. Furthermore, we continue improving our visual multidimensional analysis by enhancing the perceptual encoding of existing well-known multidimensional visualizations. We focus on Parallel Coordinates Plots (PCP) and its derivative, Continuous Parallel Coordinates (CPC). The main perceptual issues of PCP are visual clutter and overplotting, which hamper the recognition of patterns in large data sets. In this thesis, we present an edge-bundling method for PCP which uses density-based clustering for each dimension. This reduces clutter and provides a faster overview of clusters and trends. Moreover, it allows for a fast rendering of the clustered lines using polygons. Furthermore, we present the first bundling approach for Continuous Parallel Coordinates, where classic edge-bundling fails due to the absence of lines. Our method performs a deformation of the visualization space of CPC, leading to similar results as those obtained through classic edge-bundling. Our work involved 10 HCI case studies and helped to establish a new research methodology in this field. This led to publications in internationally peer-reviewed journals and conference proceedings.


7

Odondi, Maurice Jacob. "Multidimensional analysis of successive categories (rating) data by dual scaling." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ28031.pdf.

8

Weherage, Pradeep Peiris. "BigDataCube: Distributed Multidimensional Data Cube Over Apache Spark : An OLAP framework that brings Multidimensional Data Analysis to modern Distributed Storage Systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215696.

Abstract:
Multidimensional data analysis is an important subdivision of the data analytics paradigm. The data cube provides the base abstraction for multidimensional data analysis and helps in discovering useful insights from a dataset. On-Line Analytical Processing (OLAP) takes this to the next level, supporting online responses to analytical queries with an underlying technique that precomputes (materializes) the data cubes. Data cube materialization is essential for OLAP, but it is an expensive task in terms of data processing and storage. Most early decision support systems exploited the value of multidimensional data analysis with a standard data architecture that extracts, transforms and loads data from multiple data sources into a centralized database called a data warehouse, on which OLAP engines provide the data cube abstraction. But this architecture and traditional OLAP engines do not hold up against modern, intensive datasets. Today, we have distributed storage systems that keep data on a cluster of computer nodes, on which distributed data processing engines like MapReduce, Spark, Storm, etc. provide more ad-hoc style data analytical capabilities. Yet there is no proper distributed-system approach available for multidimensional data analysis, nor is any distributed OLAP engine available that performs distributed data cube materialization. It is essential to have a proper distributed data cube materialization mechanism to support multidimensional data analysis over present distributed storage systems. Various research works have considered MapReduce for data cube materialization, and Apache Spark recently enabled a CUBE operator as part of its DataFrame API. The thesis poses the question: which is the better distributed-system approach for data cube materialization, MapReduce or Spark? It contributes experiments that compare the two distributed systems in materializing data cubes over varying numbers of records, dimensions and cluster sizes. The results confirm that Spark is more scalable and efficient in data cube materialization than MapReduce. The thesis further contributes a novel framework, BigDataCube, which uses Spark DataFrames underneath for materializing data cubes and fulfills the need for multidimensional data analysis over modern distributed storage systems.
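
The Spark CUBE operator benchmarked here is exposed through the DataFrame API. A minimal PySpark example follows; the column names and values are illustrative only, not the thesis's benchmark data.

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("cube-demo").getOrCreate()

sales = spark.createDataFrame(
    [("2017", "SE", 10), ("2017", "NO", 7), ("2016", "SE", 5)],
    ["year", "country", "amount"],
)

# cube() materializes aggregates for every combination of the listed
# dimensions, including the rolled-up (NULL) levels.
sales.cube("year", "country").agg(F.sum("amount").alias("total")).show()
```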
9

Jernberg, Robert, and Tobias Hultgren. "Flexible Data Extraction for Analysis using Multidimensional Databases and OLAP Cubes." Thesis, KTH, Data- och elektroteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123393.

Abstract:
Bright is a company that provides customer and employee satisfaction surveys and uses this information to provide feedback to its customers. Data from the surveys are stored in a relational database, and information is generated both by directly querying the database and by analysing extracted data. As the amount of data grows, generating this information takes increasingly more time. Extracting the data requires significant manual work and is in practice avoided. As this is not an uncommon issue, there is a substantial theoretical framework around the area. The aim of this degree project is to explore different methods for achieving flexible and efficient data analysis on large amounts of data. This was implemented using a multidimensional database designed for analysis as well as an OnLine Analytical Processing (OLAP) cube built using Microsoft's SQL Server Analysis Services (SSAS). The cube was designed with the possibility to extract data at an individual level through PivotTables in Excel. The implemented prototype was analyzed, showing that it consistently delivers correct results several times more efficiently than the current solution, as well as making new types of analysis possible and convenient. It is concluded that the use of an OLAP cube was a good choice for the issue at hand, and that the use of SSAS provided the necessary features for a functional prototype. Finally, recommendations on possible further developments are discussed.
10

Johnson, Kevin J. "Strategies for chemometric analysis of gas chromatographic data /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/8513.

11

Kavasidis, Isaak. "Multifaceted analysis for medical data understanding: from data acquisition to multidimensional signal processing to knowledge discovery." Doctoral thesis, Università di Catania, 2016. http://hdl.handle.net/10761/3925.

Abstract:
Large quantities of medical data are routinely generated each day in the form of text, images and time signals, making evident the need to develop new methodologies, not only for the automation of the processing and management of such data, but also for a deeper understanding of the concepts hidden therein. The main problem that arises is that the acquired data are not always in an appropriate state or of appropriate quality for quantitative analysis, and further processing is often necessary to enable automatic processing and management as well as to increase the accuracy of the results. Also, given the multimodal nature of medical data, uniform approaches no longer apply, and specific algorithm pipelines should be conceived and developed for each case. In this dissertation we tackle some of the problems that occur in the medical domain regarding different data modalities, and an attempt is made to understand the meaning of these data. These problems range from cortical brain signal acquisition and processing to X-ray image analysis to text and genomics data mining and subsequent knowledge discovery.
12

Jansson, Mattias, and Jimmy Johansson. "Interactive Visualization of Statistical Data using Multidimensional Scaling Techniques." Thesis, Linköping University, Department of Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1716.

Abstract:

This study was carried out in cooperation with Unilever and partly within the EC-funded project Smartdoc IST-2000-28137.

In areas of statistics and image processing, both the amount of data and the dimensions are increasing rapidly and an interactive visualization tool that lets the user perform real-time analysis can save valuable time. Real-time cropping and drill-down considerably facilitate the analysis process and yield more accurate decisions.

In the Smartdoc project, there was a request for a component for smart filtering in multidimensional data sets. As the Smartdoc project aims to develop smart, interactive components to be used on low-end systems, the implementation of the self-organizing map algorithm is used to propose which dimensions to visualize.

Together with Dr. Robert Treloar at Unilever, the SOM Visualizer - an application for interactive visualization and analysis of multidimensional data - has been developed. The analytical part of the application is based on Kohonen’s self-organizing map algorithm. In cooperation with the Smartdoc project, a component has been developed that is used for smart filtering in multidimensional data sets. Microsoft Visual Basic and components from the graphics library AVS OpenViz are used as development tools.
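
Kohonen's self-organizing map, the analytical core of the application, reduces to a simple competitive update rule. Below is a minimal numpy sketch, with an arbitrary grid size and learning schedule rather than the settings used in the SOM Visualizer.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
    """Train a self-organizing map: weights[i, j] is the prototype of
    grid cell (i, j); each sample pulls its best-matching unit (BMU)
    and, more weakly, the BMU's grid neighbours toward itself."""
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    ii, jj = np.mgrid[0:h, 0:w]
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1 - frac)              # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5  # shrinking neighbourhood
        for x in rng.permutation(data):
            d = np.linalg.norm(weights - x, axis=2)
            bi, bj = np.unravel_index(np.argmin(d), d.shape)
            grid_d2 = (ii - bi) ** 2 + (jj - bj) ** 2
            influence = np.exp(-grid_d2 / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

data = rng.random((200, 5))   # 200 samples, 5 dimensions
som = train_som(data)
print(som.shape)              # (8, 8, 5): a 2D map of 5-D prototypes
```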

13

Jardim, João Pedro Fernandes. "Airports efficiency evaluation based on MCDA and DEA multidimensional tools." Master's thesis, Universidade da Beira Interior, 2012. http://hdl.handle.net/10400.6/2011.

Abstract:
Airport benchmarking depends on airport operational performance and efficiency indicators, which are important issues for business, operational management, regulatory agencies, airlines and passengers. There are several sets of single and complex indicators to evaluate airport efficiency, as well as several techniques to benchmark such infrastructures. The general aim of this work is the development of airport performance and efficiency predictive models using robust but flexible methodologies, incorporating simultaneously traditional indicators (number of movements and passengers, tons of cargo, number of runways and stands, area of passenger and cargo terminals) as well as new constraints, whether emerging situations or sudden natural phenomena (ramp accidents and incidents, and volcanic ash and weather constraints, respectively). Firstly, this work shows the efficiency evaluation of either a set of airports or the same airport over several years and under several constraints, based on two multidimensional tools: Multicriteria Decision Analysis (MCDA, particularly through Measuring Attractiveness by a Categorical Based Evaluation Technique, MACBETH) and Data Envelopment Analysis (DEA). Secondly, this work compares the results obtained using both MACBETH and DEA, evidencing the pros and cons of each multidimensional tool and searching for the best conditions to apply one or the other within airport management decision processes.
14

Rossi, Rafael Germano. "Análise de componentes principais em data warehouses." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-07012018-182730/.

Abstract:
The Principal Component Analysis (PCA) technique has as its main goal the description of the variance and covariance between a set of variables. This technique is used to mitigate redundancies in a set of variables and as a means of achieving dimensionality reduction in various applications in scientific, technological and administrative areas. The multidimensional data model, in turn, is composed of fact and dimension relations (tables) that describe an event using metrics and the relationships between its dimensions. However, the volume of data stored and the complexity of the dimensions usually involved in this model, especially in a data warehouse environment, make correlation analysis between dimensions very difficult and sometimes impracticable. In this work, we propose the development of an Application Programming Interface (API) for applying PCA to the multidimensional data model in order to facilitate characterization and dimensionality reduction, integrating the technique with data warehouse environments. To verify the effectiveness of this API, a case study was carried out using scientific production data and citations obtained from the Lattes Platform, Web of Science, Google Scholar and Scopus, provided by the IT Superintendence at the University of São Paulo.
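
The PCA computation at the heart of such an API can be stated in a few lines: center the measure matrix extracted from the fact table, take its SVD, and keep the leading components. A generic numpy sketch, not the thesis's actual API:

```python
import numpy as np

def pca(X, n_components):
    """Principal components via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]  # projected data
    loadings = Vt[:n_components]                     # component axes
    explained = (s ** 2) / (s ** 2).sum()            # variance ratios
    return scores, loadings, explained[:n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))   # e.g. 6 citation metrics per author
scores, loadings, ratio = pca(X, 2)
print(scores.shape, loadings.shape, ratio)
```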
15

Kucuktunc, Onur. "Result Diversification on Spatial, Multidimensional, Opinion, and Bibliographic Data." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1374148621.

16

Ye, Jianguo. "Integrating data models, analysis and multidimensional visualizations : a unified construction project management arena." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/18030.

Abstract:
Recent trends in construction IT have introduced the ability to produce large, comprehensive, and integrated data sets describing each project. With emerging technologies, these data models can be exchanged between the computer tools that have traditionally been used to support the various construction applications. To date, however, it has not been possible for users to interact with the full range of integrated data in a way that allows them to configure custom views as needed to support ongoing management tasks. This barrier to working with integrated project data models can be decomposed into three categories of problems in IT application in the industry: 1) a data integration problem, 2) a data view configuration problem, and 3) an information presentation integration problem. These call for a new class of software environment, named here an Information Aggregator. This dissertation explores computer technologies' ability to work with integrated model-based project information and to solve these three categories of problems. A Unified Construction Project Management Arena (UCPMA) is designed as an Information Aggregator, which allows access to the project information contained in the whole data set and facilitates flexible user configuration of different views for different project management tasks by exploiting data modeling and Industry Foundation Classes data standards, On-line Analytical Processing technologies, and information visualization. The UCPMA consists of three interrelated components corresponding to the three types of problems: a Central Data Model, a Data Winnow, and a Visualization Configuration Model. These components leverage existing technologies and work together to deliver the UCPMA framework, which promises benefits of data sharing for information integration, view sharing to incorporate disciplinary work, and dynamic data analysis in a flexible visual configuration environment. A UCPMA prototype system was developed to demonstrate the framework's ability to fulfill these promises and its potential for improving current construction project management practice, through testing scenarios involving construction project change, risk management and quality control.
17

Biswas, Ayan. "Uncertainty and Error Analysis in the Visualization of Multidimensional and Ensemble Data Sets." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1480605991395144.

18

Cochoy, Jérémy. "Decomposability and stability of multidimensional persistence." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS566/document.

Abstract:
In a context where huge amounts of data are available, extracting meaningful and non-trivial information is getting harder. In order to improve the tasks of classification, regression, or exploratory analysis, the approach provided by topological data analysis is to look for the presence of shapes in data sets. In this thesis, we investigate the properties of multidimensional persistence modules in order to obtain a better understanding of the summands and decompositions of such modules. We introduce a functor that embeds the category of representations of any quiver whose graph is a rooted tree into the category of ℝ²-indexed persistence modules. We also enrich the structure of the persistence module arising from the cohomology of a filtration to a structure of persistence algebra. Finally, we generalize the approach of Crawley-Boevey to multipersistence and identify a class of persistence modules indexed on ℝ² which have simple descriptors and an analogue of the decomposition theorem available in one-dimensional persistence.
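
For context, the one-dimensional decomposition theorem that the last sentence refers to is the interval (barcode) decomposition, proved by Crawley-Boevey for pointwise finite-dimensional modules:

```latex
% One-dimensional (R-indexed) persistence: every pointwise
% finite-dimensional persistence module M decomposes, essentially
% uniquely, into interval summands (its barcode):
M \;\cong\; \bigoplus_{\alpha \in A} \Bbbk_{I_\alpha},
\qquad I_\alpha \subseteq \mathbb{R} \ \text{an interval},
```

where the interval module over I equals the field on I, with identity structure maps inside I and zero elsewhere. The thesis asks which classes of ℝ²-indexed modules admit a comparably simple description.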
19

Sivertun, Åke. "Geographical Information Systems (GIS) as a tool for analysis and communications of multidimensional data." Doctoral thesis, Umeå universitet, Institutionen för geografi och ekonomisk historia, 1993. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-100703.

Abstract:
An integrating approach, including knowledge about whole systems of processes, is essential in order to reach both development and environmental protection goals. In this thesis Geographical Information Systems (GIS) are suggested as a tool to realise such integrated models. The main hypothesis in this work is that several natural, technical and social systems that share a time-space can be compared and analysed in a GIS. My first objective was to analyse how GIS can support research and planning and, more specifically, bring a broad scattering of competence together in an interdisciplinary process. In this process GIS was investigated as a tool to achieve models that give us a better overview of a problem and a better understanding of the processes involved, aid in foreseeing conflicts between interests, find ecological limits, assist in choosing countermeasures and monitor the results of different programs. The second objective concerns the requirement that models should be comparable and possible to include in other models, and that they can be communicated to planners, politicians and the public. For this reason the possibilities to communicate the results and model components of multidimensional and multi-temporal data are investigated. Four examples of the possibilities and problems when using GIS in interdisciplinary studies are presented. In the examples, water plays a central role as a component in questions about development, management and environmental impact. The first articles focus on non-point source pollutants, a problem under growing attention as the big industrial and municipal point sources are brought under control. To manage non-point source pollutants, detailed knowledge about local conditions is required to facilitate precise advice on land use. To estimate the flow of metals and nitrogen (N) in an area it is important to identify the soil moisture. Soil moisture changes over time but also varies significantly across the landscape according to several factors. Here a method is presented that calculates soil moisture over large areas. Man as a hydrological factor has to be assessed in order to also understand the relative importance of anthropogenic processes. To offer a supplement to direct measurements and add anthropogenic factors, a GIS model is presented that takes soil type, topography, vegetation, land use, agricultural drainage and relative position in the watershed into account. A method to analyse and visualise development over time and space in the same model is presented in the last empirical study. The development of agricultural drainage can be discussed as a product of several forces, here analysed together and visualised with the help of colour-coded "hyper pixels" and maps. Finally, a discussion is held concerning the physiological and psychological possibilities of communicating multidimensional phenomena with the help of pictures and maps. The main conclusions of this thesis are that GIS offers the possibility to develop distributed models, i.e., models that calculate effects from a wide range of factors over larger areas and with a much higher spatial resolution than has been possible earlier. GIS also offers a possibility to integrate and communicate information from different disciplines to scientists, decision makers and the public.
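
One standard terrain-based ingredient of such soil-moisture models is the topographic wetness index, TWI = ln(a / tan β). The sketch below is a generic illustration of that index, not necessarily the exact model developed in the thesis.

```python
import numpy as np

def topographic_wetness_index(flow_acc, slope_rad, cell_size=30.0):
    """TWI = ln(a / tan(beta)): specific catchment area over local slope,
    a common terrain proxy for relative soil moisture."""
    a = flow_acc * cell_size                         # catchment area proxy
    tan_beta = np.maximum(np.tan(slope_rad), 1e-6)   # avoid division by zero
    return np.log(np.maximum(a, 1e-6) / tan_beta)

# Toy rasters: flow accumulation (cells) and slope (radians).
rng = np.random.default_rng(2)
acc = rng.integers(1, 500, size=(4, 4)).astype(float)
slope = rng.uniform(0.01, 0.5, size=(4, 4))
print(topographic_wetness_index(acc, slope))  # higher = wetter terrain
```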

Diss. (summary), Umeå: Umeå universitet, 1993; comprising 6 papers.


20

D’Errico, Marco <1974&gt. "Assessing poverty with survey data. Uni-dimensional, multidimensional and resilience poverty analysis in Kenya." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/4194/1/marco_derrico_tesi.pdf.

Abstract:
Traditionally, poverty has been measured by a single indicator, income, on the assumption that this was the most relevant dimension of poverty. Sen's approach has dramatically changed this idea, shedding light on the existence of many more dimensions and on the multifaceted nature of poverty; poverty cannot be represented by a unique indicator that can only evaluate one specific aspect of it. This thesis tracks an ideal path along the evolution of poverty analysis. Starting from the unidimensional analysis based on income and consumption, the research enters the world of multidimensional analysis. After reviewing the principal approaches, the Foster and Alkire method is critically analyzed and implemented on data from Kenya. A step further is taken in the third part of the thesis, introducing a new approach to multidimensional poverty assessment: resilience analysis.
21

Betancourt, Catalina. "Persistence heatmaps for knotted data sets." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6369.

Abstract:
Topological Data Analysis is a quickly expanding field, but one particular subfield, multidimensional persistence, has hit a dead end: although TDA is a very active field, it has been proven that the one-dimensional persistence used in persistent homology cannot be generalized to higher dimensions. With this in mind, progress can still be made in the accuracy of approximating it. The central challenge lies in the multiple persistence parameters. Using more than one parameter at a time creates a multi-filtration of the data, which cannot be totally ordered the way a single filtration can. The goal of this thesis is to contribute to the development of persistence heat maps by replacing the persistent Betti number (PBN) function defined by Xia and Wei in 2015 with a new persistence summary function, the accumulated persistence function (APF) defined by Biscio and Moller in 2016. The PBN function fails to capture persistence in most cases, and thus its heat maps lack important information. The APF, on the other hand, does capture persistence, as can be seen in its heat maps. A heat map is a way to describe three dimensions visually with two spatial dimensions and color. In two-dimensional persistence heat maps, the two chosen parameters lie on the x- and y-axes. These persistence parameters define a complex on the data, and its topology is represented by the color. We use the method of heat maps introduced by Xia and Wei. We acquired an R script from Matthew Pietrosanu to generate our own heat maps with the second parameter being a curvature threshold. We also use the accumulated persistence function introduced by Biscio and Moller, who provided an R script to compute the APF on a data set. We then wrote new code, building on the existing codes, to create a modified heat map. In all the examples in this thesis, we show both the old PBN and the new APF heat maps to illustrate their differences and similarities. We study the two-dimensional heat maps with respect to curvature applied to two types of parameterized knots, Lissajous knots and torus knots. We also show how both heat maps can be used to compare and contrast data sets. This research is important because the persistence heat map acts as a guide for finding topologically significant features as the data change with respect to two parameters. Improving the accuracy of the heat map ultimately improves the efficiency of data analysis. Two-dimensional persistence has practical applications in the analysis of data coming from proteins and DNA: the unfolding of proteins offers a second parameter of configuration over time, while tangled DNA may have a second parameter of curvature. The concluding argument of this thesis is that using the accumulated persistence function in conjunction with the persistent Betti number function provides a more accurate representation of two-dimensional persistence than the PBN heat map alone.
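
The accumulated persistence function has a compact definition: for a persistence diagram with birth-death pairs (b, d), APF(m) sums the lifetimes d - b of all features whose meanage (b + d)/2 is at most m. A small Python sketch of that definition (the thesis itself works with R scripts):

```python
import numpy as np

def apf(diagram, m_grid):
    """Accumulated persistence function:
    APF(m) = sum of lifetimes (d - b) over features whose
    meanage (b + d) / 2 is at most m."""
    diagram = np.asarray(diagram, dtype=float)
    births, deaths = diagram[:, 0], diagram[:, 1]
    lifetimes = deaths - births
    meanages = (births + deaths) / 2.0
    return np.array([lifetimes[meanages <= m].sum() for m in m_grid])

# Toy persistence diagram: (birth, death) pairs.
diagram = [(0.1, 0.9), (0.2, 0.4), (0.5, 0.6)]
print(apf(diagram, m_grid=[0.3, 0.5, 1.0]))  # [0.2, 1.0, 1.1]
```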
22

Hall, Kristin Wynn. "Multiple Calibrations in Integrative Data Analysis: A Simulation Study and Application to Multidimensional Family Therapy." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4686.

Abstract:
A recent advancement in statistical methodology, Integrative Data Analysis (IDA; Curran & Hussong, 2009), has led researchers to employ a calibration technique so as not to violate an independence assumption. This technique uses a randomly selected subset with simplified correlational structure, or calibration, of a whole data set in a preliminary stage of analysis. However, a single-calibration estimator suffers from instability, low precision and loss of power. To overcome this limitation, a multiple calibration (MC; Greenbaum et al., 2013; Wang et al., 2013) approach has been developed to produce better estimators while still removing a level of dependency in the data so as not to violate the independence assumption. The MC method is conceptually similar to multiple imputation (MI; Rubin, 1987; Schafer, 1997), so MI estimators were borrowed for comparison. A simulation study was conducted to compare the MC and MI estimators, as well as to evaluate the operating characteristics of the methods in a cross-classified data characteristic design. The estimators were tested in the context of assessing change over time in a longitudinal data set. Multiple calibrations consisting of a single measurement occasion per subject were drawn from a repeated-measures data set, analyzed separately, and then combined by the rules set forth by each method to produce the final results. The data characteristics investigated were effect size, sample size, and the number of repeated measures per subject. Additionally, a real-data application of an MC approach in an IDA framework was conducted on data from three completed randomized controlled trials studying the treatment effects of Multidimensional Family Therapy (MDFT; Liddle et al., 2002) on substance use trajectories for adolescents at a one-year follow-up. The simulation study provided empirical evidence of how the MC method performs, as well as how it compares to the MI method, in a total of 27 hypothetical scenarios. Strong asymptotic tendencies were observed for the bias, standard error, mean square error and relative efficiency of an MC estimator to approach the whole-set estimators as the number of calibrations approached 100. The MI combination rules proved inappropriate to borrow for the MC case because the standard error formulas were too conservative and performance with respect to power was not robust. As a general suggestion, 5 calibrations are sufficient to produce an estimator with about half the bias of a single-calibration estimator and at least some indication of significance, while 20 calibrations are ideal; after 20 calibrations, the contribution of an additional calibration to the combined estimator greatly diminishes. The MDFT application demonstrated a successful implementation of a 5-calibration approach in an IDA on real data, as well as the risk of missing treatment effects when analysis is limited to a single calibration's results. Additionally, results from the application provided evidence that MDFT interventions reduced trajectories of substance use involvement at a 1-year follow-up to a greater extent than any of the active control treatment groups, overall and across all gender and ethnicity subgroups. This paper will aid researchers interested in employing an MC approach in an IDA framework, or whenever a level of dependency in a data set needs to be removed for an independence assumption to hold.
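
The multiple-calibration idea can be sketched generically: repeatedly draw one randomly chosen measurement occasion per subject (removing the within-subject dependency), estimate the model on each calibration, and combine the estimates; here they are simply averaged. The toy model and variable names below are illustrative, not the simulation design of the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def one_calibration_slope(subject_ids, times, y):
    """Draw one observation per subject, then fit y ~ time by OLS."""
    idx = [rng.choice(np.flatnonzero(subject_ids == s))
           for s in np.unique(subject_ids)]
    return np.polyfit(times[idx], y[idx], 1)[0]

def mc_estimate(subject_ids, times, y, n_calibrations=20):
    """Combine the calibration estimates by averaging."""
    slopes = [one_calibration_slope(subject_ids, times, y)
              for _ in range(n_calibrations)]
    return np.mean(slopes), np.std(slopes, ddof=1)

# Toy longitudinal data: 50 subjects x 4 occasions, linear trend + noise.
n, occasions = 50, 4
subject_ids = np.repeat(np.arange(n), occasions)
times = np.tile(np.arange(occasions, dtype=float), n)
y = 2.0 + 0.5 * times + rng.normal(0, 1, size=n * occasions)
print(mc_estimate(subject_ids, times, y))  # slope estimate near 0.5
```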
APA, Harvard, Vancouver, ISO, and other styles
24

Xu, Bing. "Multidimensional approaches to performance evaluation of competing forecasting models." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/4081.

Full text
Abstract:
The purpose of my research is to contribute to the field of forecasting from a methodological perspective, as well as to the field of crude oil as an application area in which to test the performance of my methodological contributions and assess their merits. Two main methodological contributions are presented. The first consists of proposing a mathematical-programming-based approach, commonly referred to as Data Envelopment Analysis (DEA), as a multidimensional framework for the relative performance evaluation of competing forecasting models or methods. As opposed to other performance measurement and evaluation frameworks, DEA allows one to identify the weaknesses of each model, as compared to the best one(s), and suggests ways to improve their overall performance. DEA is a generic framework, and as such its implementation for a specific relative performance evaluation exercise requires a number of decisions to be made, such as the choice of the units to be assessed, the choice of the relevant inputs and outputs, and the choice of the appropriate models. In order to present and discuss how one might adapt this framework to measure and evaluate the relative performance of competing forecasting models, we first survey and classify the literature on performance criteria and their measures (including statistical tests) commonly used in evaluating and selecting forecasting models or methods. This classification serves as a basis for the operationalisation of DEA. Finally, we test DEA's performance in evaluating and selecting models to forecast crude oil prices. The second contribution consists of proposing a Multi-Criteria Decision Analysis (MCDA) based approach as a multidimensional framework for the relative performance evaluation of competing forecasting models or methods. In order to present and discuss how one might adapt such a framework, we first revisit MCDA methodology, propose a revised methodological framework consisting of a sequential decision-making process with feedback adjustment mechanisms, and provide guidelines on how to operationalise it. Finally, we adapt this methodological framework to address the problem of performance evaluation of competing forecasting models. For illustration purposes, we have chosen the forecasting of crude oil prices as an application area.
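For readers unfamiliar with DEA, the following Python sketch solves the standard input-oriented CCR envelopment model by linear programming, with forecasting models as the units assessed; the inputs (error measures) and output (directional accuracy) are invented for illustration and are not the thesis's data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input_oriented(X, Y):
    """Efficiency score for each DMU (here: each forecasting model).
    X: (n, m) inputs, e.g. error measures (smaller is better).
    Y: (n, s) outputs, e.g. directional accuracy (larger is better).
    Returns theta in (0, 1]; theta == 1 means on the efficient frontier."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.zeros(n + 1); c[0] = 1.0              # variables [theta, lambda_1..n]
        A_ub, b_ub = [], []
        for i in range(m):                           # sum_j lam_j x_ij <= theta * x_io
            A_ub.append(np.concatenate(([-X[o, i]], X[:, i]))); b_ub.append(0.0)
        for r in range(s):                           # sum_j lam_j y_rj >= y_ro
            A_ub.append(np.concatenate(([0.0], -Y[:, r]))); b_ub.append(-Y[o, r])
        bounds = [(None, None)] + [(0, None)] * n
        res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Hypothetical evaluation of 4 forecasting models:
# inputs = (RMSE, MAE), output = directional accuracy.
X = np.array([[1.2, 0.9], [1.0, 0.8], [1.5, 1.3], [0.9, 1.0]])
Y = np.array([[0.61], [0.66], [0.52], [0.64]])
print(dea_ccr_input_oriented(X, Y).round(3))
```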
APA, Harvard, Vancouver, ISO, and other styles
25

Depalma, Carlos Mariano A. "The role of the thermal contact conductance in the interpretation of laser flash data in fiber-reinforced composites." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-10062009-020306/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Ntushelo, Nombasa Sheroline. "Exploratory and inferential multivariate statistical techniques for multidimensional count and binary data with applications in R." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/17949.

Full text
Abstract:
Thesis (MComm)--Stellenbosch University, 2011.
The analysis of multidimensional (multivariate) data sets is a very important area of research in applied statistics. Over the decades many techniques have been developed to deal with such data sets. The multivariate techniques that have been developed include inferential analysis, regression analysis, discriminant analysis, cluster analysis and many more exploratory methods. Most of these methods deal with cases where the data contain numerical variables. However, there are powerful methods in the literature that also deal with multidimensional binary and count data. The primary purpose of this thesis is to discuss the exploratory and inferential techniques that can be used for binary and count data. In Chapter 2 of this thesis we give the details of correspondence analysis and canonical correspondence analysis. These methods are used to analyze the data in contingency tables. Chapter 3 is devoted to cluster analysis. In this chapter we explain four well-known clustering methods, and we also discuss the distance (dissimilarity) measures available in the literature for binary and count data. Chapter 4 contains an explanation of metric and non-metric multidimensional scaling. These methods can be used to represent binary or count data in a lower-dimensional Euclidean space. In Chapter 5 we give a method for inferential analysis called the analysis of distance. This method uses reasoning similar to the analysis of variance, but the inference is based on a pseudo F-statistic, with the p-value obtained using permutations of the data. Chapter 6 contains real-world applications of the above methods on two special data sets, called the Biolog data and the Barents Fish data. The secondary purpose of the thesis is to demonstrate how the above techniques can be performed in the software package R. Several R packages and functions are discussed throughout this thesis. The usage of these functions is also demonstrated with appropriate examples. Attention is also given to the interpretation of the output and graphics. The thesis ends with some general conclusions and ideas for further research.
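As a rough companion to the material of Chapters 3 and 4 (in Python rather than the R used in the thesis), this sketch computes Jaccard dissimilarities for simulated binary data, clusters them with average linkage, and embeds them with classical metric MDS.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = (rng.random((30, 12)) > 0.5).astype(int)    # 30 sites x 12 binary variables

# Jaccard dissimilarity is a natural choice for binary presence/absence data
d = pdist(X, metric="jaccard")

# Agglomerative clustering on the dissimilarities (average linkage)
Z = linkage(d, method="average")
labels = fcluster(Z, t=3, criterion="maxclust")

# Classical (metric) multidimensional scaling via double centering
D = squareform(d) ** 2
J = np.eye(len(D)) - np.ones_like(D) / len(D)
B = -0.5 * J @ D @ J
vals, vecs = np.linalg.eigh(B)                  # eigenvalues in ascending order
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))   # 2-D configuration
print(labels[:10], coords[:3])
```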
APA, Harvard, Vancouver, ISO, and other styles
27

Nebot, Romero María Victoria. "Scalable methods to analyze Semantic Web data." Doctoral thesis, Universitat Jaume I, 2013. http://hdl.handle.net/10803/396347.

Full text
Abstract:
Semantic Web data is currently heavily used as a data representation format in scientific communities, social networks, business companies, news portals and other domains. The rapid emergence and availability of Semantic Web data demand new methods and tools to efficiently analyze such data and take advantage of the underlying semantics. Although some applications exist that make use of Semantic Web data, advanced analytical tools are still lacking, preventing users from exploiting the attached semantics.
APA, Harvard, Vancouver, ISO, and other styles
28

Ding, Guoxiang. "DERIVING ACTIVITY PATTERNS FROM INDIVIDUAL TRAVEL DIARY DATA: A SPATIOTEMPORAL DATA MINING APPROACH." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1236777859.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Dias, Filipa de Carvalho. "Cluster analysis of financial time series data : evidence for portuguese and spanish stock markets." Master's thesis, Instituto Superior de Economia e Gestão, 2017. http://hdl.handle.net/10400.5/14923.

Full text
Abstract:
Master's in Mathematical Finance
This dissertation uses the Caiado & Crato (2010) autocorrelation-based distance metric for clustering financial time series. The metric attempts to assess the level of interdependence of time series from the return-predictability point of view. The cluster analysis is carried out by examining the hierarchical structure tree (dendrogram) and the computed principal coordinates (multidimensional scaling map). These techniques are employed to investigate the similarities and dissimilarities between the stocks of the two Iberian stock market indexes: PSI-20 and IBEX-35.
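A minimal Python sketch of the approach follows, using a plain unweighted Euclidean distance between sample autocorrelation vectors as a simplified stand-in for the Caiado & Crato (2010) metric, and simulated returns in place of the PSI-20/IBEX-35 series.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def acf(x, max_lag):
    """Sample autocorrelations at lags 1..max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

def acf_distance_matrix(series, max_lag=50):
    """Euclidean distance between autocorrelation vectors, in the spirit
    of the autocorrelation-based metric used in the dissertation."""
    A = np.array([acf(s, max_lag) for s in series])
    n = len(series)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = np.linalg.norm(A[i] - A[j])
    return D

# Hypothetical daily returns for a few stocks
rng = np.random.default_rng(1)
returns = [rng.standard_normal(500) for _ in range(6)]
D = acf_distance_matrix(returns)
Z = linkage(squareform(D), method="complete")   # hierarchy behind the dendrogram
```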
APA, Harvard, Vancouver, ISO, and other styles
30

Johansson, Peter. "Plant Condition Measurement from Spectral Reflectance Data." Thesis, Linköping University, Computer Vision, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-59286.

Full text
Abstract:

The thesis presents an investigation of the potential of measuring plant condition from hyperspectral reflectance data. To do this, some linear methods for embedding the high-dimensional hyperspectral data and performing regression to a plant-condition space have been compared. A preprocessing step that aims at normalizing illumination intensity in the hyperspectral images has been conducted, and several methods for this purpose have also been compared. A large-scale experiment has been conducted in which tobacco plants were grown and treated differently with respect to watering and nutrition. The treatment of the plants served as ground truth for the plant condition. Four sets of plants were grown one week apart, and the plants were measured at different ages up to about five weeks. The thesis concludes that there is a relationship between plant treatment and the spectral reflectance of the leaves, but the treatment has to be somewhat extreme to enable a useful treatment approximation from the spectrum. CCA is the proposed method for calculating the hyperspectral basis used to embed the hyperspectral data into the plant condition (treatment) space. A preprocessing method that uses a weighted normalization of the spectra for illumination-intensity normalization is concluded to be the most powerful of the compared methods.
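The following Python sketch illustrates the CCA-based embedding with an invented data set; the weighted illumination normalization shown is only a crude stand-in for the method compared in the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Hypothetical data: 120 leaves x 50 spectral bands, and a 2-D treatment
# (condition) space, e.g. watering and nutrition levels.
spectra = rng.random((120, 50))
basis = rng.random((50, 2))
treatment = spectra @ basis + 0.1 * rng.standard_normal((120, 2))

# Illumination normalization: scale each spectrum so a weighted band sum
# is constant (a crude stand-in for the weighted normalization studied).
w = np.ones(50) / 50.0
spectra = spectra / (spectra @ w)[:, None]

# CCA finds a spectral basis maximally correlated with the treatment
# variables; predicting then amounts to regression into treatment space.
cca = CCA(n_components=2).fit(spectra, treatment)
pred = cca.predict(spectra)
print(np.corrcoef(pred[:, 0], treatment[:, 0])[0, 1])
```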

APA, Harvard, Vancouver, ISO, and other styles
31

Primerano, Ilaria. "A symbolic data analysis approach to explore the relation between governance and performance in the Italian industrial districs." Doctoral thesis, Universita degli studi di Salerno, 2016. http://hdl.handle.net/10556/2179.

Full text
Abstract:
2013 - 2014
Nowadays, complex phenomena need to be analyzed through appropriate statistical methods that allow considering the knowledge hidden behind the classical data structure... [edited by author]
XIII n.s.
APA, Harvard, Vancouver, ISO, and other styles
32

YONEDA, Hiroyuki, Ioki HARA, Takeshi FURUHASHI, Tomohiro YOSHIKAWA, Toshikazu FUKAMI, 洋之 米田, 以起 原, 武. 古橋, 大弘 吉川, and 俊和 深見. "可視空間上でのインタラクティブクラスタリングによるマイノリティ発見に関する検討 [A study of minority discovery through interactive clustering in a visualized space]." 日本感性工学会, 2009. http://hdl.handle.net/2237/20849.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

FURUHASHI, Takeshi, Tomohiro YOSHIKAWA, Yosuke WATANABE, 武. 古橋, 大弘 吉川, and 庸佑 渡邉. "アンケートにおける回答の矛盾度・関心度の定量化およびそれらを考慮した解析手法に関する検討 [Quantifying the degree of contradiction and interest in questionnaire responses, and analysis methods taking them into account]." 日本感性工学会, 2010. http://hdl.handle.net/2237/20851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Siepka, Damian. "Development of multidimensional spectral data processing procedures for analysis of composition and mixing state of aerosol particles by Raman and FTIR spectroscopy." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10188/document.

Full text
Abstract:
Sufficiently adjusted multivariate data processing methods and procedures can significantly improve the process of obtaining knowledge of a sample's composition. Spectroscopic techniques are capable of fast analysis of various samples and have been developed for research and industrial purposes. This creates a great opportunity for advanced molecular analysis of complex samples, such as atmospheric aerosols. Airborne particles affect air quality, human health and ecosystem condition, and play an important role in the Earth's climate system. The purpose of this thesis is twofold. On an analytical level, a functional algorithm for evaluating the quantitative composition of atmospheric particles from measurements of individual particles by Raman microspectroscopy (RMS) was established. On a constructive level, a readily accessible analytical system for Raman and FTIR data processing was developed. The potential of single-particle analysis by RMS has been exploited through the application of the designed analytical algorithm, based on a combination of multi-curve resolution and multivariate data treatment, for an efficient description of the chemical mixing of aerosol particles. The algorithm was applied to particles collected in a copper mine in Bolivia and provides a new way of describing a sample. The new user-friendly software, which includes pre-treatment algorithms and several easy-to-access, common multivariate data treatments, is equipped with a graphical interface. The software was applied to some challenging aspects of pattern recognition in the scope of Raman and FTIR spectroscopy, for coal mine particles, biogenic particles and organic pigments.
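As an illustration of the kind of pipeline described (not the thesis's actual software), this Python sketch applies simple pre-treatment to simulated Raman spectra and then uses non-negative matrix factorization as a stand-in for multi-curve resolution to recover component spectra and per-particle contributions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical single-particle Raman spectra: mixtures of 3 pure components
wavenumbers = np.linspace(200, 1800, 800)
pure = np.stack([np.exp(-0.5 * ((wavenumbers - c) / 25.0) ** 2)
                 for c in (450, 1000, 1350)])
conc = rng.dirichlet(np.ones(3), size=60)            # mixing fractions per particle
spectra = conc @ pure + 0.01 * rng.random((60, 800))

# Pre-treatment: smoothing, offset baseline removal, normalization
spectra = savgol_filter(spectra, window_length=11, polyorder=3, axis=1)
spectra -= spectra.min(axis=1, keepdims=True)
spectra /= spectra.sum(axis=1, keepdims=True)

# Non-negative factorization as a stand-in for multi-curve resolution:
# recover component spectra and per-particle contributions
model = NMF(n_components=3, init="nndsvda", max_iter=500)
contributions = model.fit_transform(spectra)         # ~ chemical mixing state
components = model.components_                       # ~ resolved pure spectra
```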
APA, Harvard, Vancouver, ISO, and other styles
35

Siepka, Damian. "Development of multidimensional spectral data processing procedures for analysis of composition and mixing state of aerosol particles by Raman and FTIR spectroscopy." Electronic Thesis or Diss., Lille 1, 2017. http://www.theses.fr/2017LIL10188.

Full text
Abstract:
Sufficiently adjusted multivariate data processing methods and procedures can significantly improve the process of obtaining knowledge of a sample's composition. Spectroscopic techniques are capable of fast analysis of various samples and have been developed for research and industrial purposes. This creates a great opportunity for advanced molecular analysis of complex samples, such as atmospheric aerosols. Airborne particles affect air quality, human health and ecosystem condition, and play an important role in the Earth's climate system. The purpose of this thesis is twofold. On an analytical level, a functional algorithm for evaluating the quantitative composition of atmospheric particles from measurements of individual particles by Raman microspectroscopy (RMS) was established. On a constructive level, a readily accessible analytical system for Raman and FTIR data processing was developed. The potential of single-particle analysis by RMS has been exploited through the application of the designed analytical algorithm, based on a combination of multi-curve resolution and multivariate data treatment, for an efficient description of the chemical mixing of aerosol particles. The algorithm was applied to particles collected in a copper mine in Bolivia and provides a new way of describing a sample. The new user-friendly software, which includes pre-treatment algorithms and several easy-to-access, common multivariate data treatments, is equipped with a graphical interface. The software was applied to some challenging aspects of pattern recognition in the scope of Raman and FTIR spectroscopy, for coal mine particles, biogenic particles and organic pigments.
APA, Harvard, Vancouver, ISO, and other styles
36

Nunes, Santiago Augusto. "Análise espaço-temporal de data streams multidimensionais." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17102016-152137/.

Full text
Abstract:
Data streams are usually characterized by large amounts of data generated continuously in potentially infinite synchronous or asynchronous processes, in applications such as meteorological systems, industrial processes, vehicle traffic, financial transactions and sensor networks, among others. In addition, the behavior of the data tends to change significantly over time, defining evolving data streams. These changes may represent temporary events (such as anomalies or extreme events) or relevant changes in the process generating the stream (which result in changes in the distribution of the data). Furthermore, these data sets can have spatial characteristics, such as the geographic location of sensors, which can be useful in the analysis process. The detection of these behavioral changes, considering aspects of temporal evolution as well as the spatial characteristics of the data, is relevant for some types of applications, such as the monitoring of extreme weather events in Agrometeorology research. In this context, this project proposes a technique to support spatio-temporal analysis of multidimensional data streams containing spatial and non-spatial information. The adopted approach is based on concepts from Fractal Theory, used for temporal behavior analysis, as well as techniques for handling data streams and hierarchical data structures, allowing analyses that take the spatial and non-spatial aspects into account simultaneously. The developed technique has been applied to agro-meteorological data to identify distinct behaviors across sub-regions defined by the spatial characteristics of the data. Therefore, results from this work include contributions to the data mining area and support for research in Agrometeorology.
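One way to make the fractal idea concrete: the correlation fractal dimension D2 can be estimated by box counting and tracked over a sliding window, so that abrupt jumps in D2 flag changes in the stream's behavior. The following Python sketch uses simulated 3-D sensor readings; the radii and window sizes are arbitrary illustrative choices.

```python
import numpy as np

def correlation_dimension(points, radii):
    """Box-counting estimate of the correlation fractal dimension D2:
    slope of log(sum of squared cell counts) versus log(cell size)."""
    logs = []
    for r in radii:
        cells = np.floor(points / r).astype(np.int64)
        _, counts = np.unique(cells, axis=0, return_counts=True)
        logs.append((np.log(r), np.log(np.sum(counts.astype(float) ** 2))))
    x, y = np.array(logs).T
    return np.polyfit(x, y, 1)[0]

def monitor_stream(stream, window=500, step=250):
    """Slide a window over a multidimensional stream and report D2;
    abrupt jumps in D2 flag behavior changes (e.g. extreme events)."""
    radii = np.array([0.02, 0.04, 0.08, 0.16])
    for start in range(0, len(stream) - window + 1, step):
        yield start, correlation_dimension(stream[start:start + window], radii)

rng = np.random.default_rng(0)
normal = rng.random((2000, 3))             # 3-D sensor readings in [0, 1]
drifted = 0.2 * rng.random((1000, 3))      # distribution change in the stream
for start, d2 in monitor_stream(np.vstack([normal, drifted])):
    print(start, round(d2, 2))
```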
APA, Harvard, Vancouver, ISO, and other styles
37

Pagliosa, Lucas de Carvalho. "Visualização e exploração de dados multidimensionais na web." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-08042016-103144/.

Full text
Abstract:
With the growing volume and variety of data, the need to analyze and understand what they represent and how they are related has become crucial. Visualization techniques based on multidimensional projections have gained space and interest as one of the possible tools to aid this problem, providing a simple and quick way to identify patterns, recognize trends and extract features that were not obvious in the original set. However, the projection of the data set into a lower-dimensional space may not be sufficient, in some cases, to answer or clarify certain questions asked by the user, making the analysis that follows the projection crucial for the correct interpretation of the visualization. Thus, interactivity in the visualization, applied to the user's needs, is an essential factor for analysis. In this context, this master's project's main objective is to create attribute-based visual metaphors, through statistical measures and artifacts for detecting noise and similar groups, to assist the exploration and analysis of projected data. In addition, it is proposed to make available, in Web browsers, the multidimensional data visualization techniques developed by the Group of Visual and Geometric Processing at ICMC-USP. The development of the project as a Web platform was inspired by the difficulty of installation and execution that certain visualization projects present, mainly due to different versions of IDEs, compilers and operating systems. Furthermore, making the project available online for execution aims to facilitate access to, and dissemination of, the proposed techniques for the general public.
APA, Harvard, Vancouver, ISO, and other styles
38

Krajča, Marek. "Město pro byznys: Vícerozměrná statistická analýza a možné návrhy na zdokonalení projektu." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-193815.

Full text
Abstract:
The main objective of my diploma thesis is multidimensional data analysis. The analyzed data come from the comparative research Město pro byznys 2013 (English: The City for Business 2013). Another goal is to propose changes that could improve the project. The methods used for multidimensional data analysis are exploratory analysis, principal component analysis, factor analysis and cluster analysis. For proposing changes, I also use multi-criteria decision analysis.
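A minimal Python sketch of such an analysis pipeline (standardization, principal components, clustering) on invented city-level indicators; the thesis's actual variables and software are not reproduced here.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Hypothetical city-level indicators (e.g. fees, unemployment, services)
cities = rng.random((205, 20))

Xs = StandardScaler().fit_transform(cities)       # exploratory step: standardize
pca = PCA(n_components=4).fit(Xs)
scores = pca.transform(Xs)
print(pca.explained_variance_ratio_.round(2))     # variance kept per component

# Cluster cities in the reduced space to reveal groups of similar profiles
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
```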
APA, Harvard, Vancouver, ISO, and other styles
39

Lin, Peng. "IRT vs. factor analysis approaches in analyzing multigroup multidimensional binary data the effect of structural orthogonality, and the equivalence in test structure, item difficulty, & examinee groups /." College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/8468.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2008.
Thesis research directed by: Dept. of Measurement, Statistics and Evaluation. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
40

Khan, Arif ul Maula [Verfasser], and R. [Akademischer Betreuer] Mikut. "Development of Robust and Efficient Algorithms for Image Processing and Analysis on Multidimensional Image Data using Feedback Concepts with Challenging Applications / Arif ul Maula Khan ; Betreuer: R. Mikut." Karlsruhe : KIT-Bibliothek, 2017. http://d-nb.info/1136660763/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Blanchard, Pierre. "Fast hierarchical algorithms for the low-rank approximation of matrices, with applications to materials physics, geostatistics and data analysis." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0016/document.

Full text
Abstract:
Advanced techniques for the low-rank approximation of matrices are crucial dimension reduction tools in many domains of modern scientific computing. Hierarchical approaches like H2-matrices, in particular the Fast Multipole Method (FMM), benefit from the block low-rank structure of certain matrices to reduce the cost of computing n-body problems to O(n) operations instead of O(n²). In order to better deal with kernels of various kinds, kernel-independent FMM formulations have recently arisen, such as polynomial-interpolation-based FMM. However, they are hardly tractable for high-dimensional tensorial kernels; therefore we designed a new, highly efficient interpolation-based FMM, called the Uniform FMM, and implemented it in the parallel library ScalFMM. The method relies on an equispaced interpolation grid and the Fast Fourier Transform (FFT). Performance and accuracy were compared with the Chebyshev-interpolation-based FMM. Numerical experiments on artificial benchmarks showed that the loss of accuracy induced by the interpolation scheme was largely compensated by the FFT optimization. First, we extended both interpolation-based FMMs to the computation of the isotropic elastic fields involved in Dislocation Dynamics (DD) simulations. Second, we used our new FMM algorithm to accelerate a rank-r randomized SVD and thus efficiently generate multivariate Gaussian random variables on large heterogeneous grids in O(n) operations. Finally, we designed a new efficient dimensionality reduction algorithm based on dense random projection in order to investigate new ways of characterizing biodiversity, namely from a geometric point of view.
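The randomized SVD building block is easy to sketch in Python. Here the fast FMM matrix-vector product is replaced by a dense product with a covariance kernel, which an FMM implementation would never form explicitly; the symmetric-kernel assumption simplifies the range-finding step.

```python
import numpy as np

def randomized_svd(matvec, n, r, oversample=10, rng=None):
    """Rank-r randomized SVD for a symmetric n x n operator given only a
    fast matvec (in the thesis, the Uniform FMM plays this role)."""
    rng = rng or np.random.default_rng()
    omega = rng.standard_normal((n, r + oversample))
    Q, _ = np.linalg.qr(matvec(omega))         # orthonormal approximate range
    B = matvec(Q).T                             # equals Q^T A, since A = A^T
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :r], s[:r], Vt[:r]

# Dense stand-in for a covariance kernel evaluated on scattered points
rng = np.random.default_rng(0)
x = rng.random((500, 2))
A = np.exp(-np.linalg.norm(x[:, None] - x[None, :], axis=-1) / 0.3)
U, s, Vt = randomized_svd(lambda V: A @ V, n=500, r=20, rng=rng)
print(s[:5].round(3))
```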
APA, Harvard, Vancouver, ISO, and other styles
42

Zhang, Wei. "Directed Evolution of Glutathione Transferases with Altered Substrate Selectivity Profiles : A Laboratory Evolution Study Shedding Light on the Multidimensional Nature of Epistasis." Doctoral thesis, Uppsala universitet, Biokemi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-158400.

Full text
Abstract:
Directed evolution is generally regarded as a useful approach in protein engineering. By subjecting members of a mutant library to the power of Darwinian evolution, desired protein properties are obtained. Numerous reports in the literature show the success of tailoring proteins for various applications by this method. But is it a one-way track, in which protein practitioners can only learn from nature to enable more efficient protein engineering? A structure-and-mechanism-based approach, supplemented with the use of reduced amino acid alphabets, was proposed as a general means for semi-rational enzyme engineering. Using human GST A2-2*E, the most active human enzyme in the bioactivation of azathioprine, as a parental enzyme to test this approach, an L107G/L108D/F222H triple-point mutant of GST A2-2*E (hereafter designated GDH) was discovered with 70-fold increased activity, approaching the upper limit of the specific activity of the GST scaffold. The approach was further verified experimentally to be more successful at improving enzyme performance than intuitively choosing active-site residues in proximity to the bound substrate. By constructing all intermediates along all putative mutational paths leading from GST A2-2*E to mutant GDH and assaying them with nine alternative substrates, the fitness landscapes were found to be "rugged" in differential fashions in substrate-activity space. The multidimensional fitness landscapes stemming from functional promiscuity can lead to alternative outcomes, with enzymes optimized for features other than the selectable markers that were relevant at the origin of the evolutionary process. The results in this thesis suggest that in this manner an evolutionary response to changing environmental conditions can readily be mounted. In summary, the thesis demonstrates the attractive features of the structure-and-mechanism-based semi-rational directed evolution approach for optimizing enzyme performance. Moreover, the results show that laboratory evolution may refine our understanding of the evolutionary process in nature.
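The combinatorics of the mutational-path analysis are small enough to enumerate directly; a short Python sketch (using the three substitutions named in the abstract) illustrates why eight variants and six paths must be constructed and assayed.

```python
from itertools import permutations

# The three substitutions leading from GST A2-2*E to the GDH mutant
mutations = ("L107G", "L108D", "F222H")

# Intermediates are subsets of the mutations; putative mutational paths
# are the orderings in which the substitutions accumulate one at a time.
paths = []
for order in permutations(mutations):
    path = [frozenset()]
    for m in order:
        path.append(path[-1] | {m})
    paths.append(path)

print(len(paths))                             # 3! = 6 mutational paths
print(len({s for p in paths for s in p}))     # 2^3 = 8 variants to assay
# Assaying each of the 8 variants against 9 substrates maps the fitness
# landscape in substrate-activity space, as described in the abstract.
```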
APA, Harvard, Vancouver, ISO, and other styles
43

Pan, Jie. "Modélisation et exécution des applications d'analyse de données multi-dimentionnelles sur architectures distribuées." Phd thesis, Ecole Centrale Paris, 2010. http://tel.archives-ouvertes.fr/tel-00579125.

Full text
Abstract:
Colossal quantities of data are generated daily, and processing large volumes of data has become a real challenge for multidimensional data analysis software. Moreover, the response time demanded by the users of this software is becoming ever shorter, even interactive. To meet this demand, an approach based on parallel computing is one solution. Traditional approaches rely on high-performance but costly architectures, such as supercomputers. Other low-cost architectures are also available, but the methods developed on them are often far less efficient. In this thesis, we use a parallel programming model from Cloud Computing, called MapReduce, to parallelize the processing of multidimensional data analysis queries in order to benefit from its good scalability and fault-tolerance mechanisms. In this work, we revisit existing techniques for optimizing the processing of multidimensional data analysis queries, including the pre-computation, indexing and data partitioning steps, and we summarize the parallelism of query processing. We then study the MapReduce model in detail. We begin by presenting the principle of MapReduce and that of the extended model, MapCombineReduce. In particular, we analyze the communication cost of the MapReduce procedure. After presenting the data storage that works with MapReduce, we describe the characteristics of data management applications suited to Cloud Computing and the use of MapReduce for data analysis applications in existing work. We then focus on the parallelization of Multiple Group-by queries, a typical query used in multidimensional data exploration. We present an initial MapReduce-based implementation and an optimization based on MapCombineReduce. According to the experimental results, our optimized version shows better speed-up and better scalability than the initial version. We also give a formal estimation of the execution time for both implementations. To further optimize the processing of Multiple Group-by queries, a data restructuring phase is proposed to optimize individual jobs. We redefine the organization of data storage and apply the following techniques during the data restructuring phase: data partitioning, inverted indexing and data compression. We redefine the computations performed in MapReduce and in task scheduling using this new data structure. Based on execution time measurements, we can give a formal estimation and thus determine the factors that impact performance, such as query selectivity, the number of mappers launched on a node, the distribution of "hitting" data, the size of intermediate results, the serialization algorithms adopted, the network state, whether or not the combiner is used, and the methods adopted for data partitioning. We provide a model for estimating execution times and, in particular, for estimating the values of the different parameters for executions using horizontal partitioning. In order to support single-value-wise scheduling, which is more flexible, we design a new compressed data structure that works with vertical partitioning. This approach allows aggregation over a given value in a continuous process.
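A toy Python simulation of a Multiple Group-by query under the MapReduce model may clarify the idea: the map phase emits one partial aggregate per GROUP BY column, and the combiner and reducer share the same SUM aggregation. The records and column names are invented.

```python
from collections import defaultdict

def map_phase(record, group_bys, measure):
    """Map: one record emits a partial aggregate for every GROUP BY column."""
    for dim in group_bys:
        yield (dim, record[dim]), record[measure]

def combine_or_reduce(pairs):
    """Combiner and reducer share the same aggregation (here: SUM)."""
    acc = defaultdict(float)
    for key, value in pairs:
        acc[key] += value
    return acc

# Hypothetical fact records; query: SELECT SUM(sales) GROUP BY each of
# 'region' and 'product' independently (a Multiple Group-by query).
records = [
    {"region": "east", "product": "A", "sales": 3.0},
    {"region": "east", "product": "B", "sales": 1.0},
    {"region": "west", "product": "A", "sales": 2.0},
]
mapped = (p for r in records for p in map_phase(r, ("region", "product"), "sales"))
print(dict(combine_or_reduce(mapped)))
# {('region', 'east'): 4.0, ('region', 'west'): 2.0,
#  ('product', 'A'): 5.0, ('product', 'B'): 1.0}
```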
APA, Harvard, Vancouver, ISO, and other styles
44

Richert, Laura. "Trial design and analysis of endpoints in HIV vaccine trials." Thesis, Bordeaux 2, 2013. http://www.theses.fr/2013BOR22048/document.

Full text
Abstract:
Complex data are frequently recorded in recent clinical trials and require appropriate statistical methods. HIV vaccine research is an example of a domain with complex data and a lack of validated endpoints for early-stage clinical trials. This thesis concerns methodological research on the design and analysis aspects of HIV vaccine trials, in particular the definition of immunogenicity endpoints and phase I-II trial designs. Using cytokine multiplex data, we illustrate the methodological aspects specific to a given assay technique. We then propose endpoint definitions and statistical methods appropriate for the analysis of multidimensional immunogenicity data. We show in particular the value of non-parametric multivariate scores, which allow information to be summarized across different immunogenicity markers and statistical comparisons to be made between and within groups. With the aim of contributing to the design of new vaccine trials, we present the construction of an optimized early-stage HIV vaccine design. Combining phase I and II assessments, the proposed design allows the clinical development of several vaccine strategies to be accelerated in parallel. The integration of a stopping rule is proposed from both a frequentist and a Bayesian perspective. The methods advocated in this thesis are transposable to other research domains with complex data, such as imaging data or trials of other immune therapies.
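One classical non-parametric multivariate score of the kind discussed is O'Brien's rank-sum score; the sketch below, with simulated markers and a Wilcoxon test on the summed ranks as one possible comparison, illustrates the idea rather than the thesis's exact procedure.

```python
import numpy as np
from scipy.stats import mannwhitneyu, rankdata

def obrien_rank_score(markers):
    """Non-parametric multivariate score: rank each immunogenicity marker
    across all subjects, then sum the ranks per subject."""
    ranks = np.apply_along_axis(rankdata, 0, markers)
    return ranks.sum(axis=1)

rng = np.random.default_rng(0)
# Hypothetical trial: 2 vaccine arms, 5 immunogenicity markers per subject
arm_a = rng.normal(0.0, 1.0, size=(30, 5))
arm_b = rng.normal(0.5, 1.0, size=(30, 5))    # slightly more immunogenic

scores = obrien_rank_score(np.vstack([arm_a, arm_b]))
stat, p = mannwhitneyu(scores[:30], scores[30:], alternative="two-sided")
print(round(p, 4))
```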
APA, Harvard, Vancouver, ISO, and other styles
45

FERREIRA, MATHEUS C. "Obtenção de fritas vitroceramicas a partir de resíduos sólidos industriais." reponame:Repositório Institucional do IPEN, 2006. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11469.

Full text
Abstract:
Master's dissertation
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
APA, Harvard, Vancouver, ISO, and other styles
46

Rusch, Thomas, Patrick Mair, and Kurt Hornik. "COPS Cluster Optimized Proximity Scaling." WU Vienna University of Economics and Business, 2015. http://epub.wu.ac.at/4465/1/COPS.pdf.

Full text
Abstract:
Proximity scaling (i.e., multidimensional scaling and related methods) is a versatile statistical method whose general idea is to reduce the multivariate complexity in a data set by employing suitable proximities between the data points and finding low-dimensional configurations where the fitted distances optimally approximate these proximities. The ultimate goal, however, is often not only to find the optimal configuration but to infer statements about the similarity of objects in the high-dimensional space based on the similarity in the configuration. Since these two goals are somewhat at odds, it can happen that the resulting optimal configuration makes inferring similarities rather difficult. In that case the solution lacks "clusteredness" in the configuration (which we call "c-clusteredness"). We present a version of proximity scaling, coined cluster optimized proximity scaling (COPS), which solves the conundrum by introducing a more clustered appearance into the configuration while adhering to the general idea of multidimensional scaling. In COPS, an arbitrary MDS loss function is parametrized by monotonic transformations and combined with an index that quantifies the c-clusteredness of the solution. This index, the OPTICS cordillera, has intuitively appealing properties with respect to measuring c-clusteredness. This combination of MDS loss and index is called "cluster optimized loss" (coploss) and is minimized to push any configuration towards a more clustered appearance. The effect of the method is illustrated with various examples: assessing similarities of countries based on the history of banking crises in the last 200 years, scaling Californian counties with respect to the projected effects of climate change and their social vulnerability, and preprocessing a data set of handwritten digits for subsequent classification by nonlinear dimension reduction. (authors' abstract)
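A toy Python sketch of the coploss idea follows, with raw stress penalized by a clusteredness term; the silhouette index is used here only as a crude stand-in for the OPTICS cordillera, and the optimizer and weight are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def coploss(flat_conf, prox, n, weight=0.5):
    """Toy 'cluster optimized loss': normalized raw MDS stress minus a
    reward for c-clusteredness of the configuration."""
    conf = flat_conf.reshape(n, 2)
    stress = np.sum((pdist(conf) - prox) ** 2) / np.sum(prox ** 2)
    labels = KMeans(n_clusters=3, n_init=5, random_state=0).fit_predict(conf)
    return stress - weight * silhouette_score(conf, labels)

rng = np.random.default_rng(0)
X = rng.random((20, 6))                     # high-dimensional objects
prox = pdist(X)                             # proximities to be approximated
x0 = rng.standard_normal(20 * 2)            # random starting configuration
res = minimize(coploss, x0, args=(prox, 20), method="Nelder-Mead",
               options={"maxiter": 1000})
configuration = res.x.reshape(20, 2)        # more clustered 2-D solution
```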
Series: Discussion Paper Series / Center for Empirical Research Methods
APA, Harvard, Vancouver, ISO, and other styles
47

FRIMAIO, AUDREW. "Desenvolvimento de um material cerâmico para utilização em proteção radiológica diagnóstica." reponame:Repositório Institucional do IPEN, 2006. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11414.

Full text
Abstract:
Master's dissertation
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
APA, Harvard, Vancouver, ISO, and other styles
48

Ferreira, Matheus Chianca. "Obtenção de fritas vitrocerâmicas a partir de resíduos sólidos industriais." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/85/85134/tde-14052012-111305/.

Full text
Abstract:
This work studies a residue originating from the production of metallic aluminum, a process of great interest in Brazil, since the country holds some of the world's largest reserves of bauxite, the mineral used as a source of aluminum. With zero-residue generation as a strategy, in support of environmentally friendly technologies, this work studies the incorporation of the white dross residue (WDR) resulting from the recovery, by thermal plasma, of the aluminum present in the dross generated during the primary production of metallic aluminum. The phase equilibrium diagram of the Al2O3-CaO-SiO2 system was used to adjust the compositions, aiming at incorporating the residue into the ceramic product without altering the processing characteristics of the material. Glasses and glass-ceramic frits containing the WDR were obtained by melting the calculated compositions and, for the glass-ceramics, by a subsequent devitrification heat treatment. The products were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM) and Fourier-transform infrared spectroscopy (FTIR). These techniques showed that it is possible to obtain a glass-ceramic material with up to 30 mass% of aluminum residue after melting at 1300 °C and devitrification at 900 °C. In addition, the residue proved to be a promising auxiliary material for the formation of crystalline phases with short heat-treatment times.
APA, Harvard, Vancouver, ISO, and other styles
49

Voillet, Valentin. "Approche intégrative du développement musculaire afin de décrire le processus de maturation en lien avec la survie néonatale." Thesis, Toulouse, INPT, 2016. http://www.theses.fr/2016INPT0067/document.

Full text
Abstract:
Over the last decades, omics data integration studies have been developed to contribute to the detailed description of complex traits of socio-economic interest. In this context, the aim of this thesis is to combine different heterogeneous omics data to better describe and understand the last third of gestation in pigs, a period influencing piglet mortality at birth. In this thesis, we characterized the molecular and cellular basis underlying the end of gestation, with a focus on the skeletal muscle. This tissue is especially involved in the efficiency of several physiological functions, such as thermoregulation and motor functions. According to the experimental design, tissues were collected at two gestational ages (90 or 110 days of gestation) from four fetal genotypes. These genotypes consisted of two breeds that are extreme with respect to mortality at birth (Meishan and Large White) and two reciprocal crosses. Through statistical and computational analyses (descriptive analyses, network inference, clustering and biological data integration), we highlighted biological mechanisms regulating the maturation process in pigs, but also in other livestock species (cattle and sheep). Some genes and proteins were identified as being highly involved in muscle energy metabolism. Piglets with an immature muscle metabolism would be associated with a higher risk of mortality at birth. A second aspect of the thesis was the imputation of missing individual row values (a whole group of variables missing for an individual) in the framework of multidimensional statistical methods such as multiple factor analysis (MFA). In our context, MFA was particularly interesting for integrating data coming from the same individuals on different tissues (two or more). To retain individuals missing a whole group of variables, we developed a method, called MI-MFA (multiple imputation - MFA), allowing the estimation of the MFA components for these missing individuals.
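The core of MFA is compact enough to sketch in Python: each table is standardized and weighted by its first singular value before a global PCA. The omics tables below are simulated, and the MI-MFA imputation step itself is not reproduced.

```python
import numpy as np

def mfa(groups):
    """Core of Multiple Factor Analysis: center/scale each table, weight
    it by its first singular value, then run a global PCA on the
    concatenated, weighted tables."""
    weighted = []
    for X in groups:
        Xc = (X - X.mean(axis=0)) / X.std(axis=0)
        s1 = np.linalg.svd(Xc, compute_uv=False)[0]
        weighted.append(Xc / s1)           # balances the groups' influence
    Z = np.hstack(weighted)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U * s                            # global scores (MFA components)

rng = np.random.default_rng(0)
transcriptome = rng.random((40, 300))       # 40 fetuses x genes (hypothetical)
proteome = rng.random((40, 80))             # same fetuses x proteins
scores = mfa([transcriptome, proteome])
print(scores[:3, :2])
# MI-MFA, developed in the thesis, would additionally impute the scores of
# fetuses for which one whole table is missing, via multiple imputation.
```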
APA, Harvard, Vancouver, ISO, and other styles
50

Roussafi, Ferdaous. "La territorialisation des énergies renouvelables en France." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC045.

Full text
Abstract:
The energy transition to low-carbon energy is now a dominant paradigm in energy-related public policies and a central focus of work for the French regions. The evolution of their relationship with energy is both in line with European experiences and in the wake of a national incentive for transition. The objective of this thesis is to study the territorialization of renewable energy (RE) production according to its different origins (biomass, solar, geothermal, wind and hydro). We assess the regions' performance in diversifying the energy mix in 2015 and over the period 1990-2015, using multidimensional data analysis methods. A typology of French regions characterizing the regional development of renewable energies in France in 2015 is proposed; it highlights the emergence of five typical RE development profiles that contrast strongly across RE sectors. The analysis of evolving data, adopted to study regional dynamics in RE promotion over the period 1990-2015, highlights four sub-periods of RE development. Hierarchical agglomerative clustering over each sub-period revealed three distinct types of RE development profiles and a certain stability in the trajectories of the regions. This very stable structure shows that disparities between regions in the early 1990s persisted throughout the period. Finally, the study of the determinants of RE consumption at the regional level made it possible to identify the main levers favoring their deployment. Through the estimation of a VECM, we show that in the short term, past economic growth, measured by the real GDP growth rate, positively affects RE consumption, while nuclear and industrial production per capita have a negative impact. In the long term, estimates from the FM-OLS and DOLS models indicate that the level of economic development, measured by the logarithm of GDP per capita, has a positive impact on the share of RE in final energy consumption. The results also show that research and development spending favors the use of REs, which depend largely on population density. Finally, we show that at the regional level, the weight of "green" parties has a positive influence on the development of renewable energies.
APA, Harvard, Vancouver, ISO, and other styles
