Dissertations on the topic "OLAP Systems"

To view other types of publications on this topic, follow the link: OLAP Systems.

Cite your source in APA, MLA, Chicago, Harvard, and other styles


Browse the top 50 dissertations for research on the topic "OLAP Systems".

Next to every work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided these are available in the metadata.

Browse dissertations across a wide range of disciplines and compile your bibliography correctly.

1

Kotsis, Nikolaos. "Multidimensional aggregation in OLAP systems." Thesis, University of Strathclyde, 2000. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=21149.

Abstract:
On-line analytical processing (OLAP) provides multidimensional data analysis to support decision making. OLAP queries require extensive computation based on aggregation along many dimensions and hierarchies. The time required to process these queries has traditionally prevented the interactive analysis of large databases, and in order to accelerate query-response time, precomputed results are often stored as materialised views for later retrieval. This adds a prohibitive storage overhead when applied to the whole set of aggregates, known as the data cube. Storage space and computation time can be significantly reduced by partial computation. The challenge in implementing the data cube has been to select the minimum number of views for materialisation, while retaining fast query response time. This thesis makes significant contributions to this area by introducing the Low Redundancy (L-R) approach, which provides the means for the selection, computation and storage of nonredundant aggregates. Firstly, through the introduction of a novel technique, redundant aggregates are identified, thus allowing only distinct aggregates to be computed and stored. Secondly, further redundancy is identified and eliminated using a second novel technique which stores these distinct aggregates in a compact differential form. Novel algorithms were introduced to implement these techniques and provide a solution which is both scalable and low in complexity. Both techniques have been evaluated experimentally using real and synthetic datasets, achieving significant savings in computation time and storage space compared to the conventional approach. Savings have been shown to increase as dimensionality increases. Existing techniques for implementing the data cube differ from the L-R approach, but they can be integrated with it to achieve faster query-response time. Finally, the implications of this work reach beyond the area of OLAP to the fields of decision support systems, user interfaces and data mining.
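For orientation, the redundancy the L-R approach exploits can be illustrated with a standard observation that is not specific to this thesis: a group-by view whose row count equals that of one of its parent views contains no new aggregate values and need not be stored. The sketch below is only that observation, in pandas on invented data, not Kotsis's algorithm.

```python
# Minimal sketch (not the thesis's L-R algorithm): a cuboid with as many
# rows as a parent cuboid adds no new aggregate values and can be skipped.
from itertools import combinations
import pandas as pd

df = pd.DataFrame({
    "product": ["p1", "p1", "p2", "p2"],
    "store":   ["s1", "s2", "s1", "s2"],
    "month":   ["m1", "m1", "m1", "m1"],  # one value only, so rolling it up is free
    "sales":   [10, 20, 30, 40],
})
dims = ["product", "store", "month"]

# Row count of every cuboid in the lattice, keyed by its grouping columns.
sizes = {frozenset(g): (len(df.groupby(list(g))["sales"].sum()) if g else 1)
         for k in range(len(dims) + 1) for g in combinations(dims, k)}

for group, size in sizes.items():
    # Parents are cuboids with exactly one extra grouping column.
    parents = [p for p in sizes if len(p) == len(group) + 1 and group < p]
    if any(sizes[p] == size for p in parents):
        print(sorted(group), "is redundant with a parent cuboid")
```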
2

Aho, Milja. "Optimisation of Ad-hoc analysis of an OLAP cube using SparkSQL." Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-329938.

Abstract:
An Online Analytical Processing (OLAP) cube is a way to represent a multidimensional database. The multidimensional database often uses a star schema and is populated with data from a relational database. The purpose of using an OLAP cube is usually to find valuable insights in the data, such as trends or unexpected values, and it is therefore often used within Business Intelligence (BI). Mondrian is a tool that handles OLAP cubes, uses the query language MultiDimensional eXpressions (MDX), and translates it to SQL queries. Apache Kylin is an engine that can be used with Apache Hadoop to create and query OLAP cubes through an SQL interface. This thesis investigates whether the Apache Spark engine running on a Hadoop cluster is suitable for analysing OLAP cubes and what performance can be expected. The Star Schema Benchmark (SSB) has been used to provide Ad-Hoc queries and to create a large database containing over 1.2 billion rows. This database was created on a cluster in the Omicron office consisting of five worker nodes and one master node. Queries were then sent to the database using Mondrian integrated into the BI platform Pentaho. Amazon Web Services (AWS) has also been used to create clusters with 3, 6 and 15 slaves to see how the performance scales. Creating a cube in Apache Kylin on the Omicron cluster was also attempted, but was not possible because the cluster ran out of memory. The results show that it took between 8.2 and 11.9 minutes to run the MDX queries on the Omicron cluster. On both the Omicron cluster and the AWS cluster, the SQL queries ran faster than the MDX queries. The AWS cluster ran the queries faster than the Omicron cluster, even though fewer nodes were used. It was also noted that the AWS cluster did not scale linearly, neither for the MDX nor the SQL queries.
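For context, the Ad-Hoc workload comes from the public Star Schema Benchmark; its query Q1.1 can be issued directly through SparkSQL as below. The SparkSession setup and table locations are assumptions for illustration; the query text itself is the standard SSB Q1.1.

```python
# Illustrative only: running SSB Q1.1 through SparkSQL. Table locations and
# storage format are assumptions; the query text is the standard benchmark one.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ssb-q11").getOrCreate()
spark.read.parquet("/data/ssb/lineorder").createOrReplaceTempView("lineorder")
spark.read.parquet("/data/ssb/date").createOrReplaceTempView("date")

spark.sql("""
    SELECT SUM(lo_extendedprice * lo_discount) AS revenue
    FROM lineorder, date
    WHERE lo_orderdate = d_datekey
      AND d_year = 1993
      AND lo_discount BETWEEN 1 AND 3
      AND lo_quantity < 25
""").show()
```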
3

Maknytė, Lina. "Intelektualių veslo sistemų modeliavimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120620_111437-00453.

Abstract:
The first part of the thesis examines the economic problems organizations face and how those problems can be resolved. After analysing the possible problems, business intelligence systems are examined as one of the best solutions. The thesis also discusses the definition of business intelligence systems, explains their architecture, and distinguishes how BI systems differ from other management information systems. The second part examines the main problems related to business intelligence systems and their deployment in organizations. The currently most widespread technologies, OLAP and QlikView, which use in-memory data loading, are analysed, together with their advantages, disadvantages and differences. The third part analyses a design methodology for business intelligence systems, asking what must be done to obtain a system that meets the organization's expectations. The design structure is analysed using UML diagrams, as is the importance of the individual elements within the system. In the fourth part, example models of business intelligence systems are designed; they illustrate the advantages of business intelligence systems and how such systems work when supporting purchasing functions based on marketing principles.
4

Fischer, Ulrike. "Forecasting in Database Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-133281.

Abstract:
Time series forecasting is a fundamental prerequisite for decision-making processes and crucial in a number of domains such as production planning and energy load balancing. In the past, forecasting was often performed by statistical experts in dedicated software environments outside of current database systems. However, forecasts are increasingly required by non-expert users or have to be computed fully automatically without any human intervention. Furthermore, we can observe an ever increasing data volume and the need for accurate and timely forecasts over large multi-dimensional data sets. As most data subject to analysis is stored in database management systems, a rising trend addresses the integration of forecasting inside a DBMS. Yet, many existing approaches follow a black-box style and try to keep changes to the database system as minimal as possible. While such approaches are more general and easier to realize, they miss significant opportunities for improved performance and usability. In this thesis, we introduce a novel approach that seamlessly integrates time series forecasting into a traditional database management system. In contrast to flash-back queries that allow a view on the data in the past, we have developed a Flash-Forward Database System (F2DB) that provides a view on the data in the future. It supports a new query type - a forecast query - that enables forecasting of time series data and is automatically and transparently processed by the core engine of an existing DBMS. We discuss necessary extensions to the parser, optimizer, and executor of a traditional DBMS. We furthermore introduce various optimization techniques for three different types of forecast queries: ad-hoc queries, recurring queries, and continuous queries. First, we ease the expensive model creation step of ad-hoc forecast queries by reducing the amount of processed data with traditional sampling techniques. Second, we decrease the runtime of recurring forecast queries by materializing models in a specialized index structure. However, a large number of time series as well as high model creation and maintenance costs require a careful selection of such models. Therefore, we propose a model configuration advisor that determines a set of forecast models for a given query workload and multi-dimensional data set. Finally, we extend forecast queries with continuous aspects allowing an application to register a query once at our system. As new time series values arrive, we send notifications to the application based on predefined time and accuracy constraints. All of our optimization approaches intend to increase the efficiency of forecast queries while ensuring high forecast accuracy.
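The abstract does not reproduce F2DB's actual query syntax, so the following is only a toy illustration of the idea of a forecast query: a model (here simple exponential smoothing, implemented inline) is fitted to a stored series and future tuples are emitted. The commented query syntax and all names are hypothetical, not the system's real interface.

```python
# Hypothetical flavor of a forecast query (not F2DB's real syntax):
#   SELECT time, value FROM sales FORECAST 3 PERIODS;
# Toy evaluation: fit simple exponential smoothing, emit future tuples.

def forecast(series, horizon, alpha=0.5):
    """Simple exponential smoothing; its h-step-ahead forecasts are flat."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return [level] * horizon

sales = [100, 104, 101, 108, 110, 115]
for step, value in enumerate(forecast(sales, horizon=3), start=1):
    print(f"t+{step}: {value:.1f}")
```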
5

Jernberg, Robert, and Tobias Hultgren. "Flexible Data Extraction for Analysis using Multidimensional Databases and OLAP Cubes." Thesis, KTH, Data- och elektroteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123393.

Abstract:
Bright is a company that provides customer and employee satisfaction surveys, and uses this information to provide feedback to their customers. Data from the surveys are stored in a relational database, and information is generated both by directly querying the database and by analysing extracted data. As the amount of data grows, generating this information takes increasingly more time. Extracting the data requires significant manual work and is in practice avoided. As this is not an uncommon issue, there is a substantial theoretical framework around the area. The aim of this degree project is to explore different methods for achieving flexible and efficient data analysis on large amounts of data. This was implemented using a multidimensional database designed for analysis as well as an OnLine Analytical Processing (OLAP) cube built using Microsoft's SQL Server Analysis Services (SSAS). The cube was designed with the possibility to extract data on an individual level through PivotTables in Excel. The implemented prototype was analyzed, showing that it consistently delivers correct results several times more efficiently than the current solution, while making new types of analysis possible and convenient. It is concluded that the use of an OLAP cube was a good choice for the issue at hand, and that the use of SSAS provided the necessary features for a functional prototype. Finally, recommendations on possible further developments were discussed.
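The individual-level extraction that the cube exposes through Excel PivotTables has a rough offline analogue in pandas' pivot_table, sketched below on invented survey columns; the actual cube in the thesis is built in SSAS, not in pandas.

```python
# Rough pandas analogue of pivoting survey answers down to individual level.
# Column names are invented; the thesis's actual cube is built in SSAS.
import pandas as pd

answers = pd.DataFrame({
    "respondent": [1, 1, 2, 2, 3, 3],
    "question":   ["q1", "q2", "q1", "q2", "q1", "q2"],
    "survey":     ["2013-spring"] * 6,
    "score":      [4, 5, 3, 4, 5, 5],
})

# One row per respondent, one column per question: the "individual level".
pivot = answers.pivot_table(index=["survey", "respondent"],
                            columns="question", values="score")
print(pivot)
```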
6

Bell, Daniel M. "An evaluative case report of the group decision manager : a look at the communication and coordination issues facing online group facilitation /." free to MU campus, to others for purchase, 1998. http://wwwlib.umi.com/cr/mo/fullcit?p9901215.

7

Jäcksch, Bernhard [Verfasser]. "A Plan For OLAP: Optimization Of Financial Planning Queries In Data Warehouse Systems / Bernhard Jäcksch." München : Verlag Dr. Hut, 2011. http://d-nb.info/1017353700/34.

8

Funke, Florian Andreas [Verfasser], Alfons [Akademischer Betreuer] Kemper, Thomas [Akademischer Betreuer] Neumann, and Stefan [Akademischer Betreuer] Manegold. "Adaptive Physical Optimization in Hybrid OLTP & OLAP Main-Memory Database Systems / Florian Andreas Funke. Gutachter: Thomas Neumann ; Alfons Kemper ; Stefan Manegold. Betreuer: Alfons Kemper." München : Universitätsbibliothek der TU München, 2015. http://d-nb.info/1076124976/34.

9

Feng, Haitang. "Data management in forecasting systems : optimization and maintenance." Phd thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00997235.

Abstract:
Forecasting systems are usually based on data warehouses for data storage and OLAP tools for historical and predictive data visualization. Aggregated predictive data can be modified. Hence, the research issue can be described as the propagation of an aggregate-based modification through the hierarchies and dimensions of a data warehouse environment. There is a great number of research works on related view-maintenance problems. However, to our knowledge, the impact of interactive aggregate modifications on raw data had not been investigated. This CIFRE thesis is supported by ANRT and the company Anticipeo. The application of Anticipeo is a sales forecasting system that predicts future sales in order to draw up an appropriate business strategy in advance. At the beginning of the thesis, the customers of Anticipeo were satisfied with the precision of the prediction results, but not with the response time. The work of this thesis can be divided into two parts. The first part consists of an audit of the existing application. We proposed a methodology relying on different technical solutions; it concerns the propagation of an aggregate-based modification in a data warehouse. The second part of our work consists of the proposition of a new algorithm (PAM, Propagation of Aggregate-based Modification) with an extended version (PAM II) to efficiently propagate an aggregate-based modification. The algorithms identify and update the exact sets of source data and other aggregates impacted by the aggregate modification. The optimized PAM II version achieves better performance compared to PAM when the use of additional semantics (e.g. dependencies) is possible. Experiments on real Anticipeo data proved that the PAM algorithm and its extension bring better performance when a backward propagation is performed.
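The abstract leaves the propagation mechanics out; a common baseline for pushing an edited aggregate back to its source rows is proportional disaggregation, sketched below. This is only that baseline, not the PAM algorithm.

```python
# Baseline sketch (not PAM): propagate an edited aggregate back to its
# source rows proportionally, so the sources again sum to the new value.

def propagate(sources, new_total):
    old_total = sum(sources)
    if old_total == 0:                      # degenerate case: spread evenly
        return [new_total / len(sources)] * len(sources)
    return [x * new_total / old_total for x in sources]

monthly_sales = [120.0, 80.0, 200.0]        # raw data behind one aggregate
print(propagate(monthly_sales, new_total=500.0))  # [150.0, 100.0, 250.0]
```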
10

Westerlund, Elisabeth, and Hanna Persson. "Implementation of Business Intelligence Systems : A study of possibilities and difficulties in small IT-enterprises." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-255915.

Abstract:
This thesis is written at the Department of Business Studies at Uppsala University. The study addresses the differences in possibilities and difficulties of implementing business intelligence (BI) systems among small IT enterprises. BI systems support enterprises in decision making. To answer the aim of this thesis, theories regarding organizational factors that determine a successful implementation of a BI system were used, as well as theories regarding the components of BI systems, data warehouses (DW) and online analytical processing (OLAP). These components enable the decision support provided by a BI system. A qualitative study was performed at four different IT enterprises to gather the empirical material. Interviews were performed with CEOs and additional employees at the enterprises. After the empirical material was gathered, an analysis was performed to draw conclusions regarding the research topic. The study concluded that there are differences in the possibilities and difficulties of implementing BI systems among small IT enterprises. One difference among the enterprises is the perceived ability to finance an implementation. Another lies in the managerial and organizational support of an implementation, but also in the business need of using a BI system in decision making. There are also differences in how the enterprises use a DW. Not all enterprises benefit from the ability of a DW to manage complex and large amounts of data, nor from the advanced analysis performed by OLAP. The enterprises thus need to examine further whether the use of a BI system is beneficial and would be used successfully in their company.
11

Madron, Lukáš. "Datové sklady a OLAP v prostředí MS SQL Serveru." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235916.

Abstract:
This paper deals with data warehouses and OLAP. These technologies are defined and described, followed by an introduction to the architecture of MS SQL Server and its tools for working with data warehouses and OLAP. The knowledge gained is then used for the creation of a sample application.
12

Meira, Katia Milena Gonçalves. "Utilização da tecnologia Data Warehousing e da ferramenta OLAP para apoiar a captação de doadores de sangue: estudo de caso no Hemonúcleo Regional de Jáu." Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/18/18140/tde-01082017-115123/.

Abstract:
Decision-support research in the health area often focuses on diagnosis rather than on the managerial character of the institution. Because of this, health units, with all their particularities, need reliable and precise information to help them in their decision making. Blood banks face a constant battle to recruit blood donors in order to guarantee a good quantity and quality of blood components for their region, and they need information that can help them maintain blood stocks. Data Warehouse technology brings many benefits because it allows the storage of historical data that, when related, can reveal tendencies and identify unknown relationships; moreover, Data Warehouse technology frequently comes with user-friendly interface tools. Therefore, this research aims to develop a decision-support tool that extracts data from the current transactional database of the Regional Blood Bank of Jau and consolidates these data so that the information may be easily and quickly accessed by the final users.
13

Kamath, Akash S. "An efficient algorithm for caching online analytical processing objects in a distributed environment." Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1174678903.

14

Weherage, Pradeep Peiris. "BigDataCube: Distributed Multidimensional Data Cube Over Apache Spark : An OLAP framework that brings Multidimensional Data Analysis to modern Distributed Storage Systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215696.

Abstract:
Multidimensional data analysis is an important subdivision of the data analytics paradigm. The data cube provides the base abstraction for multidimensional data analysis and helps in discovering useful insights from a dataset. On-Line Analytical Processing (OLAP) takes it to the next level, supporting online responses to analytical queries with an underlying technique that precomputes (materializes) the data cubes. Data cube materialization is significant for OLAP, but it is an expensive task in terms of data processing and storage. Most early decision support systems exploit the value of multidimensional data analysis with a standard data architecture that extracts, transforms and loads data from multiple data sources into a centralized database called a data warehouse, on which OLAP engines provide the data cube abstraction. But this architecture and traditional OLAP engines do not hold up against today's massive datasets. Today, we have distributed storage systems that keep data on a cluster of computer nodes, on which distributed data processing engines like MapReduce, Spark, Storm, etc. provide more ad-hoc style data analytical capabilities. Yet there is no proper distributed-systems approach available for multidimensional data analysis, nor is any distributed OLAP engine available that performs distributed data cube materialization. A proper distributed data cube materialization mechanism is essential to support multidimensional data analysis over present distributed storage systems. Various research works have considered MapReduce for data cube materialization, and Apache Spark recently enabled a CUBE operator as part of its DataFrame API. The thesis raises the problem statement (which is the better distributed-systems approach for data cube materialization, MapReduce or Spark?) and contributes experiments that compare the two distributed systems in materializing data cubes over varying numbers of records, dimensions and cluster sizes. The results confirm that Spark is more scalable and efficient in data cube materialization than MapReduce. The thesis further contributes a novel framework, BigDataCube, which uses Spark DataFrames underneath for materializing data cubes and fulfills the need for multidimensional data analysis over modern distributed storage systems.
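The CUBE operator mentioned at the end of the abstract is part of Spark's DataFrame API; a minimal use looks as follows (data invented):

```python
# Minimal use of Spark's DataFrame cube() operator, which computes all
# grouping-set combinations of the listed dimensions. Data is invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cube-demo").getOrCreate()
df = spark.createDataFrame(
    [("p1", "s1", 10), ("p1", "s2", 20), ("p2", "s1", 30)],
    ["product", "store", "sales"],
)

# Rows with NULL in a dimension column hold the aggregates over that dimension.
df.cube("product", "store").agg(F.sum("sales").alias("sales")).show()
```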
15

Tocci, Gabriel. "A Comparison of Leading Database Storage Engines in Support of Online Analytical Processing in an Open Source Environment." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etd/1111.

Abstract:
Online Analytical Processing (OLAP) has become the de facto data analysis technology used in modern decision support systems. It has experienced tremendous growth and is among the top priorities for enterprises. Open source systems have become an effective alternative to proprietary systems in terms of cost and function. The purpose of the study was to investigate the performance of two leading database storage engines in an open source OLAP environment. Despite recent upgrades in performance features for the InnoDB database engine, the MyISAM database engine is shown to outperform the InnoDB database engine under a standard benchmark. This result was demonstrated in tests that included concurrent user sessions as well as asynchronous user sessions, using data sets ranging from 6GB to 12GB. Although MyISAM outperformed InnoDB in all tests performed, InnoDB provides ACID-compliant transaction technologies that are beneficial in a hybrid OLAP/OLTP system.
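For context, the storage-engine choice the study benchmarks is declared per table in MySQL; a minimal illustration via the mysql-connector-python package is below, with invented connection parameters and table definitions.

```python
# Illustration only: the engine comparison in the study comes down to a
# per-table ENGINE clause in MySQL. Connection details are invented.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="olap",
                               password="secret", database="ssb")
cur = conn.cursor()
# The same logical table under the two engines compared in the study.
cur.execute("CREATE TABLE facts_myisam (k INT, v INT) ENGINE=MyISAM")
cur.execute("CREATE TABLE facts_innodb (k INT, v INT) ENGINE=InnoDB")
conn.commit()
conn.close()
```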
16

Filus, Michal. "Podpora rozhodování v CRM systému." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-235527.

Abstract:
The thesis deals with decision-making support in CRM systems. The goal of this thesis was to design a module for decision-making support in a CRM system using Pentaho analysis tools. The theoretical part contains a description of data warehousing and data mining with a focus on analytic operations and decision-making support. It also contains a brief description of CRM systems and possible applications of decision-making support in these systems. The practical part describes the architecture of the CRM system CRMminer and the decision-making support module in this system.
17

Arres, Billel. "Optimisation des performances dans les entrepôts distribués avec Mapreduce : traitement des problèmes de partionnement et de distribution des données." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE2012.

Abstract:
In this manuscript, we address the problems of data partitioning and distribution for large-scale data warehouses distributed with MapReduce. First, we address the problem of data distribution. In this case, we propose a strategy to optimize data placement on distributed systems, based on the collocation principle. The objective is to optimize query performance through the definition of an intentional data distribution schema that reduces the amount of data transferred between nodes during processing, specifically during MapReduce's shuffle phase. Secondly, we propose a new approach to improve data partitioning and placement in distributed file systems, especially Hadoop-based systems, Hadoop being the standard implementation of the MapReduce paradigm. The aim is to overcome the default data partitioning and placement policies, which do not take any relational data characteristics into account. Our proposal proceeds in two steps: based on the query workload, it defines an efficient partitioning schema; after that, the system defines a data distribution schema that best meets the users' needs by collocating data blocks on the same or the closest nodes. The objective in this case is to optimize query execution and parallel processing performance by improving data access. Our third proposal addresses the problem of workload dynamicity, since users' analytical needs evolve over time. In this case, we propose the use of multi-agent systems (MAS) as an extension of our data partitioning and placement approach. Exploiting the autonomy and self-control that characterize MAS, we developed a platform that automatically defines new distribution schemas as new queries arrive in the system, and rebalances data according to the new schema. This relieves the system administrator of the burden of managing load balance, besides improving query performance through careful data partitioning and placement policies. Finally, to validate our contributions, we conducted a set of experiments to evaluate the different approaches proposed in this manuscript. We study the impact of intentional data partitioning and distribution on the data warehouse loading phase, the execution of analytical queries, OLAP cube construction, and load balancing. We also defined a cost model that allowed us to evaluate and validate the partitioning strategy proposed in this work.
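The workload-driven step, capturing affinities between attributes that co-occur in queries, has a classical form: the attribute affinity matrix. The sketch below computes one from a toy workload; it illustrates that step only and is not the thesis's partitioning algorithm.

```python
# Sketch of the classical attribute-affinity computation: count how often two
# attributes are used by the same query. Workload and attributes are invented.
from itertools import combinations
from collections import Counter

workload = [                      # attributes referenced by each query
    {"customer", "region", "sales"},
    {"customer", "sales"},
    {"region", "year"},
]

affinity = Counter()
for attrs in workload:
    for a, b in combinations(sorted(attrs), 2):
        affinity[(a, b)] += 1

# High-affinity attribute pairs are candidates for the same vertical fragment.
for pair, count in affinity.most_common():
    print(pair, count)
```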
18

Oketunji, Temitope, and Olalekan Omodara. "Design of Data Warehouse and Business Intelligence System : A case study of Retail Industry." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3738.

Abstract:
The Business Intelligence (BI) concept has continued to play a vital role in enabling managers to make quality business decisions that address the needs of the organization. BI applications allow managers to query, comprehend, and evaluate existing data within their organizations in order to obtain functional knowledge which then assists them in making improved and informed decisions. A data warehouse (DW) is pivotal and central to BI applications in that it integrates several diverse data sources, mainly structured transactional databases. However, current research in the area of BI suggests that data is no longer confined to structured databases and formats; it can also be pulled from unstructured sources to give managers' analysis more power. Consequently, the ability to manage this information is critical for the success of the decision-making process. The operational data needs of an organization are addressed by online transaction processing (OLTP) systems, which are important to the day-to-day running of its business. Nevertheless, they are not well suited for sustaining decision-support queries or the business questions that managers normally need to address. Such questions involve analytics including aggregation, drill-down, and slicing/dicing of data, which are best supported by online analytical processing (OLAP) systems. Data warehouses support OLAP applications by storing and maintaining data in multidimensional format. Data in an OLAP warehouse is extracted and loaded from multiple OLTP data sources (including DB2, Oracle, SQL Server and flat files) using Extract, Transform, and Load (ETL) tools. This thesis seeks to develop a DW and BI system to support the decision makers and business strategists at Crystal Entertainment in making better decisions using historical structured or unstructured data.
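The ETL flow the abstract describes, pulling from several OLTP sources into one conformed fact table, can be miniaturized as follows; sources, columns and destination are all invented.

```python
# Toy ETL sketch: extract from two invented sources, conform the schema,
# load into one fact table. Real ETL tools add staging, cleansing and logging.
import pandas as pd

orders_db2 = pd.DataFrame({"cust": ["a", "b"], "amt": [10.0, 20.0]})
orders_csv = pd.DataFrame({"customer": ["c"], "amount": [30.0]})

# Transform: conform column names and tag each row with its source system.
facts = pd.concat([
    orders_db2.rename(columns={"cust": "customer", "amt": "amount"}).assign(src="db2"),
    orders_csv.assign(src="flatfile"),
], ignore_index=True)

facts.to_csv("fact_sales.csv", index=False)  # load step; destination invented
print(facts)
```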
19

Baker, Elizabeth White. "The Impact of Relational Model Bases on Organizational Decision Making: Cases in E-Commerce and Ecological Economics." VCU Scholars Compass, 2006. http://hdl.handle.net/10156/1399.

20

Fortulan, Marcos Roberto. "O uso de business intelligence para gerar indicadores de desempenho no chão-de-fábrica: uma proposta de aplicação em uma empresa de manufatura." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/18/18145/tde-11062006-185813/.

Abstract:
The evolution of the shop floor over the last century has transformed it into a strategic area for companies, through which it is possible to meet and satisfy customer needs. This new shop floor now generates a great amount of data through its production process controls; in many cases, after immediate or short-term use, these data are discarded or stored inadequately, making access difficult or impossible. These data, however, can serve as raw material for producing information useful to business management. Together with the need companies have today for an appropriate performance measurement system, it is possible to obtain, from historical shop floor data, a good set of performance indicators for the area. For that, these data must be modeled in systems specifically designed for this purpose. Such systems are known as decision support systems (DSS) or Business Intelligence (BI). As a solution for the problems above, a review was made of the following themes: information systems, ERP, performance measurement systems, information quality, and DSS/BI, as well as of the scientific works related to the theme of this thesis. Once these concepts were consolidated, a dimensional BI model was developed using Data Warehouse, On-Line Analytical Processing (OLAP) and Data Mining tools. The software used was Analysis Services, part of the Microsoft SQL Server 2000 database. The model was then tested with real data from a company in the metal-mechanics sector, referred to here as "company A". Through the model and the real data, a series of analyses was carried out to show the model's contribution, capacity, flexibility and ease of use, reaching the proposed objective.
21

Bouadi, Tassadit. "Analyse multidimensionnelle interactive de résultats de simulation : aide à la décision dans le domaine de l'agroécologie." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00933375.

Abstract:
In this thesis, we are interested in the analysis of simulation data produced by the agro-hydrological model TNT. The objectives were to devise methods for analyzing simulation results that put the user back at the heart of the decision process and that allow large volumes of data to be analyzed and interpreted efficiently. The approach developed relies on interactive multidimensional analysis. First, we proposed a method for archiving simulation results in a decision-oriented database (i.e. a data warehouse) suited to the spatio-temporal character of the simulation data produced. We then suggested analyzing these simulation data with on-line analytical processing (OLAP) methods, so as to provide stakeholders with strategic information that improves the decision-support process. Finally, we proposed two skyline-extraction methods in the data warehouse context, which let stakeholders formulate new questions by combining contradictory environmental criteria, find the compromise solutions matching their expectations, and then exploit stakeholder preferences to detect and highlight the data most likely to interest them. The first method, EC2Sky, enables incremental and efficient skyline computation in the presence of dynamic user preferences, even over large data volumes. The second method, HSky, extends skyline search to hierarchical dimensions, allowing users to navigate along the axes of the hierarchical dimensions (i.e. specialization/generalization) while the corresponding skyline points are computed online. These contributions were motivated and validated experimentally by an application to the management of agricultural practices for improving water quality in agricultural catchments, and we proposed a coupling between the agro-hydrological data warehouse we built and the proposed skyline-extraction methods.
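A skyline, as used here, is the set of points that no other point dominates on all criteria. A basic block-nested-loops computation over invented data is sketched below; it shows the underlying notion only, not EC2Sky or HSky.

```python
# Basic skyline (Pareto set) sketch; not EC2Sky or HSky. A point dominates
# another if it is at least as good on every criterion and strictly better
# on at least one. Here "better" means smaller (e.g., two pollutant levels).

def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

practices = [(3, 9), (5, 4), (2, 7), (6, 6), (4, 4)]  # invented criteria values
print(skyline(practices))  # -> [(2, 7), (4, 4)]
```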
22

Petroski, Luiz Pedro. "Uma arquitetura para integração de ambientes data warehouse, espacial e agricultura de precisão." UNIVERSIDADE ESTADUAL DE PONTA GROSSA, 2017. http://tede2.uepg.br/jspui/handle/prefix/143.

Abstract:
The aim of this work is to present a proposal for integration between precision agriculture, Data Warehouse/OLAP and GIS. The integration should use extensible and open components, agricultural modeling for decision support, geographical data support, a communication interface between components, and the extension of existing GIS and Data Warehouse solutions. As a result of the integration, an open and extensible architecture was defined, with a spatial agricultural data warehouse model. The technologies and tools are open and allow their functionalities to be implemented and extended to suit agricultural decision-making scenarios. To perform the integration, data were obtained from a farm in the city of Piraí do Sul/PR, which uses proprietary software for data management. Data were exported to the SHAPEFILE format and, through a process performed by an ETL tool, extracted, transformed and loaded into the analytical database. Data from IBGE were also used as a source of political boundaries for the rural regions of Brazil. The database was modeled and implemented in the PostgreSQL DBMS with the PostGIS extension to support spatial data. To provide the OLAP query service, the GeoMondrian server was used. The application was extended from the GeoNode project, where analytic functionalities were implemented, and the interface between the application and the OLAP server was provided by the Mandoline API and the OLAP4J library. Finally, the user interface was implemented through JavaScript libraries for creating charts, tables and maps. As the main result, an architecture was obtained for integrating the Data Warehouse, OLAP operations, and agricultural and spatial data, together with a definition of the ETL process and the user interface.
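A flavor of the spatial aggregation this architecture targets, rolling an agronomic measure up within a geographic boundary, can be sketched with psycopg2 against PostGIS; table and column names below are invented, while ST_Intersects is the actual PostGIS predicate.

```python
# Illustration of a spatial OLAP-style query over PostGIS: aggregate an
# agronomic measure by plot within one municipality. Table and column names
# are invented; ST_Intersects is the actual PostGIS predicate.
import psycopg2

conn = psycopg2.connect("dbname=agri user=analyst")
cur = conn.cursor()
cur.execute("""
    SELECT p.plot_id, AVG(s.yield_kg_ha)
    FROM plot p
    JOIN sample s ON s.plot_id = p.plot_id
    JOIN municipality m ON ST_Intersects(p.geom, m.geom)
    WHERE m.name = 'Piraí do Sul'
    GROUP BY p.plot_id
""")
for row in cur.fetchall():
    print(row)
conn.close()
```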
23

Gouveia, Roberta Macêdo Marques. "Mineração de dados em data warehouse para sistema de abastecimento de água." Universidade Federal da Paraí­ba, 2009. http://tede.biblioteca.ufpb.br:8080/handle/tede/6054.

Abstract:
This work proposes to use database technologies to provide decision support for managers in the sanitation sector, given that water supply services are a key indicator of the population's quality of life. The fundamental idea is to collect operational data, reduce them to the scope of the problem, organize them in a data repository, and finally apply OLAP techniques and Data Mining algorithms to obtain results that give managers a better understanding of the behavior and profile of the company. Applying Data Mining techniques requires that the data be stored appropriately. Accordingly, one alternative for increasing the efficiency of data storage, management and operation for decision support is based on the development of a Data Warehouse. This environment constitutes a source of strategic business information, creating a competitive differential for the company. In this context, it was necessary to implement the data repository, the Data Warehouse, to store, integrate and run multidimensional queries over the data extracted from the water supply company. Therefore, this Master's thesis aims to design a departmental Data Warehouse, also known as a Data Mart, for the commercial sector; to apply OLAP technology over the multidimensional data cubes; and to run Data Mining algorithms in order to produce a decision support system for minimizing apparent losses in the urban water supply system.
24

Widehammar, Per, and Robin Langell. "BIOMA : En modell för att bedöma en organisations BI-mognad ur ett multidimensionellt perspektiv." Thesis, Linköpings universitet, Institutionen för ekonomisk och industriell utveckling, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-59872.

Abstract:
The increased globalization and the recent financial crisis have put high demands on the monitoring and awareness of an organization's performance. Business Intelligence (BI) is an area which aims to improve this performance through analysis of historical data. BI is a complex matter for organizations because it involves more than just technical solutions. Organizations are currently investing in different BI solutions, and a list of priorities has to be made to ensure balanced resource allocation within a BI implementation. To this day, no single business intelligence model exists that can adequately measure a company's work from several perspectives. The purpose of this study was to develop a maturity model for BI and use it in a case study of three well-known Swedish companies, Axfood, Scania and Systembolaget, to measure their BI maturity. To achieve the purpose of the study, three distinct research questions arose: "What would a model for measuring a company's Business Intelligence maturity look like?", "How would this model be constructed?", and finally "What conditions could potentially affect an organization's maturity in Business Intelligence?". The maturity model BIOMA (Business Intelligence Organizational Maturity Analysis) is made up of four categories, which in turn are divided into one or more subcategories. A subcategory consists of several statements, and each statement carries a certain number of points. When the points are combined, the summarized amount is inserted into a coordinate system in which the axes correspond to the four pillars and the score is measured from the origin. Measuring a company's BI maturity is a complex research question, where a number of aspects such as organizational structure, end-user involvement, and the gap between the IT department and the business can be of great importance. BIOMA was empirically tested in the case study. The respondents in each company judged their company based on the statements in each subcategory and then made suggestions on ways to change the model. By applying these suggestions to the original material, the model was redeveloped to create a final version. The model can be used for various purposes, such as benchmarking between processes and companies, sales support for consultants, or as a pilot study clarifying a company's present BI maturity. In the absence of a model that can visually describe a company's BI maturity multidimensionally, we believe that BIOMA has substantial business potential.
APA, Harvard, Vancouver, ISO, and other styles
25

Koylu, Caglar. "A Case Study In Weather Pattern Searching Using A Spatial Data Warehouse Model." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12609573/index.pdf.

Full text of the source
Abstract:
Data warehousing and Online Analytical Processing (OLAP) technology have been used to access, visualize and analyze multidimensional, aggregated and summarized data. A large part of this data contains spatial components, which convey valuable information and must therefore be included in the exploration and analysis phases of a spatial decision support system (SDSS). Geographic Information Systems (GISs), in turn, provide a wide range of tools to analyze spatial phenomena and must likewise be included in the analysis phases of a decision support system (DSS). In this regard, this study addresses the problem of how to design a spatially enabled data warehouse architecture that supports spatio-temporal analysis and exploration of multidimensional data. The concepts of OLAP and GISs are synthesized in an integrated fashion, maximizing the benefits generated from the strengths of both systems, by building a spatial data warehouse model; a multidimensional spatio-temporal data model is proposed as a result of this synthesis. This model addresses the integration of spatial, non-spatial and temporal data and facilitates spatial data exploration and analysis. The model is evaluated by implementing a case study in weather pattern searching.
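To make the kind of operation such a model supports concrete, here is a minimal sketch of a spatial roll-up, assuming a pandas-style fact table and a hypothetical station-to-region hierarchy; none of the names or figures come from the thesis.

    # Minimal sketch: roll weather facts up a hypothetical spatial hierarchy.
    import pandas as pd

    facts = pd.DataFrame({
        "station": ["ANK01", "ANK02", "IST01", "IST02"],
        "month":   ["2007-01", "2007-01", "2007-01", "2007-01"],
        "rainfall_mm": [42.0, 38.5, 101.2, 96.7],
    })
    # Hypothetical station -> region mapping (one level of the hierarchy).
    hierarchy = {"ANK01": "Ankara", "ANK02": "Ankara",
                 "IST01": "Istanbul", "IST02": "Istanbul"}

    facts["region"] = facts["station"].map(hierarchy)   # climb the spatial hierarchy
    rollup = facts.groupby(["region", "month"])["rainfall_mm"].mean()
    print(rollup)                                       # facts aggregated at region level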
APA, Harvard, Vancouver, ISO, and other styles
26

Aligon, Julien. "Similarity-based recommendation of OLAP sessions." Thesis, Tours, 2013. http://www.theses.fr/2013TOUR4022/document.

Full text of the source
Abstract:
OLAP (On-Line Analytical Processing) is the main paradigm for accessing multidimensional data in data warehouses. To obtain high querying expressiveness despite a small query formulation effort, OLAP provides a set of operations (such as drill-down and slice-and-dice) that transform one multidimensional query into another, so that OLAP queries are normally formulated as sequences called OLAP sessions. During an OLAP session the user analyzes the results of a query and, depending on the specific data she sees, applies one operation to determine a new query that will give her a better understanding of the information. The resulting sequences of queries are strongly related to the issuing user, to the analyzed phenomenon and to the current data. While it is universally recognized that OLAP tools play a key role in supporting flexible and effective exploration of multidimensional cubes in data warehouses, it is also commonly agreed that the huge number of possible aggregations and selections that can be operated on the data may make the user experience disorienting.
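As an illustration of the session concept described above, the following sketch replays an initial query, a drill-down and a slice as successive aggregations of a toy fact table; the schema and data are assumptions for illustration only, not material from the thesis.

    # Minimal sketch of an OLAP session as a sequence of query transformations.
    import pandas as pd

    sales = pd.DataFrame({
        "year":    [2012, 2012, 2012, 2013, 2013],
        "region":  ["EU", "EU", "US", "EU", "US"],
        "product": ["A", "B", "A", "A", "B"],
        "amount":  [100.0, 80.0, 120.0, 90.0, 150.0],
    })

    q1 = sales.groupby(["year"])["amount"].sum()             # initial query: totals per year
    q2 = sales.groupby(["year", "region"])["amount"].sum()   # drill-down: add the region level
    q3 = sales[sales["region"] == "EU"] \
             .groupby(["year", "region"])["amount"].sum()    # slice: keep only the EU member
    print(q1, q2, q3, sep="\n\n")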
APA, Harvard, Vancouver, ISO, and other styles
27

Totok, Andreas. "Modellierung von OLAP- und Data-Warehouse-Systemen /." Wiesbaden : Dt. Univ.-Verl. [u.a.], 2000. http://www.gbv.de/dms/ilmenau/toc/312031483.PDF.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
28

Koukal, Bohuslav. "OLAP Recommender: Supporting Navigation in Data Cubes Using Association Rule Mining." Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359132.

Full text of the source
Abstract:
Beyond a certain data volume, manually exploring data cubes in search of potentially interesting and useful information becomes time-consuming and ineffective. In my thesis, I designed, implemented and tested a system that automates data cube exploration and offers potentially interesting views on OLAP data to the end user. The system is based on the integration of two data analytics methods: OLAP data visualisation and data mining, represented by GUHA association rule mining. Another contribution of my work is an investigation of how to bridge the differences between OLAP analysis and association rule mining. The implemented solutions include data discretization, making dimensions commensurable, an algorithm that derives the data mining task automatically from the data structure, and a mapping between mined association rules and the corresponding OLAP visualisations. The system was tested with real retail sales data and with EU structural funds data. The experiments showed that using association rule mining together with OLAP analysis identifies relationships in the data with a higher success rate than the isolated use of either technique.
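The following sketch illustrates two of the bridges the abstract mentions, discretizing a measure and mapping a rule back to a view: it scores one candidate association rule whose antecedent then doubles as the filter of an OLAP view. The column names, bins, thresholds and data are hypothetical.

    # Minimal sketch: discretize a measure, score one rule, reuse it as a view filter.
    import pandas as pd

    cube = pd.DataFrame({
        "region":  ["EU", "EU", "US", "US", "EU", "US"],
        "product": ["A", "B", "A", "B", "A", "B"],
        "sales":   [5.0, 52.0, 48.0, 7.0, 60.0, 9.0],
    })
    # Discretize the continuous measure so it can appear in a rule.
    cube["sales_level"] = pd.cut(cube["sales"], bins=[0, 20, 100],
                                 labels=["low", "high"])

    # Score one candidate rule: region=EU & product=A => sales_level=high.
    antecedent = (cube["region"] == "EU") & (cube["product"] == "A")
    rule_hits  = antecedent & (cube["sales_level"] == "high")
    support    = rule_hits.mean()
    confidence = rule_hits.sum() / antecedent.sum()
    print(f"support={support:.2f} confidence={confidence:.2f}")
    # If the rule scores well, its antecedent is exactly the slice
    # (region=EU, product=A) that a recommender could open as an OLAP view.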
APA, Harvard, Vancouver, ISO, and other styles
29

Ponelis, S. R. (Shana Rachel). "Data marts as management information delivery mechanisms: utilisation in manufacturing organisations with third party distribution." Thesis, University of Pretoria, 2002. http://hdl.handle.net/2263/27061.

Full text of the source
Abstract:
Customer knowledge plays a vital part in organisations today, particularly in sales and marketing processes, where customers can be either channel partners or final consumers. Managing customer data and/or information across business units, departments and functions is vital. Frequently, channel partners gather and capture data about downstream customers and consumers that organisations further upstream in the channel need to incorporate into their information systems in order to deliver management information to their users. This study focuses on manufacturing organisations that use third-party distribution, since the flow of information between channel partner organisations in a supply chain (in contrast to the flow of products) provides an important link between organisations and increasingly represents a source of competitive advantage in the marketplace. The purpose of this study is to determine whether there is a significant difference in the use of sales and marketing data marts as management information delivery mechanisms in manufacturing organisations in different industries, particularly pharmaceuticals and branded consumer products. The case studies presented in this dissertation indicate that there are significant differences in the use of sales and marketing data marts between manufacturing industries, which can be ascribed to the industry, both directly and indirectly.
Thesis (MIS(Information Science))--University of Pretoria, 2002.
APA, Harvard, Vancouver, ISO, and other styles
30

Chudán, David. "Association rule mining as a support for OLAP." Doctoral thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-201130.

Full text of the source
Abstract:
The aim of this work is to identify how two analytical methods, OLAP analysis and data mining represented by GUHA association rule mining, can be used to complement each other. Applying these two methods to one dataset, in the context of the proposed scenarios, promises a synergistic effect that surpasses the knowledge acquired by each method independently; this is the main contribution of the work. Another contribution is the original use of GUHA association rules, where mining is performed on aggregated data. In their expressiveness, GUHA association rules outperform the classic association rules described in the literature. Experiments on real data demonstrate the discovery of unusual trends that would be very difficult to find using standard OLAP analysis, namely the time-consuming manual browsing of an OLAP cube. On the other hand, using association rules alone sacrifices the general overview of the data, so it is fair to say that the two methods complement each other very well. Part of the solution is the use of the LISp-Miner Control Language (LMCL), a scripting language that automates selected parts of the data mining process. The proposed recommender system would shield users from the association rules themselves, enabling analysts unfamiliar with association rules to benefit from them. The thesis combines quantitative and qualitative research: quantitative research is represented by experiments on a real dataset, the proposal of a recommender system and the implementation of selected parts of the association rule mining process in LMCL; qualitative research is represented by structured interviews with selected experts from the fields of data mining and business intelligence, who confirm the meaningfulness of the proposed methods.
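The distinctive step here, mining over aggregates rather than raw records, can be sketched as follows; the schema, bins and figures are hypothetical and only illustrate the order of operations, not the GUHA procedure itself.

    # Minimal sketch: aggregate first (the OLAP step), then mine on the aggregates.
    import pandas as pd

    raw = pd.DataFrame({
        "branch":  ["B1", "B1", "B2", "B2", "B3", "B3"],
        "quarter": ["Q1", "Q2", "Q1", "Q2", "Q1", "Q2"],
        "revenue": [10.0, 30.0, 22.0, 21.0, 5.0, 25.0],
    })

    agg = raw.groupby(["branch", "quarter"], as_index=False)["revenue"].sum()
    agg["growth"] = agg.groupby("branch")["revenue"].pct_change()
    agg["trend"] = pd.cut(agg["growth"], bins=[-1.0, 0.0, 0.5, 10.0],
                          labels=["falling", "stable", "surging"])

    # Rules such as "branch=B3 => trend=surging" would now be mined over
    # these aggregated, discretized rows instead of the raw transactions.
    print(agg)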
APA, Harvard, Vancouver, ISO, and other styles
31

Blackshaw, Bruce Philip. "Migration of legacy OLTP architectures to distributed systems." Thesis, Queensland University of Technology, 1997. https://eprints.qut.edu.au/36839/1/36839_Blackshaw_1997.pdf.

Full text of the source
Abstract:
Mincom, a successful Australian software company, markets an enterprise product known as the Mincom Information Management System, or MIMS. MIMS is an integrated suite of modules covering materials, maintenance, financials, and human resources management. MIMS is an on-line transaction processing (OLTP) system, meaning it has special requirements in the areas of performance and scalability. MIMS consists of approximately 16 000 000 lines of code, most of which is written in COBOL. Its basic architecture is 3-tier client/server, utilising a database layer, an application logic layer, and a Graphical User Interface (GUI). While this architecture has proved successful, Mincom is looking to gradually evolve MIMS into a distributed architecture, with CORBA as the target distributed framework. The development of an enterprise distributed system is fraught with difficulties: key technical problems are not yet solved, and Mincom cannot afford the risk and cost involved in rewriting MIMS completely. The only viable approach is to gradually evolve MIMS into the desired architecture using a hybrid system that allows clients to access existing and new functionality. This thesis addresses the design and development of distributed systems, and the evolution of existing legacy systems into this architecture. It details the current MIMS architecture, explains some of its shortcomings, and outlines the desirable characteristics of a new system based on a distributed architecture such as CORBA. A case is established for a gradual migration of the current system via a hybrid system rather than a complete rewrite. Two experimental systems designed to investigate the proposed new architecture are discussed. The conclusion reached from the first, known as Genesis, is that CORBA is not yet mature enough for enterprise development; 12-18 months are estimated to be required for the appropriate level of maturity to be reached. The second system, EGEN, demonstrates how workflow can be integrated into a distributed system: an event-based workflow architecture is demonstrated, and it is explained how a workflow event server can provide workflow services across a hybrid system. EGEN also demonstrates how a middleware gateway can be used to allow CORBA clients access to the functionality of the existing MIMS system. Finally, a proposed migration strategy for moving MIMS to a distributed architecture based on CORBA is outlined. While developed specifically for MIMS, this strategy is broadly applicable to the migration of any large 3-tier client/server system to a distributed architecture.
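The event-based workflow idea can be pictured with a minimal in-process publish/subscribe hub; this is an illustrative sketch only, and the class, event and handler names are assumptions rather than EGEN's actual design.

    # Minimal sketch of an event-based workflow hub bridging a hybrid system.
    from collections import defaultdict

    class WorkflowEventServer:
        """Routes workflow events to subscribers on either side of a hybrid system."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, event_type, handler):
            self._subscribers[event_type].append(handler)

        def publish(self, event_type, payload):
            for handler in self._subscribers[event_type]:
                handler(payload)

    server = WorkflowEventServer()
    # Both a legacy module and a new distributed client can react to one event.
    server.subscribe("order.approved", lambda p: print("legacy module posts", p))
    server.subscribe("order.approved", lambda p: print("new client notified of", p))
    server.publish("order.approved", {"order_id": 42})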
APA, Harvard, Vancouver, ISO, and other styles
32

Seng, Olaf [Verfasser]. "Suchbasierte Strukturverbesserung objektorientierter Systeme / von Olaf Seng." Karlsruhe : Univ.-Verl. Karlsruhe, 2008. http://d-nb.info/988527146/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
33

Enge, Olaf [Verfasser]. "Analyse und Synthese elektromechanischer Systeme / Olaf Enge." Aachen : Shaker, 2005. http://d-nb.info/1186589272/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
34

Hellman, Marcus. "Improving traveling habits using an OLAP cube : Development of a business intelligence system." Thesis, Umeå universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-128576.

Full text of the source
Abstract:
The aim of this thesis is to improve the traveling habits of clients using the SpaceTime system when arranging their travels, where improving traveling habits means lowering the costs and emissions their travel generates. To this end, a business intelligence system, including an OLAP cube, was created to give clients feedback on how they travel, making it possible to see whether they are improving and how much they have saved, both in money and in emissions. Since these kinds of systems are often quite complex, studies on best practices and on how to keep such systems agile were performed in order to deliver a system of high quality. During this project, it was found that the pre-study and design phases were just as challenging as building the designed components. The result of the project is a business intelligence system, including ETL, a data warehouse and an OLAP cube, that will be used in the SpaceTime system, together with mock-ups showing how data from the OLAP cube could be presented in the SpaceTime web application in the future.
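The pipeline the abstract describes, ETL feeding a star schema that a cube then aggregates, can be sketched in a few lines of SQL issued from Python; the table layout, measure names and figures below are hypothetical stand-ins, not the SpaceTime schema.

    # Minimal sketch: a toy star schema and one cube-style aggregation.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE dim_client (client_key INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE fact_trip  (client_key INTEGER, year INTEGER,
                                 cost REAL, co2_kg REAL);
    """)
    # Load step of a toy ETL: rows assumed already extracted and transformed.
    con.executemany("INSERT INTO dim_client VALUES (?, ?)",
                    [(1, "Acme"), (2, "Globex")])
    con.executemany("INSERT INTO fact_trip VALUES (?, ?, ?, ?)",
                    [(1, 2015, 420.0, 95.0), (1, 2016, 310.0, 60.0),
                     (2, 2015, 800.0, 180.0), (2, 2016, 790.0, 175.0)])

    # Cube-style aggregation: cost and emissions per client and year.
    for row in con.execute("""
            SELECT d.name, f.year, SUM(f.cost), SUM(f.co2_kg)
            FROM fact_trip f JOIN dim_client d USING (client_key)
            GROUP BY d.name, f.year ORDER BY d.name, f.year"""):
        print(row)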
APA, Harvard, Vancouver, ISO, and other styles
35

Pistocchi, Filippo. "Implicit Roll-Up Over Graph-Based Data Integration System." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25357/.

Full text of the source
Abstract:
Nowadays, IoT and social networks are the main sources of big data; they generate a massive amount of assets, and companies have to develop data-driven strategies to exploit the value of the information behind the data. Data sources are typically heterogeneous, since data can be generated by different sources distributed all around the world. The sparsity and heterogeneity of the data make data wrangling and knowledge discovery much more difficult, which is why data-driven companies must use data integration techniques to address this complexity. The DTIM research group at Universitat Politècnica de Catalunya (UPC), with which I have been working, is interested in this topic, and in 2015 it developed Graph-driven Federated Data Management (GFDM), which proposes a very intuitive graph-based data integration architecture. In this project we extend GFDM to support automatic data aggregation following OLAP data processing grounded in multidimensional modeling, as data warehouses do, but on top of graph data. This idea is carried out by developing a framework able to perform OLAP-like queries over GFDM, focusing mainly on the well-known Roll-Up operation. In this thesis we have developed a method that, given a query, aligns data coming from different data sources, sitting at different granularity levels, that participate in the same conceptual aggregation hierarchy. Our method identifies implicit aggregations that allow data from different sources to be aligned and integrated seamlessly at the correct granularity level. After a careful design and implementation phase, we consider our goal accomplished, having successfully developed the Implicit Roll-Up algorithm that satisfies our requirements.
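A minimal sketch of the alignment problem an implicit roll-up solves: one source reports facts per city, another per country, and a shared concept hierarchy lets the finer source be rolled up before integration. All names and numbers are hypothetical, and this is the idea only, not the thesis's algorithm.

    # Two sources at different granularities of one hierarchy (city -> country).
    per_city    = {"Barcelona": 12.0, "Girona": 3.0, "Lyon": 7.0}
    per_country = {"Spain": 14.5, "France": 7.2}
    city_to_country = {"Barcelona": "Spain", "Girona": "Spain", "Lyon": "France"}

    # Implicitly roll the finer source up to the country level...
    rolled = {}
    for city, value in per_city.items():
        country = city_to_country[city]
        rolled[country] = rolled.get(country, 0.0) + value

    # ...so both sources can be compared and integrated at one granularity.
    for country in sorted(per_country):
        print(country, rolled.get(country), per_country[country])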
APA, Harvard, Vancouver, ISO, and other styles
36

Bragagni, Cristiano. "Progettazione di uno strumento di analisi dati per sistemi SCADA-ENERGY Management System." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7927/.

Full text of the source
Abstract:
The goal of this thesis is to deepen expertise in the functionality offered by the SCADA/EMS systems available on the market, so as to understand their potential: all the acquired knowledge serves to design a flexible and interactive data analysis tool with which it is possible to carry out analyses that the other solutions examined cannot offer. The design of the data analysis tool is oriented towards defining a multidimensional model for representing the information: the design process requires identifying the information of interest to the user, so that it can be reintroduced when designing the new database. The final infrastructure of this new functionality takes the form of a data warehouse: all the analysis information is stored in a database separate from that of On.Energy, avoiding any coupling between the performance of the two subsystems. Using a data warehouse lays the groundwork for analyses over long time periods: every type of data query involves an enormous quantity of information, exactly in line with the characteristics of OLAP queries.
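Long-period analysis of metered energy data amounts to rolling measurements up a time hierarchy; the sketch below aggregates hypothetical readings to days and months with pandas and is not based on the On.Energy schema.

    # Minimal sketch: roll energy readings up a time hierarchy (reading -> day -> month).
    import pandas as pd

    # Hypothetical readings from a SCADA/EMS historian, one every six hours.
    idx = pd.date_range("2014-01-01", periods=8, freq="6h")
    readings = pd.Series([3.2, 3.1, 2.9, 3.4, 3.3, 3.0, 2.8, 3.1],
                         index=idx, name="kwh")

    daily   = readings.resample("D").sum()    # roll up to the day level
    monthly = readings.resample("MS").sum()   # roll up to the month level
    print(daily, monthly, sep="\n\n")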
APA, Harvard, Vancouver, ISO, and other styles
37

Rydzi, Daniel. "Metodika vývoje a nasazování Business Intelligence v malých a středních podnicích." Doctoral thesis, Vysoká škola ekonomická v Praze, 2005. http://www.nusl.cz/ntk/nusl-77060.

Full text of the source
Abstract:
This dissertation deals with the development and implementation of Business Intelligence (BI) solutions for small and medium-sized enterprises (SMEs) in the Czech Republic, and represents the culmination of the author's effort to complete a methodological model for developing this kind of application for SMEs using in-house skills and a minimum of external resources and costs. The thesis can be divided into five major parts. The first part, which describes the technologies used, consists of two chapters: the first describes the contemporary state of the Business Intelligence concept and contains an original taxonomy of Business Intelligence solutions, while the second describes the two Knowledge Discovery in Databases (KDD) techniques that were used to build the BI solutions introduced in the case studies. The second part describes the area of Czech SMEs, the environment in which the thesis was written and to which it is meant to contribute; it defines the differences between SMEs and large corporations and explains the author's reasons for focusing on this area. The third part introduces the results of a survey conducted among Czech SMEs with the support of the Department of Information Technologies of the Faculty of Informatics and Statistics of the University of Economics in Prague. The survey had three objectives: to map the readiness of Czech SMEs to develop and deploy BI solutions; to determine the major problems and consequent decisions of Czech SMEs that could be supported by BI solutions; and to determine the top factors preventing SMEs from developing and deploying BI solutions. The fourth part is the core of the thesis: in two chapters, the original methodology for the development and deployment of BI solutions by SMEs is described, together with the other methodologies that were studied; the original methodology is partly based on the well-known CRISP-DM methodology. Finally, the last part describes the particular company that became a testing ground for the author's theories and that supports his research, and presents case studies of the development and deployment of BI solutions in this company, built using contemporary BI and KDD techniques in accordance with the original methodology. In that sense, these case studies verified the theoretical methodology in real use.
APA, Harvard, Vancouver, ISO, and other styles
38

Stefanoli, Franklin Seoane. "Proposta de um modelo de sistema de apoio à decisão em vendas: uma aplicação." Universidade de São Paulo, 2003. http://www.teses.usp.br/teses/disponiveis/18/18140/tde-11052016-151518/.

Full text of the source
Abstract:
The objective of this study was to develop a proposal for a decision support system model for sales and to apply it. A survey of the profile of business-to-business selling, selling techniques, and the information needed for effective selling, together with the monitoring of sales representatives' actions and results through reports, all combined with data warehouse, data mart and OLAP technologies, was essential for creating the generic model and implementing it. The generic model was applied to a hypothetical publisher of telephone directories and yellow pages, and was built to supply sales professionals with information that can improve the effectiveness of their sales and give them greater knowledge about their products, customers, directory users and the market as a whole, while also providing managers with a fast and reliable tool to support the analysis and coordination of the sales effort. The fast, reliable and customized visualization of information allowed by this system, as well as its success in answering the research questions presented in the study, shows that the application can be useful to the company and, specifically, to sales professionals and decision-making managers.
APA, Harvard, Vancouver, ISO, and other styles
39

Vacula, Vladimír. "Využití statistických metod projektu R v systému pro podporu rozhodování." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2008. http://www.nusl.cz/ntk/nusl-221936.

Full text of the source
Abstract:
The aim of this thesis is to present the possibility of integrating a Decision Support System with a specialized system for statistical computing, providing an easier way to analyze economic indicators using sophisticated statistical methods. The R project is a comprehensive set of applications designed for the manipulation, computation and graphical presentation of data sets. It is mostly used for statistical analysis and graphical presentation, and it allows users to create new methods in a language similar to S, as well as to use the default methods provided.
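One common way to wire such an integration from application code is an in-process bridge to R; the sketch below assumes the rpy2 package as that bridge, pushes a series of indicator values into R, and calls base-R routines. The variable names and data are illustrative, not taken from the thesis.

    # Minimal sketch: a DSS delegating statistics to R via the rpy2 bridge
    # (assumed to be installed alongside an R runtime).
    import rpy2.robjects as ro

    # Hypothetical monthly values of an economic indicator, pushed into R.
    indicator = ro.FloatVector([102.3, 104.1, 101.8, 106.5, 108.0, 107.2])
    ro.globalenv["indicator"] = indicator

    # Let R do the statistics the DSS lacks: a summary and an AR(1) fit.
    print(ro.r("summary(indicator)"))
    model = ro.r("arima(indicator, order = c(1, 0, 0))")
    print(model)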
APA, Harvard, Vancouver, ISO, and other styles
40

Stryka, Lukáš. "Návrh využití nástrojů Business Intelligence pro potřeby malé firmy." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2008. http://www.nusl.cz/ntk/nusl-221664.

Full text of the source
Abstract:
This diploma thesis analyses the current processes in a small software company. On the basis of an evaluation of their weaknesses, new extensions of the information system are designed: the first is a new module for on-line sales and cashless on-line payments; the second is the integration of Business Intelligence tools to help streamline the company's marketing strategies.
APA, Harvard, Vancouver, ISO, and other styles
41

Ventzke, Kathrin [Verfasser], Olaf [Akademischer Betreuer] Jöhren, and Johannes [Gutachter] Klein. "Die zirkadiane Regulation des Orexin-Systems / Kathrin Ventzke ; Gutachter: Johannes Klein ; Akademischer Betreuer: Olaf Jöhren." Lübeck : Zentrale Hochschulbibliothek Lübeck, 2019. http://d-nb.info/1209176297/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
42

Jang, Eunseon [Verfasser], Olaf [Akademischer Betreuer] [Gutachter] Kolditz, Christoph [Gutachter] Schüth, and Seong-Taek [Gutachter] Yun. "Reactive transport simulation of contaminant fate and redox transformation in heterogeneous aquifer systems / Eunseon Jang ; Gutachter: Olaf Kolditz, Christoph Schüth, Seong-Taek Yun ; Betreuer: Olaf Kolditz." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://d-nb.info/1139977229/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
43

Paulin, James R. "Performance evaluation of concurrent OLTP and DSS workloads in a single database system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ27065.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
44

Paulin, James R. (James Robson). "Performance evaluation of concurrent OLTP and DSS workloads in a single database system." Dissertation, Computer Science, Carleton University, Ottawa, 1997.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
45

Borchert, Christoph [Verfasser], Olaf [Akademischer Betreuer] Spinczyk, and Wolfgang [Gutachter] Schröder-Preikschat. "Aspect-oriented technology for dependable operating systems / Christoph Borchert ; Gutachter: Wolfgang Schröder-Preikschat ; Betreuer: Olaf Spinczyk." Dortmund : Universitätsbibliothek Dortmund, 2017. http://d-nb.info/1133361919/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
46

Kuperstein, Janice M. "TIKKUN OLAM A FAITH-BASED APPROACH FOR ASSISTING OLDER ADULTS IN HEALTH SYSTEM NAVIGATION." Lexington, Ky. : [University of Kentucky Libraries], 2008. http://hdl.handle.net/10225/799.

Full text of the source
Abstract:
Thesis (Ph. D.)--University of Kentucky, 2008.
Title from document title page (viewed on August 25, 2008). Document formatted into pages; contains: viii, 152 p. : ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 140-149).
APA, Harvard, Vancouver, ISO, and other styles
47

Hähnke, Olaf [Verfasser]. "Auftreten cerebraler Ischämien unter dem linksventrikulären Assist-Device-System HeartMate II® / Olaf Hähnke." Berlin : Medizinische Fakultät Charité - Universitätsmedizin Berlin, 2017. http://d-nb.info/1139255231/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
48

Schlesinger, Lutz, Wolfgang Lehner, Wolfgang Hümmer, and Andreas Bauer. "Nutzung von Datenbankdiensten in Data-Warehouse-Anwendungen." De Gruyter Oldenbourg, 2003. https://tud.qucosa.de/id/qucosa%3A72851.

Full text of the source
Abstract:
The interplay between the application and the database system is crucial for efficiently analysing the data stored in a data warehouse system. This paper classifies and discusses different ways of coupling data warehouse applications with the database system, so that the computation of complex OLAP scenarios can be delegated to the database service rather than performed in the application. Four categories are discussed and compared in detail: language extensions (SQL), a purpose-built analytical query language (MDX), the use of specific object models (JOLAP), and finally XML-based web services (XCube).
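To make the first two categories concrete, the sketch below phrases the same question, total sales per year and region, once as extended SQL and once as MDX; the table, cube and member names are hypothetical and serve only to contrast the two coupling styles.

    # The same analytical question in two of the coupling categories discussed
    # above. All schema, cube and member names are hypothetical.

    sql_variant = """
        SELECT year, region, SUM(amount) AS sales
        FROM fact_sales
        GROUP BY CUBE (year, region)   -- SQL:1999 grouping extension
    """

    mdx_variant = """
        SELECT [Measures].[Sales] ON COLUMNS,
               CROSSJOIN([Time].[Year].Members,
                         [Geography].[Region].Members) ON ROWS
        FROM [SalesCube]
    """

    print(sql_variant, mdx_variant)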
APA, Harvard, Vancouver, ISO, and other styles
49

Marquardt, Justus. "Metadatendesign zur Integration von Online Analytical Processing in das Wissensmanagement /." Hamburg : Kovač, 2008. http://www.verlagdrkovac.de/978-3-8300-3598-5.htm.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
50

El, Malki Mohammed. "Modélisation NoSQL des entrepôts de données multidimensionnelles massives." Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20139/document.

Full text of the source
Abstract:
Decision support systems occupy a prominent place in companies and large organizations, enabling the analyses that underpin decision making. With the advent of big data, the volume of the data to be analysed reaches critical sizes, challenging conventional data warehousing approaches, whose current solutions are mainly based on R-OLAP databases. With the emergence of major Web platforms such as Google, Facebook, Twitter and Amazon, many solutions for managing big data have been developed under the banner "Not Only SQL" (NoSQL). These new approaches are an interesting avenue for building multidimensional data warehouses capable of handling large volumes of data, but questioning the R-OLAP approach requires revisiting the principles of multidimensional data warehouse modelling. In this manuscript we propose processes for implementing multidimensional data warehouses with NoSQL models: four processes for each of the two models considered, a column-oriented model and a document-oriented model. Moreover, the NoSQL context complicates the efficient computation of the pre-aggregates (the lattice) that are usually set up in the R-OLAP context, so we extend our implementation processes to cover the construction of the lattice in both models. Since it is difficult to choose a single NoSQL implementation that efficiently supports all applicable workloads, we also propose two translation processes: the first covers intra-model processes, i.e. rules for moving from one implementation to another implementation of the same NoSQL logical model, while the second defines the transformation rules from an implementation of one logical model to an implementation of another logical model.
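The two logical models can be contrasted on a single fact: the sketch below lays out one sales fact as a document (nested structure) and as a column-family-style record (flat column qualifiers). All field names are hypothetical and illustrate the contrast only, not the thesis's implementation rules.

    # One multidimensional fact in the two NoSQL logical models discussed above.

    fact_as_document = {            # document-oriented (MongoDB-style nesting)
        "date":     {"year": 2016, "month": 1},
        "customer": {"country": "FR", "segment": "retail"},
        "measures": {"amount": 120.0, "quantity": 3},
    }

    fact_as_columns = {             # column-oriented (HBase-style "family:qualifier")
        "date:year": 2016,
        "date:month": 1,
        "customer:country": "FR",
        "customer:segment": "retail",
        "measures:amount": 120.0,
        "measures:quantity": 3,
    }

    # A roll-up then reads either the nested path or the flat qualifier:
    print(fact_as_document["customer"]["country"],
          fact_as_columns["customer:country"])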
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography