Dissertations / Theses on the topic 'Warehouses Management Data processing'


Consult the top 50 dissertations / theses for your research on the topic 'Warehouses Management Data processing.'


1

Issa, Carla Mounir. "Data warehouse applications in modern day business." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2148.

Abstract:
Data warehousing provides organizations with strategic tools to achieve the competitive advantage they are constantly seeking. The use of tools such as data mining, indexing and summaries enables management to retrieve information and perform thorough analysis, planning and forecasting to meet changes in the market environment. In addition, the data warehouse provides security measures that, if properly planned and implemented, help organizations ensure that their data quality and validity remain intact.
2

Ponelis, S. R. (Shana Rachel). "Data marts as management information delivery mechanisms: utilisation in manufacturing organisations with third party distribution." Thesis, University of Pretoria, 2002. http://hdl.handle.net/2263/27061.

Abstract:
Customer knowledge plays a vital part in organisations today, particularly in sales and marketing processes, where customers can either be channel partners or final consumers. Managing customer data and/or information across business units, departments, and functions is vital. Frequently, channel partners gather and capture data about downstream customers and consumers that organisations further upstream in the channel require to be incorporated into their information systems in order to allow for management information delivery to their users. In this study, the focus is placed on manufacturing organisations using third party distribution, since the flow of information between channel partner organisations in a supply chain (in contrast to the flow of products) provides an important link between organisations and increasingly represents a source of competitive advantage in the marketplace. The purpose of this study is to determine whether there is a significant difference in the use of sales and marketing data marts as management information delivery mechanisms in manufacturing organisations in different industries, particularly pharmaceuticals and branded consumer products. The case studies presented in this dissertation indicate that there are significant differences between the use of sales and marketing data marts in different manufacturing industries, which can be ascribed to the industry, both directly and indirectly.
Thesis (MIS(Information Science))--University of Pretoria, 2002.
3

Rosa, Luiz Henrique Leite. "Sistema de apoio à gestão de utilidades e energia: aplicação de conceitos de sistemas de informação e de apoio à tomada de decisão." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3143/tde-03082007-165825/.

Abstract:
This work deals with the specification, development and use of the Support System for Utility and Energy Management (SAGUE), a system created to assist in the analysis of data collected from utility systems such as compressed air, steam, water pumping and environmental conditioning, integrated with energy consumption and climatic measurements. SAGUE was developed following concepts from decision support systems such as data warehouses and OLAP (Online Analytical Processing), in order to transform measurement data into information that directly guides actions for energy conservation and rational use. The main characteristics of data warehouse and OLAP tools that influenced the specification and development of SAGUE are described in this work. In addition, the text covers energy management and energy management systems in order to present the environment that motivated the development of SAGUE. Within this context, the Electrical Energy Management System (SISGEN) is presented, an information system supporting the management of electrical energy and supply contracts, whose collected data can be analyzed through SAGUE. The application of SAGUE is presented in a case study that analyzes the correlation between the electrical energy consumption of CUASO - Cidade Universitária Armando de Sales Oliveira, obtained through SISGEN, and the ambient temperature measurements supplied by IAG - Instituto de Astronomia, Geofísica e Ciências Atmosféricas at USP.
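The case study hinges on correlating energy consumption with ambient temperature. As a minimal sketch of that kind of analysis (not code from the thesis; the monthly figures below are invented), a Pearson correlation over two series:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly figures: energy use (MWh) vs. mean temperature (C).
energy = [310, 295, 280, 260, 255, 270, 285, 300, 315, 330, 340, 325]
temp = [25.1, 24.3, 22.8, 20.5, 18.2, 17.9, 18.4, 20.1, 21.7, 23.4, 24.6, 25.0]
print(f"r = {pearson(energy, temp):.3f}")
```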
4

Strand, Mattias. "External Data Incorporation into Data Warehouses." Doctoral thesis, Kista : Skövde : Dept. of computer and system sciences, Stockholm University : School of humanities and informatics, University of Skövde, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-660.

5

Belcin, Andrei. "Smart Cube Predictions for Online Analytic Query Processing in Data Warehouses." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/41956.

Abstract:
A data warehouse (DW) is a transformation of many sources of transactional data integrated into a single collection that is non-volatile and time-variant, and that can provide decision support to managerial roles within an organization. For this application, the database server needs to process multiple users' queries by joining various datasets and loading the result in main memory to begin calculations. In current systems, this process is reactionary to users' input and can be undesirably slow. Previous studies showed that personalizing to a single user's query patterns and loading the resulting smaller subset into main memory significantly shortened the query response time. The LPCDA framework developed in this research handles multiple users' query demands, where the query patterns are subject to change (so-called concept drift) and noise. To this end, the LPCDA framework detects changes in user behaviour and dynamically adapts the personalized smart cube definition for the group of users. Numerous data marts (DMs), as components of the DW, are subject to intense aggregations to assist analytics at the request of automated systems and human users' queries. Consequently, there is a growing need to properly manage the supply of data into main memory, in closest proximity to the CPU that computes the query, in order to reduce the response time from the moment a query arrives at the DW server. As a result, this thesis proposes an end-to-end adaptive learning ensemble for resource allocation of cuboids within a DM, to achieve a relevant and timely constructed smart cube before the time of need, adopting the just-in-time inventory management strategy applied in other real-world scenarios. The algorithms comprising the ensemble involve predictive methodologies from Bayesian statistics, data mining, and machine learning, and reflect changes in the data-generating process using a number of change detection algorithms. Therefore, given different operational constraints and data-specific considerations, the ensemble can, to an effective degree, determine which cuboids in the lattice of a DM to pre-construct into a smart cube ahead of users submitting their queries, thereby benefiting from a quicker response than static schema views or no action at all.
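As a rough sketch of the pre-construction idea (not the LPCDA ensemble itself; the cuboid names, usage counts and smoothing rule are all illustrative assumptions), one could forecast per-cuboid query demand and materialize the top candidates ahead of time:

```python
def smooth(history, alpha=0.5):
    """Exponentially smoothed forecast of the next window's query count."""
    s = history[0]
    for x in history[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def pick_cuboids(usage, k=2):
    """usage: cuboid -> list of query counts per past window.
    Returns the k cuboids with the highest forecast demand."""
    forecasts = {c: smooth(h) for c, h in usage.items()}
    return sorted(forecasts, key=forecasts.get, reverse=True)[:k]

usage = {
    ("store", "month"): [12, 15, 18, 22],  # rising interest
    ("store", "day"): [30, 10, 5, 2],      # fading interest
    ("region", "year"): [8, 9, 7, 8],      # steady interest
}
print(pick_cuboids(usage, k=2))
```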
6

Sobati, Moghadam Somayeh. "Contributions to Data Privacy in Cloud Data Warehouses." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE2020.

Abstract:
Nowadays, data outsourcing scenarios are ever more common with the advent of cloud computing. Cloud computing appeals to businesses and organizations because of a wide variety of benefits, such as cost savings and service benefits. Moreover, cloud computing provides higher availability, scalability, and more effective disaster recovery than in-house operations. One of the most notable cloud outsourcing services is database outsourcing (Database-as-a-Service), where individuals and organizations outsource data storage and management to a Cloud Service Provider (CSP). Such services allow storing a data warehouse (DW) on a remote, untrusted CSP and running on-line analytical processing (OLAP). Although cloud data outsourcing brings many benefits, it also raises security and, in particular, privacy concerns. A typical solution to preserve data privacy is encrypting data locally before sending them to an external server. Secure database management systems use various encryption schemes, but they either induce computational and storage overhead or reveal some information about the data, which jeopardizes privacy. In this thesis, we propose a new secure secret splitting scheme (S4) inspired by Shamir's secret sharing. S4 implements an additive homomorphic scheme, i.e., additions can be computed directly over encrypted data. S4 addresses the shortcomings of existing approaches by reducing storage and computational overhead while still enforcing a reasonable level of privacy. S4 is efficient both in terms of storage and computing, which is ideal for data outsourcing scenarios that assume the user has limited computation and storage resources. Experimental results confirm the efficiency of S4 in terms of computation and storage overhead with respect to existing solutions. Moreover, we present new order-preserving schemes, order-preserving indexing (OPI) and wrap-around order-preserving indexing (waOPI), which are practical for cloud-outsourced DWs. We focus on the problem of performing range and exact-match queries over encrypted data. In contrast to existing solutions, our schemes prevent an adversary from performing statistical and frequency analysis. While providing data privacy, the proposed schemes offer good performance and require minimal changes to existing software.
7

Wang, Yi. "Data Management and Data Processing Support on Array-Based Scientific Data." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436157356.

8

Vijayakumar, Nithya Nirmal. "Data management in distributed stream processing systems." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278228.

Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2007.
Source: Dissertation Abstracts International, Volume: 68-09, Section: B, page: 6093. Adviser: Beth Plale. Title from dissertation home page (viewed May 9, 2008).
9

Brito, Jaqueline Joice. "Processamento de consultas SOLAP drill-across e com junção espacial em data warehouses geográficos." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-18022013-090739/.

Abstract:
A geographic data warehouse (GDW) is a special kind of multidimensional database. It is subject-oriented, integrated, historical, non-volatile and usually organized in levels of aggregation. Furthermore, a GDW also stores spatial data in one or more dimensions or in at least one numerical measure. Aiming at decision support, GDWs allow SOLAP (spatial online analytical processing) queries, i.e., multidimensional analytical queries (e.g., drill-down, roll-up, drill-across) extended with spatial predicates (e.g., intersects, contains, is contained) defined for range and spatial join queries. A challenging issue related to the processing of these complex queries is how to efficiently retrieve spatial and conventional data stored in huge GDWs. In the literature, there are few access methods dedicated to indexing GDWs, and none of these methods focus on drill-across and spatial join SOLAP queries. In this master's thesis, we propose novel strategies for processing these complex queries. We introduce two strategies for processing SOLAP drill-across queries (namely, Divide and Unique), define a set of guidelines for the design of a GDW schema that enables the execution of these queries, and determine a set of classes of these queries to be issued over a GDW schema that follows the proposed guidelines. As for the processing of spatial join SOLAP queries, we propose the SJB strategy, and also identify the characteristics of a GDW schema that enables the execution of these queries, as well as define the format of these queries. We validated the proposed strategies through performance tests that compared them with the star-join computation and the use of materialized views. The obtained results showed that our strategies are very efficient. Regarding the SOLAP drill-across queries, the Divide and Unique strategies showed a time reduction ranging from 82.7% to 98.6% with respect to the star-join computation and the use of materialized views. Regarding the SOLAP spatial join queries, the SJB strategy guaranteed the best results for most of the analyzed queries, with performance gains ranging from 0.3% to 99.2% over the star-join computation and the use of materialized views.
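As a toy illustration of what a drill-across does conceptually (not the Divide, Unique or SJB strategies themselves; the fact tables and keys below are invented), measures from two fact tables are combined over conformed dimension keys:

```python
# Hypothetical mini fact tables keyed by conformed dimensions (city, month).
sales = {("Sao Paulo", "2012-01"): 120.0, ("Campinas", "2012-01"): 45.5}
shipping = {("Sao Paulo", "2012-01"): 30.2, ("Campinas", "2012-01"): 12.7}

# A drill-across combines measures from both fact tables over shared keys.
drill_across = {k: (sales[k], shipping[k]) for k in sales.keys() & shipping.keys()}
for (city, month), (revenue, cost) in sorted(drill_across.items()):
    print(city, month, revenue, cost)
```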
10

Abril, Raul Mario. "The inner and inter construct associations of the quality of data warehouse customer relationship data for problem enactment." Thesis, Brunel University, 2005. http://bura.brunel.ac.uk/handle/2438/7912.

Abstract:
The literature identifies perceptions of data quality as a key factor influencing a wide range of attitudes and behaviors related to data in organizational settings (e.g. decision confidence). In particular, there is an overwhelming consensus that effective customer relationship management (CRM) depends on the quality of customer data. Data warehouses, if properly implemented, enable data integration, which is a key attribute of data quality. The literature highlights the relevance of formulating problem statements because this will determine the course of action. CRM managers formulate problem statements through a cognitive process known as enactment. The literature on data quality is very fragmented. It posits that this construct is of a high-order nature (it is dimensional), that it is contextual and situational, and that it is closely linked to a utilitarian value. This study addresses these disperse views of the nature of data quality from a holistic perspective. Social cognitive theory (SCT) is the backbone for studying data quality in terms of information search behavior and enhancements in formulating problem statements. The main objective of this study is to explore the nature of a data warehouse's customer relationship data quality in situations where there is a need for understanding a customer relationship problem. The research question is: what are the inner and inter construct associations of the quality of data warehouse customer relationship data for problem enactment? To reach this objective, a positivistic approach was adopted, complemented with qualitative interventions along the research process. Observations were gathered with a survey, and scales were adjusted using a construct-based approach. Research findings confirm that data quality is a high-order construct with a contextual dimension and a situational dimension. Problem sense-making enhancement is a dependent variable of data quality, in a confirmed positive association between both constructs. Problem sense-making enhancement is also a high-order construct with a mastering experience dimension and a self-efficacy dimension. Behavioral patterns were identified for information search mode (scanning orientation vs. focus orientation) and for information search heuristic (template orientation vs. trial-and-error orientation). Focus is the predominant information search mode orientation and template is the predominant information search heuristic orientation. Overall, the research findings support the associations advocated by SCT. The self-efficacy dimension in problem sense-making enhancement is a discriminant for information search mode orientation, and the contextual dimension in data quality (i.e. data task utility) is a discriminant for information search heuristic orientation. A data quality cognitive metamodel and a data quality for problem enactment model are suggested for research in the areas of data quality, information search behavior, and cognitive enhancements.
11

Stowe, James DeWitt. "Throughput optimization of multi-agent robotic automated warehouses." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104388.

Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Thesis: S.M. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-107).
In 2003 Kiva Systems (now Amazon Robotics) introduced a new type of material handling automation to the world. The system is based on the principle that the physical infrastructure that contains inventory should be mobile. Kiva achieved this remarkable advancement by employing a fleet of robots to move shelving to human operators. Broadly, these types of systems are defined in the literature as multi-agent robotic systems. Amazon acquired Kiva Systems in 2012 to incorporate the technology into its operations. The goal of this thesis is to optimize the throughput of warehouses employing multi-agent robotic automation. It is assumed that extracting inventory from the automated system is the limiting factor in maximizing throughput (i.e. downstream processes are unconstrained). Two strategies are advocated: 1) performing velocity segregation of inventory within the automation via a bifurcation between fast-selling and slow-selling inventory, and 2) maximizing pick rates through policies that increase worker retention. It is shown that velocity segregation increases machine efficiency by increasing the efficiency of delivering inventory to human operators. This assertion is investigated by developing a theoretical understanding of how inventory velocity impacts machine efficiency and by simulating the impact of different stow strategies on system efficiency. It is estimated that some stow strategies can increase machine efficiency by as much as 30%. It is also shown that the number of man-hours worked by inexperienced pickers explains practically all of the variability in aggregate pick cycle times, and hence pick rates, which motivates the argument for worker retention. Together, these two modifications are estimated to increase throughput by 10% over the current baseline.
by James DeWitt Stowe.
M.B.A.
S.M. in Engineering Systems
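A toy illustration of the velocity-segregation idea from the abstract above (not the thesis's actual method; the SKU names, figures and 20% cutoff are assumptions): rank SKUs by sales velocity and bifurcate them into fast and slow classes:

```python
def split_by_velocity(units_sold, fast_share=0.2):
    """Label the top `fast_share` of SKUs by sales velocity as 'fast'.
    A crude stand-in for the fast/slow bifurcation described above."""
    ranked = sorted(units_sold, key=units_sold.get, reverse=True)
    cutoff = max(1, int(len(ranked) * fast_share))
    return {sku: ("fast" if i < cutoff else "slow")
            for i, sku in enumerate(ranked)}

# Hypothetical weekly unit sales per SKU.
weekly_units = {"A100": 940, "B200": 310, "C300": 55, "D400": 12, "E500": 3}
print(split_by_velocity(weekly_units))
```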
12

Pieringer, Roland. "Modeling and implementing multidimensional hierarchically structured data for data warehouses in relational database management systems and the implementation into transbase." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=969373791.

13

Anibal, Luana Peixoto. "Istar : um esquema estrela otimizado para Image Data Warehouses baseado em similaridade." Universidade Federal de São Carlos, 2011. https://repositorio.ufscar.br/handle/ufscar/484.

Abstract:
A data warehousing environment supports the decision-making process through the investigation and analysis of data in an organized and agile way. However, current data warehousing technologies do not allow the decision-making process to be carried out based on the pictorial (intrinsic) features of images. This analysis cannot be carried out in a conventional data warehousing environment because it requires managing data related to the intrinsic features of the images in order to perform similarity comparisons. In this work, we propose a new data warehousing environment called iCube to enable the processing of OLAP perceptual similarity queries over images, based on their pictorial (intrinsic) features. Our approach extends the three main phases of the traditional data warehousing process to allow the use of images as data. For the data integration (ETL) phase, we propose a process to represent each image by its intrinsic content (such as color or texture numerical descriptors) and integrate this data with conventional data in the DW. For the dimensional modeling phase, we propose a star schema, called iStar, that stores both the intrinsic and the conventional image data. Moreover, at this stage, our approach models the schema to represent and support the use of different user-defined perceptual layers. For the data analysis phase, we propose an environment in which the OLAP engine uses image similarity as a query predicate, employing a filter mechanism to speed up query execution. The iStar schema was validated through performance tests evaluating both the building cost and the cost of processing IOLAP (Image On-Line Analytical Processing) queries. The results showed that our approach provides an impressive performance improvement in IOLAP query processing; the performance gain of iCube over the best related work (i.e. SingleOnion) was up to 98.21%.
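As a hedged sketch of using similarity as a query predicate, in the spirit of the IOLAP filtering described above (the descriptor format, distance function and radius are illustrative assumptions, not the thesis's actual descriptors):

```python
from math import sqrt

def euclid(u, v):
    """Euclidean distance between two feature vectors."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def similar_images(features, query_vec, radius):
    """Return image ids whose descriptor lies within `radius` of the query;
    a toy stand-in for a perceptual-similarity query predicate."""
    return [img for img, vec in features.items()
            if euclid(vec, query_vec) <= radius]

# Hypothetical 3-bin colour histograms per image.
features = {"img1": [0.8, 0.1, 0.1], "img2": [0.2, 0.5, 0.3], "img3": [0.7, 0.2, 0.1]}
print(similar_images(features, [0.75, 0.15, 0.10], radius=0.1))  # ['img1', 'img3']
```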
14

Griffin, Alan R., and R. Stephen Wooten. "AUTOMATED DATA MANAGEMENT IN A HIGH-VOLUME TELEMETRY DATA PROCESSING ENVIRONMENT." International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/608908.

Abstract:
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California
The vast amount of data telemetered from space probe experiments requires careful management and tracking from initial receipt through acquisition, archiving, and distribution. This paper presents the automated system used at the Phillips Laboratory, Geophysics Directorate, for tracking telemetry data from its receipt at the facility to its distribution on various media to the research community. Features of the system include computerized databases, automated generation of media labels, automated generation of reports, and automated archiving.
15

Yung, King Stanley [容勁]. "Application of multi-agent technology to supply chain management." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31223886.

16

Calin, Beatrice Andreea. "Manufacturing Analytics Dashboard: analisi efficienza ed efficacia dei processi produttivi tramite indicatore OEE basata su un MES Data Warehouse." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
The thesis describes a project to define a dashboard that analyses and visually presents the performance, in particular the efficiency and effectiveness of the production processes, of a small company in Emilia-Romagna, Mollificio Padano srl. The first part presents the theoretical concepts needed to define a company dashboard, in particular Industry 4.0, MES systems, KPIs (Key Performance Indicators) and OEE (Overall Equipment Effectiveness), and describes the project's goals and the architecture used. The second part covers the implementation of the dashboard and discusses the results obtained through data analysis, concluding with a reflection on future developments of the project.
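As a side note on the indicator itself: OEE is conventionally computed as the product of availability, performance and quality. A minimal sketch with invented shift figures (not data from Mollificio Padano):

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness as the product of its three factors."""
    return availability * performance * quality

# Hypothetical shift: 420 of 480 planned minutes running, 95% of ideal
# speed, and 980 good pieces out of 1000 produced.
a = 420 / 480    # availability = 0.875
p = 0.95         # performance
q = 980 / 1000   # quality = 0.98
print(f"OEE = {oee(a, p, q):.1%}")  # ~81.5%
```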
17

Bashir, Omar. "Management and processing of network performance information." Thesis, Loughborough University, 1998. https://dspace.lboro.ac.uk/2134/10361.

Abstract:
Intrusive monitoring systems monitor the performance of data communication networks by transmitting and receiving test packets on the network being monitored. Even relatively small periods of monitoring can generate very large amounts of data. Primitive network performance data are details of the test packets that are transmitted and received over the network under test. Network performance information is then derived by extensively processing the primitive performance data. This information may need to be correlated with information regarding the configuration and status of various network elements and the test stations. This thesis suggests that efficient processing of the collected data may be achieved by reusing and recycling the derived information in the data warehouses and information systems. This can be accomplished by pre-processing the primitive performance data to generate Intermediate Information. In addition to being able to efficiently fulfil multiple information requirements, different Intermediate Information elements at finer levels of granularity may be recycled to generate Intermediate Information elements at coarser levels of granularity. The application of these concepts in deriving packet delay information from the primitive performance data has been studied. Different Intermediate Information structures possess different characteristics. Information systems can exploit these characteristics to efficiently recycle elements of these structures to derive the required information elements. Information systems can also dynamically select appropriate Intermediate Information structures on the basis of queries posed to the information system, as well as the number of suitable Intermediate Information elements available to efficiently answer these queries. Packet loss and duplication summaries derived for different analysis windows also provide information regarding network performance characteristics. Due to their additive nature, suitable finer granularity packet loss and duplication summaries can be added to provide coarser granularity packet loss and duplication summaries, as the sketch below illustrates.
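The additivity of loss and duplication counters is what makes this recycling possible: finer windows can simply be summed into coarser ones. A minimal sketch (field names and window sizes are illustrative):

```python
def coarsen(fine_windows, factor=5):
    """Add adjacent fine-granularity loss/duplication summaries to form
    coarser windows (possible because the counters are additive)."""
    coarse = []
    for i in range(0, len(fine_windows), factor):
        chunk = fine_windows[i:i + factor]
        coarse.append({k: sum(w[k] for w in chunk) for k in chunk[0]})
    return coarse

# Hypothetical one-minute summaries rolled up into five-minute summaries.
minutes = [{"sent": 600, "lost": 3, "dup": 1} for _ in range(10)]
print(coarsen(minutes))  # two windows of {'sent': 3000, 'lost': 15, 'dup': 5}
```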
18

Agarwalla, Bikash Kumar. "Resource management for data streaming applications." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34836.

Abstract:
This dissertation investigates novel middleware mechanisms for building streaming applications. Developing streaming applications is a challenging task because (i) they are continuous in nature; (ii) they require fusion of data coming from multiple sources to derive higher level information; (iii) they require efficient transport of data from/to distributed sources and sinks; (iv) they need access to heterogeneous resources spanning sensor networks and high performance computing; and (v) they are time critical in nature. My thesis is that an intuitive programming abstraction will make it easier to build dynamic, distributed, and ubiquitous data streaming applications. Moreover, such an abstraction will enable an efficient allocation of shared and heterogeneous computational resources thereby making it easier for domain experts to build these applications. In support of the thesis, I present a novel programming abstraction, called DFuse, that makes it easier to develop these applications. A domain expert only needs to specify the input and output connections to fusion channels, and the fusion functions. The subsystems developed in this dissertation take care of instantiating the application, allocating resources for the application (via the scheduling heuristic developed in this dissertation) and dynamically managing the resources (via the dynamic scheduling algorithm presented in this dissertation). Through extensive performance evaluation, I demonstrate that the resources are allocated efficiently to optimize the throughput and latency constraints of an application.
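As a hedged sketch of the kind of abstraction described (not DFuse's actual API; the class and parameter names are invented), a fusion channel applies a user-supplied fusion function to items drawn from its input connections:

```python
class FusionChannel:
    """Toy fusion point: pulls one item from each input, applies the
    user-supplied fusion function, and yields the result downstream."""
    def __init__(self, inputs, fuse):
        self.inputs, self.fuse = inputs, fuse

    def run(self):
        for items in zip(*self.inputs):
            yield self.fuse(items)

# Two hypothetical sensor feeds fused by averaging.
feed_a = [20.0, 21.5, 22.0]
feed_b = [19.0, 20.5, 23.0]
channel = FusionChannel([feed_a, feed_b], lambda xs: sum(xs) / len(xs))
print(list(channel.run()))  # [19.5, 21.0, 22.5]
```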
19

Mousavi, Bamdad. "Scalable Stream Processing and Management for Time Series Data." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42295.

Abstract:
There has been enormous growth in the generation of time series data in the past decade. This trend is driven by the widespread adoption of IoT technologies, the data generated by monitoring cloud computing resources, and cyber-physical systems. Although time series data have been a topic of discussion in the data management domain for several decades, this recent growth has brought the topic to the forefront. Many of the time series management systems available today lack the features necessary to successfully manage and process the sheer amount of time series data being generated. In this thesis we strive to examine the field and study the prior work in time series management. We then propose a system capable of handling time series management end to end, from generation to consumption by the end user. Our system is composed of open-source data processing frameworks and has the capability to collect time series data, perform stream processing over it, store it for immediate and future processing, and create the necessary visualizations. We present the implementation of the system and perform experiments to show its scalability in handling growing pipelines of incoming data from various sources.
20

Stein, Oliver. "Intelligent Resource Management for Large-scale Data Stream Processing." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-391927.

Abstract:
With the increasing trend of using cloud computing resources, the efficient utilization of these resources becomes more and more important. Working with data stream processing is a paradigm gaining in popularity, with tools such as Apache Spark Streaming or Kafka widely available, and companies are shifting towards real-time monitoring of data such as sensor networks, financial data or anomaly detection. However, it is difficult for users to efficiently make use of cloud computing resources and studies show that a lot of energy and compute hardware is wasted. We propose an approach to optimizing resource usage in cloud computing environments designed for data stream processing frameworks, based on bin packing algorithms. Test results show that the resource usage is substantially improved as a result, with future improvements suggested to further increase this. The solution was implemented as an extension of the HarmonicIO data stream processing framework and evaluated through simulated workloads.
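As an illustration of the bin-packing approach the abstract mentions (the thesis's exact algorithm and parameters are not given here; this is the classic first-fit-decreasing heuristic with invented demands):

```python
def first_fit_decreasing(tasks, capacity):
    """Pack task demands into as few equal-capacity hosts as the heuristic
    manages: place each demand, largest first, into the first host it fits."""
    bins = []
    for demand in sorted(tasks, reverse=True):
        for b in bins:
            if sum(b) + demand <= capacity:
                b.append(demand)
                break
        else:
            bins.append([demand])
    return bins

# Hypothetical CPU demands packed onto hosts with 8 cores each.
print(first_fit_decreasing([4, 3, 3, 2, 2, 2], capacity=8))
# [[4, 3], [3, 2, 2], [2]]
```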
21

Reinhard, Erik. "Scheduling and data management for parallel ray tracing." Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302169.

22

Wilke, Achim. "Data-processing development in German design offices." Thesis, Brunel University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292979.

23

Tucker, Peter A. "Punctuated data streams." 2005. http://content.ohsu.edu/u?/etd,255.

24

Darrous, Jad. "Scalable and Efficient Data Management in Distributed Clouds : Service Provisioning and Data Processing." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN077.

Abstract:
This thesis focuses on scalable data management solutions to accelerate service provisioning and enable efficient execution of data-intensive applications in large-scale distributed clouds. Data-intensive applications are increasingly running on distributed infrastructures (multiple clusters). The two main reasons for this trend are that 1) moving computation to the data sources can eliminate the latency of data transmission, and 2) storing data on one site may not be feasible given the continuous increase in data size. On the one hand, most applications run on virtual clusters to provide isolated services, and require virtual machine images (VMIs) or container images to provision such services. Hence, it is important to enable fast provisioning of virtualization services to reduce the waiting time of newly launched services or applications. In the first part of this thesis, we worked on optimizing data retrieval and placement, considering challenging issues that include the continuous increase in the number and size of VMIs and container images, and the limited bandwidth and heterogeneity of wide area network (WAN) connections. On the other hand, data-intensive applications rely on replication to provide dependable and fast services, but replication has become expensive and even infeasible with the unprecedented growth of data size. The second part of this thesis provides one of the first studies on understanding and improving the performance of data-intensive applications when replication is replaced with the storage-efficient erasure coding (EC) technique.
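A small arithmetic sketch of why erasure coding is more storage-efficient than replication (standard textbook figures, not results from the thesis): both settings below tolerate the loss of any two nodes, yet their storage overheads differ sharply:

```python
def storage_overhead(scheme):
    """Extra storage as a fraction of the original data size."""
    kind, *params = scheme
    if kind == "replication":
        (copies,) = params
        return copies - 1        # e.g. 3 copies -> 200% extra
    if kind == "erasure":
        data, parity = params
        return parity / data     # e.g. 6 data + 2 parity -> ~33% extra

# Both tolerate two node failures (hypothetical deployment figures).
print(storage_overhead(("replication", 3)))  # 2.0  -> 200% overhead
print(storage_overhead(("erasure", 6, 2)))   # 0.33 -> ~33% overhead
```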
25

Monk, Kitty A. "Data management in MARRS." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9939.

26

Xie, Tian [謝天]. "Development of a XML-based distributed service architecture for product development in enterprise clusters." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B30477165.

27

Tidmus, Jonathan Paul. "Task and data management for parallel particle tracing." Thesis, University of the West of England, Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387936.

28

Görlitz, Olaf. "Distributed query processing for federated RDF data management." Koblenz: Universitätsbibliothek Koblenz, 2015. http://d-nb.info/1065246986/34.

29

Pitts, David Vernon. "A storage management system for a reliable distributed operating system." Diss., Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/16895.

30

Tao, Yufei. "Indexing and query processing of spatio-temporal data." Hong Kong University of Science and Technology, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20TAO.

Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 208-215). Also available in electronic version. Access restricted to campus users.
31

Chitondo, Pepukayi David Junior. "Data policies for big health data and personal health data." Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2479.

Abstract:
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2016.
Health information policies are constantly becoming a key feature in directing information usage in healthcare. After the passing of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009 and the Affordable Care Act (ACA) in 2010 in the United States, there has been an increase in health systems innovations. Coupled with this health systems hype is the current buzz concept in Information Technology, 'big data'. The prospects of big data are full of potential, even more so in the healthcare field where the accuracy of data is life critical. How big health data can be used to achieve improved health is now the goal of the current health informatics practitioner. Even more exciting is the amount of health data being generated by patients via personal handheld devices and other forms of technology that exclude the healthcare practitioner. This patient-generated data is also known as Personal Health Records (PHR). To achieve meaningful use of PHRs and healthcare data in general through big data, a couple of hurdles have to be overcome. First and foremost is the issue of privacy and confidentiality of the patients whose data is concerned. Second is the perceived trustworthiness of PHRs by healthcare practitioners. Other issues to take into context are data rights and ownership, data suppression, IP protection, data anonymisation and re-identification, information flow and regulations, as well as consent biases. This study sought to understand the role of data policies in the process of data utilisation in the healthcare sector, with added interest in PHR utilisation as part of big health data.
32

Laribi, Atika. "A protection model for distributed data base management systems." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/53883.

Abstract:
Security is important for Centralized Data Base Management Systems (CDBMS) and becomes crucial for Distributed Data Base Management Systems (DDBMS) when different organizations share information. Secure cooperation can be achieved only if each participating organization is assured that the data it makes available will not be abused by other users. In this work differences between CDBMS and DDBMS that characterize the nature of the protection problem in DDBMS are identified. These differences are translated into basic protection requirements. Policies that a distributed data base management protection system should allow are described. The system proposed in this work is powerful enough to satisfy the stated requirements and allow for variations on the policies. This system is a hybrid one where both authorizations and constraints can be defined. The system is termed hybrid because it combines features of both open and closed protection systems. In addition the hybrid system, although designed to offer the flexibility of discretionary systems, incorporates the flow control of information between users, a feature found only in some nondiscretionary systems. Furthermore, the proposed system is said to be integrated because authorizations and constraints can be defined on any of the data bases supported by the system including the data bases containing the authorizations, and the constraints themselves. The hybrid system is incorporated in a general model of DDBMS protection. A modular approach is taken for the design of the model. This approach allows us to represent the different options for the model depending on the set of policy choices taken. Three levels of abstraction describing different aspects of DDBMS protection problems are defined. The conceptual level describes the protection control of the DDBMS transactions and information flows. The logical level is concerned with the interaction between the different organizations participating in the DDBMS. The physical level is involved with the architectural implementation of the logical level.
Ph. D.
33

Zhao, Jianbin [趙建賓]. "A portalet-based DIY approach to collaborative product commerce." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B27769793.

34

Tacic, Ivan. "Efficient Synchronized Data Distribution Management in Distributed Simulations." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6822.

Abstract:
Data distribution management (DDM) is a mechanism to interconnect data producers and data consumers in a distributed application. Data producers provide useful data to consumers in the form of messages. For each message produced, DDM determines the set of data consumers interested in receiving the message and delivers it to those consumers. We are particularly interested in DDM techniques for parallel and distributed discrete event simulations. Thus far, researchers have treated synchronization of events (i.e. time management) and DDM independently of each other. This research focuses on how to realize time-managed DDM mechanisms. The main reason for time-managed DDM is to ensure that changes in the routing of messages from producers to consumers occur in a correct sequence. Time-managed DDM also avoids non-determinism in the federation execution, which may result in non-repeatable executions. An optimistic approach to time-managed DDM is proposed in which DDM events may be processed out of time stamp order, with a detection and recovery procedure used to recover from such errors. These mechanisms are tailored to the semantics of the DDM operations to ensure an efficient realization. A correctness proof is presented to verify that the algorithm correctly synchronizes DDM events. We have developed a fully distributed implementation of the algorithm within the framework of the Georgia Tech Federated Simulation Development Kit (FDK) software. A performance evaluation of the synchronized DDM mechanism has been completed in a loosely coupled distributed system consisting of a network of workstations connected over a local area network (LAN). We compare time-managed versus unsynchronized DDM for two applications that exercise different mobility patterns: one based on a military simulation and a second utilizing a synthetic workload. The experiments and analysis illustrate that synchronized DDM performance depends on several factors: the simulation model (e.g. lookahead), the application's mobility patterns and the network hardware (e.g. size of network buffers). Under certain mobility patterns, time-managed DDM is as efficient as unsynchronized DDM. There are also mobility patterns where time-managed DDM overheads become significant, and we show how they can be reduced.
35

Oelofse, Andries Johannes. "Development of a MAIME-compliant microarray data management system for functional genomics data integration." Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-08222007-135249.

36

Persson, Mathias. "Simultaneous Data Management in Sensor-Based Systems using Disaggregation and Processing." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188856.

Abstract:
To enable high-performance data management in sensor-based systems, the components of a system architecture have to be tailored to the situation at hand. Each component has to handle a massive amount of data independently, and at the same time cooperate with other components within the system. To facilitate rapid data processing between components, a model detailing the flow of information and specifying internal component structures assists in designing faster and more reliable systems. This thesis presents a model for a scalable, safe, reliable and high-performing system for managing sensor-based data. Based on the model, a prototype is developed that can handle a large number of messages from various distributed sensors. The different components within the prototype are evaluated and their advantages and disadvantages are presented. The results support the prototype's architecture and validate the initial requirements on how it should operate to achieve high performance. By combining components with individual advantages, a system can be designed that allows a large amount of simultaneous data to be disaggregated into its respective categories, processed to make the information usable, and stored in a database for easy access by interested parties.
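As a hedged sketch of the disaggregate-then-process flow described (not the prototype's actual components; the message schema and handlers are invented):

```python
from collections import defaultdict

def disaggregate(messages, handlers):
    """Route each raw sensor message to the handler for its category and
    collect the processed results per category."""
    out = defaultdict(list)
    for msg in messages:
        kind = msg["type"]
        out[kind].append(handlers[kind](msg))
    return dict(out)

# Hypothetical per-category processing steps.
handlers = {
    "temperature": lambda m: m["value"] * 1.8 + 32,  # Celsius -> Fahrenheit
    "humidity": lambda m: m["value"] / 100,           # percent -> fraction
}
messages = [{"type": "temperature", "value": 21.0},
            {"type": "humidity", "value": 55}]
print(disaggregate(messages, handlers))
```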
37

Slingsby, T. P. "An investigation into the development of a facilities management system for the University of Cape Town." Master's thesis, University of Cape Town, 2004. http://hdl.handle.net/11427/5585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Fernández, Moctezuma Rafael J. "A Data-Descriptive Feedback Framework for Data Stream Management Systems." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/116.

Full text
Abstract:
Data Stream Management Systems (DSMSs) provide support for continuous query evaluation over data streams. Data streams pose processing challenges due to their unbounded nature and varying characteristics, such as rate and density fluctuations. DSMSs need to adapt stream processing to these changes within certain constraints, such as available computational resources and minimum latency requirements for producing results. The proposed research develops an inter-operator feedback framework, where opportunities for run-time adaptation of stream processing are expressed in terms of descriptions of substreams and actions applicable to those substreams, called feedback punctuations. Both the discovery of adaptation opportunities and the exploitation of these opportunities are performed in the query operators. DSMSs are also concerned with state management, in particular state derived from tuple processing. The proposed research also introduces the Contracts Framework, which provides execution guarantees about state purging in continuous query evaluation for systems with and without inter-operator feedback. This research provides both theoretical and design contributions. It also includes an implementation and evaluation of the feedback techniques in the NiagaraST DSMS, and a reference implementation of the Contracts Framework.
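A feedback punctuation, as described above, pairs a substream description with an action. The toy operator below (the names and data structures are invented; this is not NiagaraST's API) drops tuples covered by a "drop" punctuation, illustrating how a downstream operator's feedback can shed load inside the query pipeline.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator

@dataclass
class FeedbackPunctuation:
    """A description of a substream plus an action applicable to it."""
    matches: Callable[[dict], bool]   # which tuples belong to the substream
    action: str                       # e.g. "drop" to shed that substream

def apply_feedback(stream: Iterable[dict],
                   punctuations: list) -> Iterator[dict]:
    """Toy operator: forwards tuples unless a 'drop' punctuation covers them."""
    for tup in stream:
        if any(p.action == "drop" and p.matches(tup) for p in punctuations):
            continue   # run-time adaptation: shed this substream
        yield tup

# A downstream operator signals that tuples older than t=100 are no longer useful.
stale = FeedbackPunctuation(matches=lambda t: t["ts"] < 100, action="drop")
stream = [{"ts": 90, "v": 1}, {"ts": 120, "v": 2}]
print(list(apply_feedback(stream, [stale])))   # [{'ts': 120, 'v': 2}]
```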
APA, Harvard, Vancouver, ISO, and other styles
39

Mohamad, Baraa. "Medical Data Management on the cloud." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22582.

Full text
Abstract:
French abstract unavailable.
Medical data management has become a real challenge due to the emergence of new imaging technologies providing high image resolutions. This thesis focuses in particular on the management of DICOM files. DICOM is one of the most important medical standards. DICOM files have a special data format in which one file may contain regular data, multimedia data and services. These files are extremely heterogeneous (the schema of a file cannot be predicted) and large. The characteristics of DICOM files, added to the general requirements of medical data management in terms of availability and accessibility, led us to formulate our research question as follows: is it possible to build a system that (1) is highly available, (2) supports any medical images (different specialties, modalities and physicians' practices), (3) can store extremely large and ever-increasing volumes of data, (4) provides expressive access, and (5) is cost-effective? In order to answer this question we built a hybrid (row-column) cloud-enabled storage system. The idea of this solution is to disperse DICOM attributes thoughtfully, depending on their characteristics, over both data layouts in a way that provides the best of the row-oriented and column-oriented storage models in one system, while exploiting cloud features to ensure the availability and portability of medical data. Storing data in such a hybrid layout opens the door to a second research question: how to process queries efficiently over this hybrid data storage while enabling new and more efficient query plans. The originality of our proposal comes from the fact that there is currently no system that stores data in such a hybrid layout (i.e., an attribute resides either in a row-oriented database or in a column-oriented one, and a given query may interrogate both storage models at the same time) and studies query processing over it. The experimental prototypes implemented in this thesis show interesting results and open the door to multiple optimizations and further research questions.
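The dispersal idea in this abstract can be sketched briefly. The attribute split, the two toy stores, and the sample query below are assumptions for illustration only, not the thesis's actual system: dense, frequently filtered attributes go to a column layout, while sparse per-file tags stay in a row layout, and one query may touch both.

```python
# Toy DICOM-like records: a few dense common attributes plus per-file extras.
files = [
    {"PatientID": "P1", "Modality": "MR", "Rows": 512, "CustomTagA": "x"},
    {"PatientID": "P2", "Modality": "CT", "Rows": 1024, "CustomTagB": 3},
]

COLUMN_ATTRS = {"PatientID", "Modality"}       # dense, frequently filtered

column_store = {a: [] for a in COLUMN_ATTRS}   # attribute -> column of values
row_store = []                                 # one record per file for sparse tags

for f in files:
    for attr in COLUMN_ATTRS:
        column_store[attr].append(f.get(attr))
    row_store.append({k: v for k, v in f.items() if k not in COLUMN_ATTRS})

# A query interrogating both layouts: filter on the column store, then
# fetch the remaining attributes of matching files from the row store.
match = [i for i, m in enumerate(column_store["Modality"]) if m == "MR"]
print([row_store[i] for i in match])           # [{'Rows': 512, 'CustomTagA': 'x'}]
```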
APA, Harvard, Vancouver, ISO, and other styles
40

Paul, Daniel. "Decision models for on-line adaptive resource management." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Hoffman, A. R. "Information technology decision making in South Africa : a framework for company-wide strategic IT management." Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/15854.

Full text
Abstract:
Includes bibliography.
The area of interest in which this Study is set is the linking of a company's business strategies with its strategic planning for IT (information technology). The objectives of the Study are: to investigate how the IT planning environment is changing for business enterprises in South Africa; to establish how successfully South African companies are managing IT strategically; to propose a new approach to strategic IT decision making that will help South African management deal with the major issues; to propose a way of implementing the approach. In Chapter 2, conclusions are drawn from an examination of the key strategic IT planning literature. It appears that fundamental changes are indeed taking place, and are producing significant shifts in the way researchers, consultants and managers think about IT. The survey of South African management opinion is described in Chapter 3. The opinions analyzed range over environmental trends, strategic decision making practices, and what an acceptable strategic IT decision making framework would look like. The need for a new, comprehensive approach to strategic IT decision making in South Africa is clearly established. In Chapter 4, a theoretical Framework is proposed as a new, comprehensive approach to strategic IT decision making. The Framework covers five strategic tasks: analysing the key environmental issues; determining the purposes and uses of IT in competitive strategy and organizational designs; developing the IT infrastructure, human systems, information systems, and human resources to achieve these purposes and uses; implementing the strategic IT decisions; and learning to make better strategic IT decisions. In Chapter 5, ways of implementing the Framework in practice are identified. A means of evaluating its acceptability in a specific company is also proposed. The general conclusions of the Study are presented in Chapter 6. The Framework developed in this Study is intended for use, not directly by the IT decision makers themselves, but by the persons responsible for designing the IT decision making processes of the company. It is not, however, offered as a theory or a methodology. The aim is simply to provide a conceptual "filing system", to help designers uncover and classify the IT strategy problems of their own company, to identify the tools their decision makers need, and to put appropriate problem solving processes in place.
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Deji. "Real-time data management in the distributed environment /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lynch, Kevin John. "Data manipulation in collaborative research systems." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184923.

Full text
Abstract:
This dissertation addresses data manipulation in collaborative research systems, including what data should be stored, the operations to be performed on that data, and a programming interface to effect this manipulation. Collaborative research systems are discussed, and requirements for next-generation systems are specified, incorporating a range of emerging technologies including multimedia storage and presentation, expert systems, and object-oriented database management systems. A detailed description of a generic query processor constructed specifically for one collaborative research system is given, and its applicability to next-generation systems and emerging technologies is examined. Chapter 1 discusses the Arizona Analyst Information System (AAIS), a successful collaborative research system being used at the University of Arizona and elsewhere. Chapter 2 describes the generic query processing approach used in the AAIS as an efficient, nonprocedural, high-level programmer interface to databases. Chapter 3 specifies requirements for next-generation collaborative research systems that encompass the entire research cycle for groups of individuals working on related topics over time. These requirements are being used to build a next-generation collaborative research system at the University of Arizona called CARAT, for Computer Assisted Research and Analysis Tool. Chapter 4 addresses the underlying data management systems in terms of the requirements specified in Chapter 3. Chapter 5 revisits the generic query processing approach used in the AAIS in light of the requirements of Chapter 3 and the range of data management solutions described in Chapter 4, and demonstrates the approach to be viable for both the requirements of Chapter 3 and the DBMSs of Chapter 4. The significance of this research takes several forms. First, Chapters 1 and 3 provide detailed views of a current collaborative research system and of a set of requirements for next-generation systems, based on years of experience both using and building the AAIS. Second, the generic query processor described in Chapters 2 and 5 is shown to be an effective, portable programming-language-to-database interface, ranging across the set of requirements for collaborative research systems as well as a number of underlying data management solutions.
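The nonprocedural, high-level database interface this abstract attributes to the generic query processor can be illustrated with a small sketch. The declarative query format and the `build_select` helper below are hypothetical, not the AAIS query processor itself; the point is that callers describe what they want and the processor generates the database access.

```python
import sqlite3

def build_select(query: dict) -> tuple:
    """Translate a declarative query description into parameterized SQL.

    Callers state *what* they want (table, fields, conditions); the
    processor decides *how* to fetch it.
    """
    where = query.get("where", {})
    clause = " AND ".join(f"{col} = ?" for col in where) or "1 = 1"
    sql = (f"SELECT {', '.join(query['fields'])} "
           f"FROM {query['table']} WHERE {clause}")
    return sql, list(where.values())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (author TEXT, topic TEXT, body TEXT)")
conn.execute("INSERT INTO notes VALUES ('lynch', 'ddm', 'draft')")
sql, params = build_select({"table": "notes",
                            "fields": ["body"],
                            "where": {"author": "lynch"}})
print(conn.execute(sql, params).fetchall())   # [('draft',)]
```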
APA, Harvard, Vancouver, ISO, and other styles
44

Benatar, Gil. "Thermal/structural integration through relational database management." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/19484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Nehme, Rimma V. "Continuous query processing on spatio-temporal data streams." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-082305-154035/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Roy, Amber Joyce. "Dynamic Grid-Based Data Distribution Management in Large Scale Distributed Simulations." Thesis, University of North Texas, 2000. https://digital.library.unt.edu/ark:/67531/metadc2699/.

Full text
Abstract:
Distributed simulation is an enabling concept to support the networked interaction of models and real-world elements that are geographically distributed. This technology has brought a new set of challenging problems to solve, such as Data Distribution Management (DDM). The aim of DDM is to limit and control the volume of data exchanged during a distributed simulation, and to reduce the processing requirements of the simulation hosts by relaying events and state information only to those applications that require them. In this thesis, we propose a new DDM scheme, which we refer to as dynamic grid-based DDM. A lightweight UNT-RTI has been developed and implemented to investigate the performance of our DDM scheme. Our results clearly indicate that our scheme is scalable and that, under large-scale and real-world scenarios, it significantly reduces both the number of multicast groups used and the message overhead when compared to previous grid-based allocation schemes.
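A grid-based DDM scheme of the kind evaluated here can be sketched briefly. The cell size, region format, and group bookkeeping below are invented for illustration; each grid cell stands in for one multicast group, so updates reach only the subscribers whose regions overlap the same cells.

```python
CELL = 10   # grid cell size in world units (assumed)

def cells(region):
    """Grid cells overlapped by an axis-aligned region (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = region
    return {(cx, cy)
            for cx in range(x1 // CELL, x2 // CELL + 1)
            for cy in range(y1 // CELL, y2 // CELL + 1)}

groups = {}   # grid cell -> set of subscriber ids (one multicast group per cell)

def subscribe(sub_id, region):
    """A subscriber joins the multicast group of every cell its region overlaps."""
    for cell in cells(region):
        groups.setdefault(cell, set()).add(sub_id)

def publish(region):
    """An update is delivered only to the groups its region overlaps."""
    receivers = set()
    for cell in cells(region):
        receivers |= groups.get(cell, set())
    return receivers

subscribe("tank_42", (0, 0, 15, 15))   # joins cells (0,0), (0,1), (1,0), (1,1)
print(publish((12, 12, 14, 14)))       # {'tank_42'} via cell (1,1)
```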
APA, Harvard, Vancouver, ISO, and other styles
47

Paul, Debashis. "A methodology for assessing computer software applicability to inventory and facility management." Thesis, Virginia Tech, 1989. http://hdl.handle.net/10919/43085.

Full text
Abstract:
Computer applications have become popular and widespread in architecture and other related fields. While the architect uses a computer for design and construction of a building, the user takes the advantage of computer for maintenance of the building. Inventory and facility management are two such fields where computer applications have become predominant. The project has investigated the use and application of different commercially available computer software in the above mentioned fields. A set of user requirements for inventory and facility management were established for different organizations. Four different types of software were chosen to examine their capabilities for fulfilling the requirements. Software from different vendors were chosen to compare and study the feasibility of application of each. The process of evaluation has been developed as a methodology for assessing different computer software applications in inventory and facility management: Special software applications and hardware considerations for developing computer-aided inventory and facility management, has also been discussed. The documentation and evaluation of software shall provide a person the basic knowledge of computer applications in inventory and facility management. The study shall also help building managers and facility managers develop their own criteria for choosing computer software to fulfill their particular requirements
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
48

Chan, Sze-hang, and 陳思行. "Competitive online job scheduling algorithms under different energy management models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/206690.

Full text
Abstract:
Online flow-time scheduling is a fundamental problem in computer science and has been extensively studied for years. It asks how to design a scheduler to serve computer jobs with unpredictable arrival times and varying sizes and priorities so as to minimize the total flow time (better understood as response time) of the jobs. It has many applications, most notably in the operation of server farms. As energy has become an important issue, the design of a scheduler also has to take power management into consideration, for example, how to scale the speed of the processors dynamically. The objectives conflict, as one would prefer a lower processor speed to save energy, yet a good quality of service must be retained. In this thesis, I study a few scheduling problems for energy and flow time in depth and give new algorithms to tackle them. The competitiveness of our algorithms is guaranteed with worst-case mathematical analysis against the best possible or hypothetical solutions. In the speed scaling model, the power of a processor increases with its speed according to a certain function (e.g., a cubic function of speed). Among all online scheduling problems with speed scaling, the nonclairvoyant setting (in which the size of a job is not known during its execution) with arbitrary priorities is perhaps the most challenging. This thesis gives the first competitive algorithm, called WLAPS, for this setting. In reality, it is not uncommon that during peak-load periods some (low-priority) users have their jobs rejected by the servers. This motivated me to study more complicated scheduling algorithms that can strike a good balance among speed scaling, flow time and rejection penalty. Two new algorithms, UPUW and HDFAC, for different models of rejection penalty have been proposed and analyzed. Last, but perhaps most interestingly, we study power management in a large server farm environment in which the primary energy-saving mechanism is to put some processors to sleep. Two new algorithms, POOL and SATA, have been designed to tackle jobs that cannot and can migrate among the processors, respectively. They are integrated algorithms that consider speed scaling, job scheduling and processor sleep management together to optimize energy usage and flow time simultaneously. These algorithms are again proven mathematically to be competitive even in the worst case.
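The speed scaling trade-off in this abstract can be made concrete with a toy calculation: under a power function such as power = speed^3, running the same workload faster lowers total flow time but raises energy consumption. The jobs and the fixed-speed policy below are assumptions for illustration, not the thesis's algorithms.

```python
ALPHA = 3.0   # power = speed ** ALPHA, the cubic power function cited above

def run(jobs, speed):
    """Serve jobs (arrival, size) FCFS at a fixed speed; return (flow, energy)."""
    t = flow = energy = 0.0
    for arrival, size in sorted(jobs):
        t = max(t, arrival)            # the processor may idle until arrival
        duration = size / speed
        t += duration
        flow += t - arrival            # this job's response (flow) time
        energy += (speed ** ALPHA) * duration
    return flow, energy

jobs = [(0, 4), (1, 2), (2, 6)]
for s in (1.0, 2.0, 3.0):
    print(s, run(jobs, s))   # raising the speed cuts flow time but costs energy
```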
Computer Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Junxian. "Online hotel booking system." CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/3083.

Full text
Abstract:
The Online Hotel Booking System was developed to allow customers to use a web browser to book a hotel, change booking details, cancel a booking, update their personal profile, view their booking history, or view hotel information through a GUI (graphical user interface). The system is implemented in PHP (Hypertext Preprocessor) and HTML (HyperText Markup Language).
APA, Harvard, Vancouver, ISO, and other styles
50

Hatchell, Brian. "Data base design for integrated computer-aided engineering." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/16744.

Full text
APA, Harvard, Vancouver, ISO, and other styles