Dissertations / Theses on the topic 'Database Operators'

To see the other types of publications on this topic, follow the link: Database Operators.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Database Operators.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Sjö, Kristoffer. "Semantics and Implementation of Knowledge Operators in Approximate Databases." Thesis, Linköping University, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2438.

Abstract:

In order that epistemic formulas might be coupled with approximate databases, it is necessary to have a well-defined semantics for the knowledge operator and a method of reducing epistemic formulas to approximate formulas. In this thesis, two possible definitions of a semantics for the knowledge operator are proposed for use together with an approximate relational database:

* One based upon logical entailment (the dominant notion of knowledge in the literature); sound and complete rules for reduction to approximate formulas are explored and found not to be applicable to all formulas.

* One based upon algorithmic computability (in order to be practically feasible); the correspondence to the above operator on the one hand, and to the deductive capability of the agent on the other hand, is explored.

Also, an inductively defined semantics for a "know whether" operator is proposed and tested. Finally, an algorithm implementing the above is proposed, implemented in Java, and tested.
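As an illustration of how such a semantics can be reduced to checks against an approximate relation, the following is a minimal sketch assuming a predicate represented by lower and upper approximations; the names and the encoding are our own assumptions, not the thesis's definitions or code.

```python
# Illustrative sketch only: a knowledge operator over an approximate
# relation, where a predicate p is represented by a lower approximation
# (tuples certainly satisfying p) and an upper approximation (tuples
# possibly satisfying p). Not the thesis's actual semantics or code.

def know(t, lower):
    """K(p)(t): t is known to satisfy p."""
    return t in lower

def know_whether(t, lower, upper):
    """Kw(p)(t): p is decided for t either way -- t certainly satisfies
    p, or t certainly does not (it lies outside the upper approximation)."""
    return t in lower or t not in upper

lower = {("obj1",)}                  # certainly p
upper = {("obj1",), ("obj2",)}       # possibly p
print(know(("obj2",), lower))                 # False: only possible
print(know_whether(("obj2",), lower, upper))  # False: undecided
print(know_whether(("obj3",), lower, upper))  # True: certainly not p
```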

2

Müller, Ingo. "Engineering Aggregation Operators for Relational In-Memory Database Systems." Advisor: P. Sanders. Karlsruhe: KIT-Bibliothek, 2016. http://d-nb.info/1106329953/34.

3

Amenabar, Leire, and Leire Carreras. "Augmented Reality Framework for Supporting and Monitoring Operators during Maintenance Operations in Industrial Environments." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15717.

Abstract:
In an ever-changing and demanding world where short assembly and innovation times are indispensable, it is of paramount importance to ensure that the machinery used throughout the whole production process is in its best possible condition. This guarantees that the performance of each machine will be optimal, and hence that process times will be the shortest possible while the best-quality products are obtained. Moreover, keeping a machine in impeccable condition permits making the necessary changes to it in order to fulfil the requirements that a more advanced or complex product may have. Maintenance operations and their corresponding training have historically been time-consuming, and a vast amount of information has had to be transmitted from an expert to a newer operator. This means that there has been a need to work with experienced operators to ensure that a good service is provided. However, technologies like augmented reality (AR) have been shown to have a positive impact on the support and monitoring of operators in industrial maintenance operations. The present project gathers information regarding the framework of AR, with the aim of supporting and monitoring operators in industrial environments. The proposed method consists of the development of an artefact, which could lead to an improvement over already existing solutions. It is believed that the development of an AR application could grant the necessary aid to any operator in maintenance operations. The result of this work is an AR application which superimposes visual information on the physical equipment.
4

McCormick, Donald W., II. "Towards A Sufficient Set of Mutation Operators for Structured Query Language (SQL)." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/32526.

Abstract:
Test suites for database applications depend on adequate test data and real-world test faults for success. An automated tool is available that quantifies test data coverage for database queries written in SQL. An automated tool is also available that mimics real-world faults by mutating SQL; however, tests have revealed that these simulated faults do not completely represent real-world faults. This thesis demonstrates that half of the mutation operators used by the SQL mutation tool generated significantly lower detection scores in real-world test suites than in research test suites. Three revised mutation operators are introduced that improve detection scores and contribute toward re-defining a sufficient set of mutation operators for SQL. Finally, a procedure is presented that reduces the test burden by automatically comparing SQL mutants with their original queries.
Master of Science
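For concreteness, the sketch below shows one classic SQL mutation operator, relational-operator replacement (ROR), of the general kind such tools apply; it is our own simplified illustration (regex-based rather than parser-based), not the tool or the revised operators evaluated in the thesis.

```python
import re

# Sketch of one classic SQL mutation operator: relational-operator
# replacement (ROR). Each mutant swaps a single comparison operator,
# mimicking an inverted- or off-by-one-condition fault. Simplified: a
# real tool would parse the SQL rather than scan it with a regex.
OPERATORS = ["<=", ">=", "<>", "<", ">", "="]

def ror_mutants(sql):
    mutants = []
    for match in re.finditer(r"<=|>=|<>|<|>|=", sql):
        for op in OPERATORS:
            if op != match.group():
                mutants.append(sql[:match.start()] + op + sql[match.end():])
    return mutants

for m in ror_mutants("SELECT * FROM orders WHERE qty >= 10"):
    print(m)   # five mutants, one per alternative operator
```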
5

Bhide, Ashwini M. "Analysis of Accidents and Injuries of Construction Equipment Operators." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1147378056.

6

Jäkel, Tobias. "Role-based Data Management." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-224416.

Abstract:
Database systems form an integral component of today's software systems and, as such, are the central point for storing and sharing a software system's data while ensuring global data consistency at the same time. Introducing the primitives of roles and their accompanying metatype distinction into modeling and programming languages results in a novel paradigm for designing, extending, and programming modern software systems. In detail, roles as a modeling concept enable a separation of concerns within an entity. Along with its rigid core, an entity may acquire various roles in different contexts during its lifetime and thus adapt its behavior and structure dynamically at runtime. Unfortunately, database systems, as an important component and the global consistency provider of such systems, do not keep pace with this trend. The absence of a metatype distinction, in terms of an entity's separation of concerns, in the database system results in various problems for the software system in general, for the application developers, and finally for the database system itself. In the case of relational database systems, these problems are gathered under the term role-relational impedance mismatch. In particular, the whole software system is designed using different semantics on various layers. For role-based software systems combined with relational database systems, this gap in semantics between the applications and the database system increases dramatically. Consequently, the database system can directly represent neither the richer semantics of roles nor the accompanying consistency constraints. These constraints have to be ensured by the applications, and the database system loses its single-point-of-truth characteristic in the software system. As the applications are in charge of guaranteeing global consistency, their development requires more effort in data management. Moreover, the software system's data management is distributed over several layers, which results in an unstructured software system architecture. To overcome the role-relational impedance mismatch and bring the database system back to its rightful position as the single point of truth in a software system, this thesis introduces the novel, tripartite RSQL approach. It combines a novel database model that represents the metatype distinction as a first-class citizen in a database system, a query language adapted to that database model, and finally a proper result representation. Specifically, RSQL's logical database model introduces Dynamic Data Types to directly represent the separation of concerns within an entity type at the schema level. At the instance level, the database model defines the notion of a Dynamic Tuple, which combines an entity with the notion of roles and thus allows dynamic structural adaptation at runtime without changing an entity's overall type. These definitions form the main data structures on which the database system operates. Moreover, formal operators connecting the query language statements with the database model's data structures complete the database model. The query language, as the external database system interface, features an individual data definition, data manipulation, and data query language. Their statements directly represent the metatype distinction to address Dynamic Data Types and Dynamic Tuples, respectively. As a consequence of the novel data structures, the query processing of Dynamic Tuples is completely redesigned.
As the last piece of a complete database integration of the role notion and its accompanying metatype distinction, we specify the RSQL Result Net as the result representation. It provides a novel result structure and features functionality to navigate through query results. Finally, we evaluate all three RSQL components in comparison to a relational database system. This assessment clearly demonstrates the benefits of the full database integration of the roles concept.
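To picture the instance-level idea informally, a Dynamic Tuple can be thought of as a rigid core plus a runtime-variable set of role instances. The sketch below is our own simplification for illustration; it does not reproduce RSQL's actual data structures or syntax.

```python
# Illustrative simplification of a Dynamic Tuple: a rigid core entity
# plus roles that can be acquired and abandoned at runtime without
# changing the entity's overall type. Not RSQL's actual structures.
class DynamicTuple:
    def __init__(self, core_type, **core_attrs):
        self.core_type = core_type
        self.core = core_attrs
        self.roles = {}            # role name -> role attributes

    def acquire(self, role, **attrs):
        self.roles[role] = attrs   # structure adapts dynamically

    def abandon(self, role):
        self.roles.pop(role, None)

p = DynamicTuple("Person", name="Ada")
p.acquire("Employee", salary=50000)   # Ada plays the Employee role
p.abandon("Employee")                 # role dropped; core type unchanged
print(p.core_type, p.core, p.roles)   # Person {'name': 'Ada'} {}
```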
7

Gonzaga, André dos Santos. "The Similarity-aware Relational Division Database Operator." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17112017-135006/.

Abstract:
In Relational Algebra, the Division operator (÷) is an intuitive tool used to write queries with the concept of for all, and thus it is constantly required in real applications. However, as we demonstrate in this MSc work, the division does not support many of the needs common to modern applications, particularly those that involve complex data analysis, such as processing images, audio, genetic data, large graphs, fingerprints, and many other non-traditional data types. The main issue is the existence of intrinsic comparisons of attribute values in the operator, which, by definition, are always performed by identity (=), despite the fact that complex data must be compared by similarity. Recent works focus on supporting similarity comparison in relational operators, but none treats the division. This MSc work proposes the new Similarity-aware Division operator. Our novel operator is naturally well suited to answering queries based on an idea of candidate elements and exigencies to be evaluated over complex data from high-impact real applications. For example, it is potentially useful to support agriculture, genetic analyses, and digital library search, and even to help control the quality of manufactured products and identify new clients in industry. We validate our proposal by studying the first two of these applications.
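The intuition behind a similarity-aware division can be sketched in a few lines: a candidate qualifies if, for every tuple of the divisor, the dividend associates the candidate with at least one value within a similarity threshold of that tuple. The following is our own illustration of that concept, not the operator's implementation from the thesis.

```python
# Sketch of similarity-aware division: candidate `a` qualifies if for
# EVERY value b in the divisor there is some pair (a, b') in the
# dividend with dist(b, b') <= eps. Illustration only, not thesis code.
def sim_division(dividend, divisor, dist, eps):
    candidates = {a for a, _ in dividend}
    return {
        a for a in candidates
        if all(any(a2 == a and dist(b, b2) <= eps for a2, b2 in dividend)
               for b in divisor)
    }

dividend = [("x", 1.0), ("x", 5.1), ("y", 1.2)]
divisor = [1.0, 5.0]
print(sim_division(dividend, divisor, lambda u, v: abs(u - v), 0.3))
# {'x'}: only x is associated with values near both 1.0 and 5.0
```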
8

Liknes, Stian. "Database Operations on Multi-Core Processors." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-22990.

Abstract:
The focus of this thesis is on investigating efficient database algorithms and methods for modern multi-core processors in main memory environments. We describe central features of modern processors in a historic perspective before presenting a number of general design goals that should be considered when optimizing relational operators for multi-core architectures. Then, we introduce the skyline operator and related algorithms, including two recent algorithms optimized for multi-core processors. Furthermore, we develop a novel skyline algorithm using an angle-based partitioning scheme originally developed for parallel and distributed database management systems. Finally, we perform a number of experiments in order to evaluate and compare current shared-memory skyline algorithms. Our experiments reveal some interesting results. Despite having an expensive pre-processing step, the angle-based algorithm is able to outperform the current best performers for multi-core skyline computation. In fact, we are able to outperform competing algorithms by a factor of 5 or more for anti-correlated datasets with moderate to large cardinalities. The included algorithms exhibit similar performance characteristics for independent datasets, while the more basic algorithms excel at processing correlated datasets. We observe similar performance for two small real-life datasets, whereas the angle-based algorithm is more efficient for a work-intensive real-life dataset containing more than 2M 5-dimensional tuples. Based on our results, we propose that database research targeted at shared-memory systems focus not only on basic algorithms but also on more sophisticated techniques proven effective in parallel and distributed database management systems. Additionally, we emphasize that modern processors have very fast inter-thread communication mechanisms that can be exploited to achieve parallel speedup even for synchronization-heavy algorithms.
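The angle-based idea itself is compact: points are partitioned by their angle from the origin, a local skyline is computed per partition in parallel, and the partial skylines are merged. The following 2-D sketch (minimizing both dimensions) is our own simplified illustration, not the thesis's algorithm.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def dominates(p, q):
    """p dominates q: no worse in every dimension and not equal."""
    return all(a <= b for a, b in zip(p, q)) and p != q

def skyline(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

def angle_skyline(points, k=4):
    # 1. partition first-quadrant points by their angle from the origin
    parts = [[] for _ in range(k)]
    for x, y in points:
        a = math.atan2(y, x)                     # in [0, pi/2] for x, y >= 0
        parts[min(int(a / (math.pi / 2) * k), k - 1)].append((x, y))
    # 2. compute local skylines in parallel
    with ThreadPoolExecutor(max_workers=k) as ex:
        local = list(ex.map(skyline, parts))
    # 3. merge: points may still be dominated across partition borders
    return skyline([p for part in local for p in part])

pts = [(1, 9), (2, 2), (9, 1), (5, 5), (3, 4)]
print(angle_skyline(pts))    # the three non-dominated points
```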
9

Behzadnia, Peyman. "Dynamic Energy-Aware Database Storage and Operations." Scholar Commons, 2018. http://scholarcommons.usf.edu/etd/7125.

Abstract:
Energy consumption has become a first-class optimization goal in the design and implementation of data-intensive computing systems. This is particularly true for the design of database management systems (DBMS), which are among the most important servers in the software stack of modern data centers. The data storage system is one of the essential components of a database and has been the subject of many research efforts aiming at reducing its energy consumption. In previous work, dynamic power management (DPM) techniques that make real-time decisions to transition disks to low-power modes are normally used to save energy in storage systems. In this research, we tackle the limitations of DPM proposals in previous contributions and design a dynamic energy-aware disk storage system for database servers. We introduce a DPM optimization model integrated with a model predictive control (MPC) strategy to minimize the power consumption of the disk-based storage system while satisfying given performance requirements. It dynamically determines the state of the disks and plans inter-disk data fragment migration to achieve a desirable balance between power consumption and query response time. Furthermore, by analyzing our optimization model to identify structural properties of optimal solutions, a fast heuristic DPM algorithm is proposed that can be integrated into large-scale disk storage systems, where finding the optimal solution might take long, to achieve near-optimal power savings within short periods of computational time. The proposed ideas are evaluated through simulations using an extensive set of synthetic workloads. The results show that our solution achieves up to 1.65 times more energy saving while providing up to 1.67 times shorter response time compared to the best existing algorithm in the literature. Stream join is a dynamic and expensive database operation that performs the join operation in real-time fashion on continuous data streams. Stream joins, also known as window joins, impose high computational cost and potentially higher energy consumption compared to other database operations, and thus we also tackle the energy efficiency of stream join processing in this research. Given that there is a strong linear correlation between the energy efficiency and the performance of in-memory parallel join algorithms in database servers, we study the parallelization of stream join algorithms on multicore processors to achieve energy efficiency and high performance. The equi-join is the most frequent type of join in query workloads, and the symmetric hash join (SHJ) algorithm is the most effective algorithm for evaluating equi-joins on data streams. To the best of our knowledge, we are the first to propose a shared-memory parallel symmetric hash join algorithm on multi-core CPUs. Furthermore, we introduce a novel parallel hash-based stream join algorithm called chunk-based pairing hash join that aims at elevating data throughput and scalability. We also tackle the parallel processing of multi-way stream joins, where more than two input data streams are involved in the join operation. To the best of our knowledge, we are also the first to propose an in-memory parallel multi-way hash-based stream join on multicore processors. Experimental evaluation of our proposed parallel algorithms demonstrates high throughput, significant scalability, and low latency while reducing energy consumption.
Our parallel symmetric hash join and chunk-based pairing hash join achieve up to 11 times and 12.5 times more throughput, respectively, than the state-of-the-art parallel stream join algorithm. These two algorithms also provide up to around 22 times and 24.5 times more throughput, respectively, than non-parallel (sequential) stream join computation with a single processing thread.
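The core of the symmetric hash join that the thesis parallelizes can be sketched as follows; this single-threaded illustration is our own simplification and omits the partitioning and synchronization that the parallel versions add.

```python
from collections import defaultdict

# Core of a (sequential) symmetric hash join on two streams R and S:
# each arriving tuple is inserted into its own hash table and probed
# against the other stream's table, so join results are emitted as soon
# as both matching tuples have arrived. Illustration only.
def symmetric_hash_join(stream):
    tables = {"R": defaultdict(list), "S": defaultdict(list)}
    for side, key, payload in stream:        # interleaved arrivals
        other = "S" if side == "R" else "R"
        tables[side][key].append(payload)    # 1. insert
        for match in tables[other][key]:     # 2. probe
            yield (key, payload, match) if side == "R" else (key, match, payload)

arrivals = [("R", 1, "r1"), ("S", 1, "s1"), ("S", 2, "s2"), ("R", 2, "r2")]
print(list(symmetric_hash_join(arrivals)))
# [(1, 'r1', 's1'), (2, 'r2', 's2')]
```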
10

Tomé, Diego Gomes. "A near-data select scan operator for database systems." Repositório Institucional da UFPR, 2017. http://hdl.handle.net/1884/53293.

Abstract:
Advisor: Eduardo Cunha de Almeida
Co-advisor: Marco Antonio Zanata Alves
Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended: Curitiba, 21/12/2017
Includes references: p. 61-64
A large burden of processing read-mostly databases consists of moving data around the memory hierarchy rather than processing it in the processor. Data movement is penalized by the performance gap between the processor and the memory, the well-known problem called the memory wall. The emergence of smart memories, such as the new Hybrid Memory Cube (HMC), allows mitigating the memory wall problem by executing instructions in logic chips integrated into a stack of DRAMs. These memories can enable not only in-memory databases but also in-memory computation of database operations. In this dissertation, we focus on near-data query processing to reduce data movement through the memory and cache hierarchy. We focus on the select scan database operator, because scanning columns moves large amounts of data prior to other operations like joins (i.e., push-down optimization). Initially, we evaluate the execution of the select scan using the HMC as an ordinary DRAM. Then, we introduce extensions to the HMC Instruction Set Architecture (ISA) to execute our near-data select scan operator inside the HMC, called HMC-Scan. In particular, we extend the HMC ISA with HMC-Scan to internally resolve instruction dependencies. To support branch-less evaluation of the select scan and transform control-flow dependencies into data-flow dependencies (i.e., predicated execution), we propose another HMC ISA extension called HIPE-Scan. HIPE-Scan leads to less interaction between the processor and the HMC during the execution of query filters that depend on in-memory data. We implemented the near-data select scan in row-, column-, and vector-wise query engines for x86 and for the two HMC extensions, HMC-Scan and HIPE-Scan, achieving performance improvements of up to 3.7× for HMC-Scan and 5.6× for HIPE-Scan when executing Query 6 from the 1 GB TPC-H database on the column-wise engine. Keywords: In-Memory DBMS, Hybrid Memory Cube, Processing-in-Memory.
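The branch-less style of evaluation that predicated execution enables can be illustrated on the CPU as follows; this NumPy sketch is our own analogy for exposition, whereas the thesis executes the predicated scan inside the HMC's logic layer.

```python
import numpy as np

# Branch-free (predicated) select scan: the predicate is computed for
# every element as a boolean mask and combined arithmetically, so there
# are no data-dependent branches. Illustrative CPU sketch only; the
# filter shape loosely mirrors a TPC-H Q6-style conjunctive predicate.
def predicated_select(col_a, col_b, lo, hi):
    mask = (col_a >= lo) & (col_a < hi) & (col_b < 24)  # no branches
    return np.flatnonzero(mask)          # qualifying row ids

a = np.array([3, 10, 7, 25, 12])
b = np.array([5, 30, 10, 1, 20])
print(predicated_select(a, b, 5, 20))    # row ids 2 and 4
```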
11

Dai, Jing. "Efficient Concurrent Operations in Spatial Databases." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28987.

Abstract:
As demanded by applications such as GIS, CAD, ecology analysis, and space research, efficient spatial data access methods have attracted much research. In particular, moving object management and continuous spatial queries have become prominent topics in the spatial database area. However, most existing spatial query processing approaches were designed for single-user environments, which may not ensure correctness and data consistency in multi-user environments. This research focuses on designing efficient concurrent operations on spatial datasets. Current multidimensional data access methods can be categorized into two types: 1) pure multidimensional indexing structures such as the R-tree family and the grid file; 2) linear spatial access methods, represented by the Space-Filling Curve (SFC) combined with B-trees. Concurrency control protocols have been designed for some pure multidimensional indexing structures, but none of them is suitable for variants of R-trees with object clipping, which are efficient in searching. On the other hand, no concurrency control protocol has been designed for linear spatial indexing structures, where one-dimensional concurrency control protocols cannot be directly applied. Furthermore, recently designed query processing approaches for moving objects have not been protected by any efficient concurrency control protocol. In this research, solutions for efficient concurrent access frameworks on both types of spatial indexing structures are provided, as well as for continuous query processing on moving objects in multi-user environments. These concurrent access frameworks satisfy the concurrency control requirements while providing outstanding performance for concurrent queries. Major contributions of this research include: (1) a new efficient spatial indexing approach with an object clipping technique, the ZR+-tree, that outperforms the R-tree and R+-tree in searching; (2) a concurrency control protocol, GLIP, that provides high throughput and phantom update protection on spatial indices with object clipping; (3) efficient concurrent operations for indices based on linear spatial access methods, which form the CLAM protocol; (4) efficient concurrent continuous query processing on moving objects for both R-tree-based and linear spatial indexing frameworks; (5) a generic access framework, Disposable Index, for optimal location updates and parallel search.
Ph. D.
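The linearization step behind the SFC-plus-B-tree family of methods discussed above can be illustrated with a Z-order (Morton) encoding; this sketch is our own illustration, not code from the dissertation.

```python
# Sketch of the linearization behind SFC-based spatial indexing:
# interleave the bits of x and y to form a Z-order (Morton) key, which
# an ordinary one-dimensional B-tree can then index. Illustration only.
def z_order(x, y, bits=16):
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bits at even positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bits at odd positions
    return key

# Nearby points tend to receive nearby keys, so B-tree range scans
# approximate spatial locality:
for p in [(2, 3), (3, 3), (10, 12)]:
    print(p, z_order(*p))
```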
12

Andreev, Maxim. "Operations on text in a database programming language." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0017/MQ55034.pdf.

13

Chiama, Jared Alungo. "Text operations for a relational database programming language." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22726.

Abstract:
In this thesis we introduce a method of storing static text data, and algorithms for operations on the data, in a relational database programming language.
We introduce a text data type, and implement relational algebra operations on the text data type. Our method stores the text data in text files external to the relations, and maintains pointers to the text data within the relations. Our algorithms minimize accesses to the actual text data so as to maintain the efficiency of database operations.
We also implement a text processing mechanism where a text script can be joined to a relation, producing an individualized text script for each tuple in the relation. Our implementation includes queries involving pattern searches within text attribute values.
All operations on text data are relational algebra operations, requiring the text data to be in relations, and returning relations as results.
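The out-of-line storage scheme described above can be sketched as follows; the file layout and API are our own illustrative assumptions, not the thesis's implementation.

```python
# Sketch of out-of-line text storage: the relation keeps only
# (offset, length) pointers into an external text file, so relational
# operations move pointers instead of text, and the text itself is
# fetched only when needed. Layout and API are illustrative assumptions.
class TextHeap:
    def __init__(self, path):
        self.path = path
        open(path, "ab").close()            # ensure the heap file exists

    def append(self, text):
        data = text.encode("utf-8")
        with open(self.path, "ab") as f:
            offset = f.tell()               # current end of file
            f.write(data)
        return (offset, len(data))          # the pointer kept in-relation

    def read(self, ptr):
        offset, length = ptr
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(length).decode("utf-8")

heap = TextHeap("texts.dat")
row = {"id": 1, "body": heap.append("long static text ...")}
print(heap.read(row["body"]))    # text materialized only on demand
```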
14

Filippi, Stephen Charles. "Implementing relational operations in an object-oriented database." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23699.

15

Valdivia, Martinez Angélica. "Implementing of G.I.S. spatial operations in a database system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0007/MQ44308.pdf.

16

Valdivia, Martínez Angélica. "Implementing of G.I.S. spatial operations in a database system." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20887.

Abstract:
This thesis presents a set of basic spatial operations implemented in a relational programming language. Our aim in undertaking this work was to demonstrate that a general purpose database system offers the needed flexibility for developing independent applications to manipulate spatial data. Thus, we exploited an existing Relix system by using both relational algebra functions and a relational database model.
Effectiveness, rather than high performance, is the central issue in this work. Of the two aspects (spatial and descriptive) of geographical data, we address only the spatial component. A spatial operation manipulates geometric data objects such as points, lines, and polygons. A relational data model for storing graphic data, and the spatial operations to manipulate them, are needed. For this purpose, some data models together with manipulation techniques are analyzed. We designed and implemented computational geometry vector-based algorithms such as measurements, calculations, buffers, and overlays for two-dimensional objects using relational algebra.
We also consider the important issue of interaction with commercial systems. We used other GIS such as ARC/INFO and MapInfo for data entry and display of results. Moreover, we developed C routines to allow Relix to communicate with the GIS.
We document the usage of each operation and the relational algebra routines. We also provide examples which illustrate the operations. We conclude that the relational algebra can be effectively applied to produce spatial operations in a unified system.
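Two of the operation families mentioned above, measurements and buffers, can be illustrated with plain coordinate arithmetic; the functions below are our own illustrations, not the relational algebra routines documented in the thesis.

```python
import math

# Illustrations of two vector-based spatial operations over coordinate
# tuples: a measurement (polygon area via the shoelace formula) and a
# buffer test (is a point within distance d of another point).
def polygon_area(vertices):
    s = 0.0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def in_buffer(p, q, d):
    return math.dist(p, q) <= d

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(polygon_area(square))            # 16.0
print(in_buffer((0, 0), (3, 4), 5))    # True: distance is exactly 5
```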
17

Sadeghi, R. "A database query language for operations on historical data." Thesis, University of Abertay Dundee, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378932.

18

Wakelin, Andrew. "A database query language for operations on graphical objects." Thesis, Abertay University, 1988. https://rke.abertay.ac.uk/en/studentTheses/826893af-0377-4ec6-a09a-6a5bd246df28.

Abstract:
The motivation for this work arose from the recognised inability of relational databases to store and manipulate data that is outside normal commercial applications (e.g. graphical data). The published work in this area is described with respect to the major problems of representation and manipulation of complex data. A general-purpose data model, called GDB, that successfully tackles these major problems is developed from a formal specification in ML and is implemented using the PRECI/C database system. This model uses three basic graphical primitives (line segments; plane surfaces, or facets; and volume elements, or tetrons) to construct graphical objects, and it is shown how user-designed primitives can be included. It is argued that graphical database query languages should be designed to be application-specific and that the user should be protected from the relational algebra which is the basis of the database operations. Such a base language (an extended version of DEAL) is presented, which is capable of performing the necessary graphical manipulation by the use of recursive functions and views. The need for object hierarchies is established, and the power of the DEAL language is shown to be necessary to handle such complex structures. The importance of integrity constraints is discussed and some ideas for the provision of user-defined constraints are put forward.
19

Madipally, Sunil veer Kumar. "Simulation of Sawmill Yard Operations Using Software Agents." Thesis, Högskolan Dalarna, Datateknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:du-6026.

Abstract:
Bergkvist Insjön AB is a sawmill yard capable of producing 350,000 cubic meters of timber every year, which requires a lot of internal resources. Sawmill operations can be classified as unloading, sorting, storage, and production of timber. Trucks arrive at the company at random and have to be unloaded and sent back as soon as possible to avoid queues of trucks, which create a problem for the truck owners. The sawmill yard operates two log stackers that perform several tasks, including transporting logs from the trucks to the measurement station, where the logs are sorted into classes and dropped into pockets; from the pockets to the sorted timber yard, where they are stored; and finally from there to the sawmill for final processing. The main issue addressed here is the queue of trucks waiting to be unloaded, which is a problem for both the sawmill and the truck owners; given the huge production volume, it is certain that the handling of resources is a top priority. A key challenge in the handling of resources is unloading the trucks and finding a way to optimize the internal resources. To address this problem, I experimented with different ways of using the internal resources and designed different cases. In case 1, both log stackers work on the sawmill and the measurement station; the objective of this case is to keep the sawmill and the measurement station working all the time. In case 2, the work is divided between the two log stackers: one works on the sawmill and pocket_control, and the second works on the measurement station and the trucks. In case 3, only one log stacker works on all the agents; this case was designed to reduce the cost of production. As the experiments cannot be done in real time due to operational costs, simulation is used. A preliminary investigation of the simulation results suggested that case 2 is the best option, as it reduced the waiting time of trucks considerably compared with the other cases and showed a 50% improvement in the use of internal resources.
20

Antony, Solomon Raj. "Design and evaluation of a consulting system for database design." FIU Digital Commons, 1997. http://digitalcommons.fiu.edu/etd/1293.

Abstract:
Database design is a difficult problem for non-expert designers. It is desirable to assist such designers during the problem-solving process by means of a knowledge-based (KB) system. Although a number of prototype KB systems have been proposed, there are many shortcomings. Firstly, few have incorporated sufficient expertise in modeling relationships, particularly higher-order relationships. Secondly, there does not seem to be any published empirical study that experimentally tested the effectiveness of any of these KB tools. Thirdly, the problem-solving behavior of non-experts, whom the systems were intended to assist, has not been one of the bases for system design. In this project, a consulting system for conceptual database design, called CODA, that addresses the above shortcomings was developed and empirically validated. More specifically, the CODA system incorporates (a) findings on why non-experts commit errors and (b) heuristics for modeling relationships. Two approaches to knowledge base implementation were used and compared in this project, namely system restrictiveness and decisional guidance (Silver 1990). The Restrictive system uses a proscriptive approach and limits the designer's choices at various design phases by forcing him/her to follow a specific design path. The Guidance system approach, which is less restrictive, involves providing context-specific, informative, and suggestive guidance throughout the design process. Both approaches are intended to prevent erroneous design decisions. The main objectives of the study are to evaluate (1) whether the knowledge-based system is more effective than a system without a knowledge base and (2) which approach to knowledge implementation - Restrictive or Guidance - is more effective. To evaluate the effectiveness of the knowledge base itself, the systems were compared with a system that does not incorporate the expertise (Control). An experimental procedure using student subjects was used to test the effectiveness of the systems. The subjects solved a task without using the system (pre-treatment task) and another task using one of the three systems, viz. Control, Guidance, or Restrictive (experimental task). Analysis of the experimental task scores of those subjects who performed satisfactorily in the pre-treatment task revealed that the knowledge-based approach to database design support led to more accurate solutions than the control system. Of the two KB approaches, the Guidance approach was found to lead to better performance when compared to the Control system. It was found that the subjects perceived the Restrictive system to be easier to use than the Guidance system.
21

Ault, William R. "Design and implementation of an operations module for the ARGOS paperless ship system." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/26969.

Abstract:
Approved for public release; distribution is unlimited.
The "paperless" ship is an idea which has been advocated at the highest levels in the Navy. The goal is to eliminate the enormous amount of paper required in the normal operation of a modern naval warship. The ARGOS system under development at the Naval Postgraduate School is a prototype solution which uses HyperCard/HyperTalk for prototype development. The operations functional area, including sections for training, scheduling, message generation, and publication management, is an important part of this development.
http://archive.org/details/designimplementa00ault
22

Schuh, Stefan. "Understanding fundamental database operations on modern hardware." Advisor: Jens Dittrich. Saarbrücken: Saarländische Universitäts- und Landesbibliothek, 2015. http://d-nb.info/1080672966/34.

23

Gaudon, Melanie E. "Extensions to Aldat to support distributed database operations with no global scheme." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66145.

24

Miller, Nathan D. "Adapting the Skyline Operator in the NetFPGA Platform." Youngstown State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1369586333.

25

Raymond, Jonathan D. "Determining the number of reenlistments necessary to satisfy future force requirements." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FRaymond.pdf.

Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, September 2006.
Thesis Advisor(s): Ronald D. Fricker. "September 2006." Includes bibliographical references (p. 37). Also available in print.
26

Davis, Robert M. "Web-enabled database application for Marine Aviation Logistics Squadrons an operations and sustainment prototype." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FDavis.pdf.

Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, September 2006.
Thesis Advisor(s): Magdi N. Kamel. "September 2006." Includes bibliographical references (p. 91-92). Also available in print.
27

Howard, Stephen P. "Special Operations Forces and Unmanned Aerial Vehicles: Sooner or Later?" Maxwell AFB, Ala.: Air University Research Coordinator Office, 1998. http://www.au.af.mil/au/database/research/ay1995/saas/howardsp.htm.

Abstract:
Thesis (M.M.A.S.)--School of Advanced Airpower Studies, 1995.
Subject: An analysis of whether Special Operations Forces should use Unmanned Aerial Vehicles to support intelligence, surveillance, reconnaissance, communications and re-supply capability deficiencies. Cover page date: June 1995. Vita. Includes bibliographical references.
28

Jones, Eric Scott. "Hastening Write Operations on Read-Optimized Out-of-Core Column-Store Databases Utilizing Timestamped Binary Association Tables." Youngstown State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1433969530.

29

Ferreira, Toni (Toni Jolene). "A design methodology for the user interface of an electromechanical parts database." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/81720.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and, (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; in conjunction with the Leaders for Global Operations Program at MIT, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 44).
In an increasingly complex supply chain, the use of a structured methodology for locating applicable existing parts during the design process can help a large-volume manufacturer to encourage the reuse of components already in inventory, rather than source new ones. This reuse can dramatically reduce the speed at which the database grows in complexity and can prevent unnecessary escalation of inventory levels. It can also serve to increase the order volume of a smaller number of electromechanical components and reduce the cost and delivery time of new products in development. The use of an internal search tool to facilitate the design process will also encourage engineers to make design decisions that benefit the larger organization. This thesis proposes a design methodology for a web-based search tool aimed at reducing unnecessary new part creation in a component database. Included is a proposed set of features to be implemented in the software tool to assist engineers in locating, reviewing and utilizing relevant existing parts quickly, as well as suggestions for integrating this tool into the standard engineering workflow. The goal will be to encourage the reuse of parts in inventory and prevent unjustified proliferation in the database.
by Toni J. Ferreira.
M.B.A.
S.M.
30

Alm, Robert, and Lavdim Imeri. "A performance comparison between graph databases: Degree project about the comparison between Neo4j, GraphDB and OrientDB on different operations." Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-22376.

Abstract:
In this research we study the theoretical complexity of Neo4j, OrientDB and GraphDB (three well-known graph databases that can be accessed from a Java program) and how this complexity is manifested in real-life performance. To study their practical performance, a piece of software called a profiler was implemented, capable of profiling (recording the time needed by) each operation and displaying the results in an accurate and organized manner. The technical documentation of the three databases was also reviewed to identify how the databases work and what their strong and weak points are. In the profiling process, the best performance was displayed by Neo4j; while OrientDB failed to deliver, GraphDB takes second place in terms of performance. We identify potential in OrientDB's approach, but its structure is too complex and rigid. Neo4j has a robust structure and an architecture that give it great performance, while the Cypher syntax, which Neo4j uses, minimizes the possibility of human error. GraphDB is optimized for large-scale public-data operations but performs well as a stand-alone solution too.

An important part of this publication is its GitHub Repository

https://github.com/Exarchias/graph-databases-profiler
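The kind of per-operation timing the study's profiler performs can be sketched in a few lines; the names below are illustrative assumptions, and the actual profiler is the one in the GitHub repository linked above.

```python
import time
from collections import defaultdict

# Minimal sketch of a per-operation profiler: wrap each database
# operation, record its wall-clock time, and report per-operation
# statistics. Illustration only; see the linked repository for the
# study's real profiler.
class Profiler:
    def __init__(self):
        self.samples = defaultdict(list)

    def profile(self, name, fn, *args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        self.samples[name].append(time.perf_counter() - start)
        return result

    def report(self):
        for name, times in self.samples.items():
            print(f"{name}: n={len(times)} avg={sum(times)/len(times):.6f}s")

prof = Profiler()
prof.profile("insert-node", lambda: time.sleep(0.01))  # stand-in operation
prof.report()
```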

31

Tubbs, James O. "Beyond Gunboat Diplomacy: Forceful Applications of Airpower in Peace Enforcement Operations." Maxwell AFB, Ala.: Air University Research Coordinator Office, 1998. http://www.au.af.mil/au/database/research/ay1995/saas/tubbsjo.htm.

Abstract:
Thesis (M.M.A.S.)--School of Advanced Airpower Studies, 1995.
Subject: The application of airpower to peace enforcement operations. Cover page date: June 1995. Vita. Includes bibliographical references.
32

Webster, Linda Carol. "City of Redlands Public Works Department: Call log database study." CSUSB ScholarWorks, 1998. https://scholarworks.lib.csusb.edu/etd-project/1501.

33

Afzali, Seyed Hamidreza. "Consistent Range-Queries in Distributed Key-Value Stores: Providing Consistent Range-Query Operations for the CATS NoSQL Database." Thesis, KTH, Programvaru- och datorsystem, SCS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-121278.

Abstract:
Big Data is data that is too large for storage in traditional relational databases. Recently, NoSQL databases have emerged as a suitable platform for the storage of Big Data. Most of them, such as Dynamo, HBase, and Cassandra, sacrifice consistency for scalability: they provide eventual data consistency guarantees, which can make the application logic complicated for developers. In this master thesis project we use CATS, a scalable and partition-tolerant key-value store offering strong data consistency guarantees, meaning that the value read is, in some sense, the latest value written. We have designed and evaluated a lightweight range-query mechanism for CATS that provides strong consistency for all returned data items. Our solution reuses the mechanism already available in CATS for data consistency. Using this solution, CATS can guarantee strong data consistency for both lookup queries and range queries. This enables us to build new classes of applications using CATS. Our range-query solution has been used to build a high-level data model, which supports secondary indexes, on top of CATS.
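The scatter-gather shape of a range query over a partitioned key-value store can be sketched as follows; the partition layout and read path are our own illustrative assumptions, not the CATS API or the thesis's consistency mechanism.

```python
# Sketch of a range query over a partitioned key-value store: contact
# every partition whose key interval overlaps the requested range, read
# the qualifying items from each, and merge into one ordered result.
# Partition layout and read path are illustrative assumptions only.
def range_query(partitions, lo, hi):
    results = []
    for (start, end), store in partitions:       # partition key intervals
        if start < hi and end > lo:              # overlaps [lo, hi)
            results.extend((k, v) for k, v in store.items() if lo <= k < hi)
    return sorted(results)                       # globally ordered result

partitions = [((0, 100), {5: "a", 42: "b"}),
              ((100, 200), {150: "c"})]
print(range_query(partitions, 40, 160))   # [(42, 'b'), (150, 'c')]
```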
34

Kannan, Govindan. "A Methodology for the Development of a Production Experience Database for Earthmoving Operations Using Automated Data Collection." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/28065.

Abstract:
Automated data acquisition has revolutionized the reliability of product design in recent years. A noteworthy example is the improvement in the design of aircraft through field data. This research proposes a similar improvement in the reliability of process design for earthmoving operations through automated field data acquisition. The segment of earthmoving operations addressed in this research is the truck-loader operation. Therefore, the applicability of this research extends to other industries involving truck operations, such as mining, agriculture, and forest logging, and it is closely related to wheel-based earthmoving operations such as scrapers. The context of this research is defined by the data collection needed to increase the validity of results obtained from analysis tools such as simulation, performance measures, graphical representation of the variance in an activity's performance, and the relation between operating conditions and that variance. Automated cycle-time data collection is facilitated by instrumented trucks, and the collection of information on operating conditions is facilitated by an image database and paper forms. The cycle-time data and the information on operating conditions are linked together to form the experience database. This research developed methods to extract, quantify, and understand the variation in each component of the earthmoving cycle, namely load, haul and return, and dump. For the load activity, the simultaneous variation in payload and load time is illustrated through the development of a PLT (PayLoad Time) Map. Among the operating conditions, material type, load area floor, space constraints, and shift are investigated. A dynamic normalization process that determines the ratio of actual travel time to expected travel time is developed for the haul and return activities. The length of the haul road, the sequence of gear downshifts, and the shift are investigated for their effect on travel time. The discussion of the dump activity is presented in qualitative form due to the lack of data. Each component is integrated within the framework of the experience database. The implementation aspects of developing and using the experience database are also described in detail. The practical relevance of this study is highlighted using an example.
Ph. D.
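The travel-time normalization mentioned above can be illustrated with a toy expected-time model; the model below (distance divided by an average speed) is our own assumption for exposition, not the dynamic normalization developed in the dissertation.

```python
# Sketch of travel-time normalization: divide the observed haul (or
# return) time by an expected time for that haul road, so cycles on
# different roads become comparable. The expected-time model here is an
# illustrative assumption, not the dissertation's model.
def normalized_travel_time(actual_s, haul_length_m, avg_speed_mps):
    expected_s = haul_length_m / avg_speed_mps
    return actual_s / expected_s      # 1.0 = as expected, >1.0 = slower

print(normalized_travel_time(actual_s=420, haul_length_m=3000,
                             avg_speed_mps=8.0))   # 1.12: 12% slower
```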
35

Walker, Daniel R. "The Organization and Training of Joint Task Forces." Maxwell AFB, Ala. : Air University Research Coordinator Office, 1998. http://www.au.af.mil/au/database/research/ay1995/saas/walkerdr.htm.

Abstract:
Thesis (M.M.A.S.)--School of Advanced Airpower Studies, 1995.
Subject: Examines the organization, training, doctrine, and experience of joint task forces within each of the five geographically tasked unified commands. Cover page date: June 1995. Vita. Includes bibliographical references.
36

Powelson, Steven E. "An Examination of Small Businesses' Propensity to Adopt Cloud-Computing Innovation." ScholarWorks, 2011. https://scholarworks.waldenu.edu/dissertations/990.

Abstract:
The problem researched was small business leaders' early and limited adoption of cloud computing. Business leaders who do not use cloud computing may forfeit the benefits of its lower capital costs and ubiquitous accessibility. Anchored in diffusion of innovation theory, the purpose of this quantitative cross-sectional survey study was to examine whether there is a relationship between small business leaders' view of the cloud-computing attributes of compatibility, complexity, observability, relative advantage, results demonstrable, trialability, and voluntariness and their intent to use cloud computing. The central research question involved understanding the extent to which each cloud-computing attribute relates to small business leaders' intent to use cloud computing. A sample of 3,897 small business leaders was selected from a commerce authority e-mail list, yielding 151 completed surveys that were analyzed using regression. Significant correlations were found between the independent variables of compatibility, complexity, observability, relative advantage, and results demonstrable and the dependent variable, intent to use cloud computing. However, no significant correlation was found between the independent variable voluntariness and intent to use. The findings might provide new insights relating to cloud-computing deployment and commercialization strategies for small business leaders. Implications for positive social change include the need to prepare workers affected by cloud-computing adoption with new skills, as well as the cloud-computing ecosystem's reduced environmental consequences and related policies.
37

Mobley, Frederick Leonard. "Behavioral Operations Management in Federal Governance." ScholarWorks, 2015. https://scholarworks.waldenu.edu/dissertations/1570.

Abstract:
The environmental uncertainty of federal politics and acquisition outsourcing in competitive markets requires an adaptive decision-analysis structure. Practitioners oriented toward exclusively static methods face severe challenges in understanding the qualitative aspects of organizational governance. The purpose of this grounded theory study was to examine and understand behavioral relationship attributes within intuitive, choice, judgment, or preference decision-making processes. The problem addressed in this study was the detrimental effects of organizational citizenship behavior (OCB), compulsory citizenship behavior (CCB), and social exchange theory (SET) on the acquisition management relationship. OCB, CCB, and SET dictate that sound business development, relationship acumen, emotional intelligence, and perceptiveness transcend pure numerical quantification. The exhibition of relationship-based attributes influences and drives long-term contractual relationships and the sustainability of business organizations. The data collected included historical data and survey responses. Approximately 34,000 acquisition professionals comprised the population sampling frame. The study sample consisted of 378 survey responses, which yielded 294 qualifying respondents and 94 disqualifications, producing a 78% response rate. The Carnegie Mellon behavioral survey guidelines underpinned questionnaire construction and the affirmation of themes. Strauss and Corbin's grounded theory and theme generation addressed behavioral decision making under the additive model and informed the development of an organizational social operations and business framework that accounts for intuitive judgment. The study may contribute to positive social change by orienting managers toward behavioral decision making, ensuring responsiveness to the public and federal governance.
38

Tinnefeld, Christian. "Building a columnar database on shared main memory-based storage: database operator placement in a shared main memory-based storage system that supports data access and code execution." Advisor: Hasso Plattner. Potsdam: Universität Potsdam, 2014. http://d-nb.info/1218398442/34.

39

Ahciarliu, Cantemir M. "Multi-agent architecture for integrating remote databases and expert sources with situational awareness tools: humanitarian operations scenario." Thesis, Monterey, Calif.: Springfield, Va.: Naval Postgraduate School; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FAhciarliu.pdf.

Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, March 2004.
Thesis advisor(s): Alex Bordetsky, Glenn Cook. Includes bibliographical references (p. 77-79). Also available online.
40

Can, Yuksel. "Design, implementation, and analysis of the Personnel, Operations, Equipment, and Training (POET) database and application program for the Turkish Navy Frigate." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA378657.

Abstract:
Thesis (M.S. in Computer Science and M.S. in Systems Management)--Naval Postgraduate School, March 2000.
Thesis advisor(s): Wu, C. Thomas; Edwards, Lee. "March 2000." Includes bibliographical references (p. 429-430). Also available online.
41

Gualtier, Kenneth. "Information Operations Under International Law: A Delphi Study Into the Legal Standing of Cyber Warfare." ScholarWorks, 2015. https://scholarworks.waldenu.edu/dissertations/320.

Full text
Abstract:
The ever-growing interconnectivity of industry and infrastructure through cyberspace has increased their vulnerability to cyber attack. The lack of any formal codification of cyber warfare has led to the development of contradictory state practices and disagreement as to the legal standing of cyber warfare, resulting in an increased risk of damage to property and loss of life. Using just war theory as a foundation, the research questions asked at what point cyber attacks meet the definition of the use of force or armed attack under international law, and what impediments currently exist to the development of legal limitations on cyber warfare. The research design was based on the Delphi technique, engaging 18 scholars in the fields of cyber warfare and international law in 3 rounds of questioning to reach a consensus of opinion. The study employed qualitative content analysis of survey questions during the first round of inquiry in order to create the questions for the 2 subsequent rounds. The first round of inquiry consisted of a questionnaire composed of 9 open-ended questions. These data were inductively coded to identify themes for the subsequent questionnaires, which consisted of 42 questions that allowed the participants to rank their responses on a Likert-type scale and contextualize them with written responses. Participants agreed that a computer attack is comparable to the use of force or armed attack under international law, but fell short of clearly defining the legal boundaries of cyber warfare. This study contributes to social change by providing informed expert opinions about necessary legal reforms and, therefore, provides a basis for greater legal protections for life and property.
APA, Harvard, Vancouver, ISO, and other styles
42

Savino, Sandro. "A solution to the problem of the cartographic generalization of Italian geographical databases at large-medium scales: approach definition, process design and operators implementation." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3421671.

Full text
Abstract:
Over the many years in which the generalization of cartographic data has been studied, considerable progress has been achieved. As some national mapping agencies in Europe and around the world begin to introduce automated processes into their production lines, the original dream of a completely automated generalization system is getting closer, even though it has not been reached yet. The aim of this dissertation is to investigate whether it is possible to design and implement a working generalization process for Italian large-medium scale geographical databases. In this thesis we argue that the models, approaches, and algorithms developed so far provide a robust and sound basis for the problem of automated cartographic generalization, but that building an effective generalization process requires dealing with all the small details that derive from actually implementing the process on defined scales and data models of input and output. We propose that this goal can be reached by capitalizing on the research results achieved so far and customizing the process to the data models and scales treated. This is the approach at the basis of this research work: the design of the cartographic generalization process and the algorithms implemented, whether developed from scratch or derived from previous works, have all been customized to solve a well-defined problem: they expect input data that comply with a consistent data model and are tailored to produce results at a defined scale and data model. This thesis explains how this approach has been put into practice within the framework of the CARGEN project, which aims at developing a complete cartographic process to generalize the Italian medium scale geographical databases at 1:25000 and 1:50000 scale from the official Italian large scale geographical database at 1:5000 scale. This thesis focuses on generalization to the 1:25000 scale, describing the approach adopted and the overall process designed, and provides details on the most important operators implemented for generalization at that scale.
This doctoral thesis addresses the problem of cartographic generalization applied to Italian geographical databases at large and medium scales. The research was carried out within the CARGEN project, a research project between the University of Padua and the Veneto Region, with the collaboration of the IGMI, for the development of an automatic procedure to generalize the IGMI DB25 database at 1:25000 scale starting from the regional GeoDBR database at 1:5000 scale. The thesis covers all the topics related to the generalization process, from model generalization through to the description of the geometry generalization algorithms.
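The abstract does not name the individual geometry operators, so the following is an illustration only: a minimal sketch of one classic line-simplification operator often used in cartographic generalization, the Ramer-Douglas-Peucker algorithm. The function names, tolerance value, and sample polyline are assumptions, not the thesis's actual implementation.

```python
# Minimal sketch of Ramer-Douglas-Peucker line simplification, a classic
# cartographic generalization operator; not the thesis's actual code.
from math import hypot

def _point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:                      # degenerate chord
        return hypot(px - ax, py - ay)
    cross = dx * (py - ay) - dy * (px - ax)
    return abs(cross) / hypot(dx, dy)

def simplify(points, tolerance):
    """Recursively drop vertices closer than `tolerance` to the chord."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord joining the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:                  # whole run is "flat enough"
        return [points[0], points[-1]]
    # Keep the farthest vertex and recurse on both halves.
    left = simplify(points[:index + 1], tolerance)
    right = simplify(points[index:], tolerance)
    return left[:-1] + right

# Example: a noisy polyline collapses to its salient vertices.
line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8), (7, 9)]
print(simplify(line, 1.0))
```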
APA, Harvard, Vancouver, ISO, and other styles
43

Strednansky, Susan E. "Balancing the Trinity : The Fine Art of Conflict Termination." Maxwell AFB, Ala. : Air University Research Coordinator Office, 1998. http://www.au.af.mil/au/database/research/ay1995/saas/strednse.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Paulíček, Martin. "Ladění výkonnosti databází [Database Performance Tuning]." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237021.

Full text
Abstract:
The objective of this thesis was to study the problem of insufficient database processing performance and the possibilities of improving that performance through database configuration file optimization, more powerful hardware, and parallel processing. The master's thesis contains a description of relational databases, storage media, and different forms of parallelism and their use in database systems. It also describes the software developed for testing database performance. The program was used to test several database configuration files, various hardware, different database systems (PostgreSQL, Oracle), and the advantages of the parallel "partitioning" method. Test reports and evaluation results are presented at the end of the thesis.
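As a hedged illustration of the kind of partitioning comparison such test software might run (not the thesis's actual program: the schema, connection string, and use of the psycopg2 driver are assumptions, and the declarative PARTITION BY syntax shown postdates the PostgreSQL versions available in 2011, which used inheritance-based partitioning):

```python
# Minimal sketch: create a range-partitioned PostgreSQL table and time a
# query that benefits from partition pruning. Assumes PostgreSQL 10+ and
# an existing database reachable with the DSN below.
import time
import psycopg2

conn = psycopg2.connect("dbname=benchdb user=bench")  # assumed DSN
cur = conn.cursor()

# Queries filtering on the partition key can skip irrelevant partitions.
cur.execute("""
    CREATE TABLE IF NOT EXISTS measurements (
        ts  timestamptz NOT NULL,
        val double precision
    ) PARTITION BY RANGE (ts);
""")
cur.execute("""
    CREATE TABLE IF NOT EXISTS measurements_2010 PARTITION OF measurements
        FOR VALUES FROM ('2010-01-01') TO ('2011-01-01');
""")
cur.execute("""
    CREATE TABLE IF NOT EXISTS measurements_2011 PARTITION OF measurements
        FOR VALUES FROM ('2011-01-01') TO ('2012-01-01');
""")
conn.commit()

# Time a query that touches only one partition.
start = time.perf_counter()
cur.execute("""
    SELECT avg(val) FROM measurements
    WHERE ts >= '2011-01-01' AND ts < '2011-02-01';
""")
cur.fetchone()
print(f"elapsed: {time.perf_counter() - start:.4f}s")

cur.close()
conn.close()
```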
APA, Harvard, Vancouver, ISO, and other styles
45

Shugg, Charles K. "Planning Airpower Strategies : Enhancing the Capability of Air Component Command Planning." Maxwell AFB, Ala. : Air University Research Coordinator Office, 1998. http://www.au.af.mil/au/database/research/ay1995/saas/shuggck.htm.

Full text
Abstract:
Thesis (M.M.A.S.)--School of Advanced Airpower Studies, 1995.
Subject: This study attempts to determine whether Air Component Commands are capable of developing effective airpower strategy. Cover page date: June 1995. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
46

Crochetiere, Bruce. "Transcending Technological Innovation: The Impact of Acquisitions on Entrepreneurial Technical Organizations." ScholarWorks, 2011. https://scholarworks.waldenu.edu/dissertations/956.

Full text
Abstract:
Technology firms with substantial cash reserves acquire smaller entrepreneurial firms for diversification. In 2006, 3 large firms acquired 28 organizations, with the combined deals exceeding $4.7 billion. The problem addressed in this study is that new start-up companies with innovative ideas may not mature when they are acquired by larger companies and may not fully realize their potential for industry-transcending innovation. This is important because the unsuccessful integration of an acquisition can dismantle innovation and compromise economic inventiveness. Drawing from disruptive innovation theory and the resource-based theory, the purpose of this quasi-experimental study was to examine the impact of the acquisition of smaller innovative entrepreneurial start-ups by larger public technological organizations on patent generation, stock price trend, and stakeholder retention. The research questions in this study were designed to statistically test changes in these key innovation performance factors before and after an acquisition. Historical data on 71 acquisitions by 10 acquiring firms were gathered on the number of patents generated, stock price trends, and stakeholder retention. Paired t tests confirmed that significantly fewer patents and patents per year were generated, and significantly fewer stakeholders were retained, after acquisition. Stock price fluctuation was examined using a cumulative abnormal return categorization approach, which indicated that only 31% of the acquired companies realized gains that reached the a priori threshold of significance. The results of this study could create positive social change through the development of business acquisition strategies that promote innovation, resulting in economic prosperity for the United States.
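As an illustration only of the paired t test the abstract describes (the synthetic patent counts below are assumptions; the study's real data on the 71 acquisitions are not reproduced here):

```python
# Minimal sketch of a paired t test on hypothetical pre/post-acquisition
# patent counts; each acquisition is its own before/after pair.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
patents_before = rng.poisson(12, 71)                       # invented counts
patents_after = np.maximum(patents_before - rng.poisson(4, 71), 0)

t_stat, p_value = stats.ttest_rel(patents_before, patents_after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```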
APA, Harvard, Vancouver, ISO, and other styles
47

Ladeinde, Olurotimi Adeboye. "An Empirical Study on User Acceptance of Simulation Techniques for Business Process." ScholarWorks, 2011. https://scholarworks.waldenu.edu/dissertations/911.

Full text
Abstract:
Nonacceptance of technology may result in serious damage to organizations. For example, nonacceptance of simulation technology cost Merrill Lynch Bank over $50 billion in 2008, while statistics in 2 separate studies showed that nonacceptance of technology was responsible for a 57% decrease in performance level for physicians practicing in public tertiary hospitals in Hong Kong and a 39% decrease in productivity for hotel workers in Seoul, Korea. The problem addressed in this research was nonacceptance of simulation technology by project managers. This research investigated the correlation among personal innovativeness, organizational innovativeness, perceived usefulness, perceived ease of use, and intention to use simulation techniques by members of the Project Management Institute (PMI). The theory of reasoned action (TRA) and the extended technology acceptance model (TAM) served as the theoretical foundations for the study. In this quantitative, correlational survey study, data were obtained from a random sample of the PMI membership. Simple regression analysis was used to address the research questions. Results indicate significant correlations of moderate strength among usefulness, innovativeness, ease of use, and intention to use simulation technology. The study contributes to positive social change by identifying factors that help companies to improve their business processes, generate more profits, create jobs, and make positive contributions to the communities in which they are located.
APA, Harvard, Vancouver, ISO, and other styles
48

Doherty, Michael J. "Using Organizational, Coordination, and Contingency Theories to Examine Project Manager Insights on Agile and Traditional Success Factors for Information Technology Projects." ScholarWorks, 2011. https://scholarworks.waldenu.edu/dissertations/944.

Full text
Abstract:
Two dominant research views addressing disappointing success rates for information technology (IT) projects suggest project success may depend on the presence of a large number of critical success factors or advocate for agile project management as an alternative to traditional practice. However, after two decades of research, success rates remain low, and the role of critical success factors or project management approach remains unclear. The purpose of this study was to use views of experienced project managers to explore the contribution of success factors and management approach to project success. Applying organizational, coordination, and contingency theories, the research questions examined IT project manager perceptions about success factors, how those success factors interrelate, and the role of management approach in project success. A Q methodology mixed method design was used to analyze subjective insights of project managers about the important critical success factors for IT projects. Two critical success factors emerged as important: a sustained commitment from upper management to the project and clear, measurable project goals and objectives. Three composite factors also surfaced representing the importance of people-project interactions, user/client involvement, and traditional project management tasks. The analyses found no broad support for agile project management and could not confirm principles of organizational or coordination theories as critical for project success. However, a contingent relationship might exist between some critical success factors and merits further investigation. Helping the project management community understand IT project success factors could improve project execution and reduce failure rates leading to sizeable savings for project clients.
APA, Harvard, Vancouver, ISO, and other styles
49

Williams, Gloria S. "Entropy in Postmerger and Acquisition Integration from an Information Technology Perspective." ScholarWorks, 2011. https://scholarworks.waldenu.edu/dissertations/1038.

Full text
Abstract:
Mergers and acquisitions have historically experienced failure rates from 50% to more than 80%. Successful integration of information technology (IT) systems can be the difference between postmerger success and failure. The purpose of this phenomenological study was to explore the entropy phenomenon during postmerger IT integration. To that end, a purposive sample of 14 midlevel and first-line managers in a manufacturing environment was interviewed to understand how the negative effects of entropy affect the ultimate success of the IT integration process. Using the theoretical framework of the process school of thought, interview data were iteratively examined by using keywords, phrases, and concepts; coded into groups and themes; and analyzed to yield results. The data indicated that negative entropy factors were associated with the postmerger integration process. Participants' perception of loss emerged as a central theme for employees from both sides of the merger. A majority of the participants perceived entropy in terms of loss similar to the loss of a family member. The findings may contribute to social change by providing a framework for merger integration managers to mitigate the negative effects of entropy and facilitate a successful IT integration outcome. Successful mergers increase shareholder value and customer satisfaction, which strengthen the company's financial condition. A financially stable company will be in a better position to provide a positive contribution to the surrounding community, offer stable employment opportunities, and sponsor corporate social responsibility programs.
APA, Harvard, Vancouver, ISO, and other styles
50

Ward, Terrence L. "Predicting inter -organizational knowledge satisfaction through knowledge conversion and task characteristics in a minority -owned business." ScholarWorks, 2009. https://scholarworks.waldenu.edu/dissertations/696.

Full text
Abstract:
Knowledge management has been extensively studied from the single-organization (intra-organizational) perspective for many years. Although the literature on intra-organizational knowledge is extensive, gaps remain in the literature with regard to knowledge shared by multiple organizations (inter-organizational knowledge). Inter-organizational knowledge satisfaction is gained when the organizations successfully embody the knowledge gained through the cooperation and crystallize that knowledge within the organization. The problem addressed in this study is the lack of a model for predicting inter-organizational knowledge satisfaction from task characteristics and the knowledge conversion process. The purpose of the study was to predict inter-organizational knowledge satisfaction for a contract company. The research question addressed how task characteristics and knowledge conversion can predict inter-organizational knowledge satisfaction. The theoretical frameworks include Nonaka's theory of organizational knowledge creation and Becerra-Fernandez and Sabherwal's theory of task characteristics. The study used a correlational research design with multiple linear regression as the data analysis method. An online questionnaire was administered to all executives, first- and mid-level managers, and professionals. The predictor variables, task characteristics and knowledge conversion, were used to predict inter-organizational knowledge satisfaction (IOKS). The predictor variables accounted for 35.3% of the variance in the IOKS score. This study contributes to social change by helping organizations gain a competitive advantage through developing and implementing both creative and timely knowledge management initiatives to achieve inter-organizational knowledge satisfaction.
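As an illustrative sketch only of the two-predictor multiple linear regression the abstract describes (the synthetic scores and variable names below are assumptions; the study's real instrument and data are not available here):

```python
# Minimal sketch: fit IOKS ~ intercept + task characteristics + knowledge
# conversion on invented data, then report R^2 (the study reports 35.3%
# of variance explained on its real data).
import numpy as np

rng = np.random.default_rng(0)
n = 200                                          # hypothetical respondents

task_characteristics = rng.normal(4.0, 1.0, n)   # Likert-style scores
knowledge_conversion = rng.normal(3.5, 1.0, n)
noise = rng.normal(0.0, 1.0, n)

# Hypothetical relationship used only to generate the toy outcome.
iok_satisfaction = (0.4 * task_characteristics
                    + 0.5 * knowledge_conversion
                    + noise)

# Ordinary least squares via the normal equations solver.
X = np.column_stack([np.ones(n), task_characteristics, knowledge_conversion])
coef, *_ = np.linalg.lstsq(X, iok_satisfaction, rcond=None)

predicted = X @ coef
ss_res = np.sum((iok_satisfaction - predicted) ** 2)
ss_tot = np.sum((iok_satisfaction - iok_satisfaction.mean()) ** 2)
print("coefficients:", coef)
print("R^2:", 1 - ss_res / ss_tot)
```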
APA, Harvard, Vancouver, ISO, and other styles