Dissertations on the topic "Database management"

To see the other types of publications on this topic, follow the link: Database management.

Consult the top 50 dissertations for your research on the topic "Database management".

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read its online annotation, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Alkahtani, Mufleh M. „Modeling relational database management systems“. Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/865955.

Annotation:
Almost all of the database products developed over the past few years are based on what is called the relational approach. The purpose of this thesis is to characterize a relational database management system; we do this by studying the relational model in some depth. The relational model is not static; rather, it has been evolving over time. We trace the evolution of the relational model. We will also consider the ramifications of the relational model for modern database systems.
Department of Computer Science
2

Beyers, Hector Quintus. „Database forensics: Investigating compromised database management systems“. Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/41016.

Annotation:
The use of databases has become an integral part of modern human life. Often the data contained within databases has substantial value to enterprises and individuals. As databases become a greater part of people's daily lives, they become increasingly interlinked with human behaviour. Negative aspects of this behaviour might include criminal activity, negligence and malicious intent. In these scenarios a forensic investigation is required to collect evidence to determine what happened at a crime scene and who is responsible for the crime. A large amount of the available research focuses on digital forensics, database security and databases in general, but little research exists on database forensics as such. It is difficult for a forensic investigator to conduct an investigation on a DBMS due to limited information on the subject and the absence of a standard approach to follow during a forensic investigation. Investigators therefore have to reference disparate sources of information on the topic of database forensics in order to compile a self-invented approach to investigating a database. A subsequent effect of this lack of research is that compromised DBMSs (DBMSs that have been attacked and so behave abnormally) are not considered or understood in the database forensics field. The concept of compromised DBMSs was illustrated in an article by Olivier, who suggested that the ANSI/SPARC model can be used to assist in a forensic investigation on a compromised DBMS. Based on the ANSI/SPARC model, the DBMS is divided into four layers known as the data model, data dictionary, application schema and application data. The extensional nature of the first three layers can influence the application data layer and ultimately manipulate the results produced on that layer. Thus, it becomes problematic to conduct a forensic investigation on a DBMS if the integrity of the extensional layers is in question, since the results on the application data layer cannot then be trusted. In order to recover the integrity of a layer of the DBMS, a clean layer (newly installed layer) could be used, but clean layers are not easy or always possible to configure on a DBMS, depending on the forensic scenario. Therefore a combination of clean and existing layers can be used to conduct a forensic investigation on a DBMS.

PROBLEM STATEMENT: The problem to be addressed is how to construct the appropriate combination of clean and existing layers for a forensic investigation on a compromised DBMS, and how to ensure the integrity of the forensic results.

APPROACH: The study divides the relational DBMS into four abstract layers, illustrates how the layers can be prepared to be in either a found or a clean forensic state, and experimentally combines the prepared layers of the DBMS according to the forensic scenario. The study commences with background on databases, digital forensics and database forensics to give the reader an overview of the existing literature in these fields. It then discusses the four abstract layers of the DBMS and explains how the layers can influence one another. The clean and found environments are introduced because the DBMS differs from technologies where digital forensics has already been researched. The study then discusses each of the extensional abstract layers individually, and how and why an abstract layer can be converted to a clean or found state. A discussion of each extensional layer is required to understand how unique each layer of the DBMS is and how these layers can be combined in a way that enables a forensic investigator to conduct an investigation on a compromised DBMS. It is illustrated that each layer is unique and can be corrupted in various ways; therefore, each layer must be studied individually in a forensic context before all four layers are considered collectively. A forensic study is conducted on each abstract layer of the DBMS that has the potential to influence other layers and so deliver incorrect results. Ultimately, the DBMS is used as a forensic tool to extract evidence from its own encrypted data and data structures. The last chapter therefore illustrates how a forensic investigator can prepare a trustworthy forensic environment in which an investigation can be conducted on an entire PostgreSQL DBMS, by constructing a combination of the appropriate forensic states of the abstract layers.

RESULTS: This study yields an empirically demonstrated approach for dealing with a compromised DBMS during a forensic investigation by making use of a combination of various states of abstract layers in the DBMS. Approaches are suggested for handling a forensic query on the data model, data dictionary and application schema layers of the DBMS, and a forensic process is suggested for preparing the DBMS to extract evidence. The study also advises forensic investigators to consider alternative possibilities for how the DBMS could be attacked; these alternatives might not have been considered during investigations on DBMSs to date. Our methods have been tested on a practical example and have delivered promising results.
Dissertation (MEng)--University of Pretoria, 2013.
Electrical, Electronic and Computer Engineering
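The clean/found layer combinations described in this abstract can be illustrated with a small sketch. The following Python fragment is purely illustrative (all names are invented here, not taken from the dissertation): it models the four abstract layers and checks whether a given combination yields a trustworthy environment for results on the application data layer.

```python
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    CLEAN = "clean"   # newly installed layer, integrity guaranteed
    FOUND = "found"   # taken as-is from the compromised system

# The four abstract layers of the ANSI/SPARC-inspired model used in the thesis.
LAYERS = ("data model", "data dictionary", "application schema", "application data")

@dataclass
class ForensicEnvironment:
    states: dict  # layer name -> State

    def trustworthy(self) -> bool:
        # Results on the application data layer can only be trusted if every
        # extensional layer that influences it is in a known (clean) state.
        return all(self.states[layer] is State.CLEAN for layer in LAYERS[:3])

# Example scenario: a clean data model and data dictionary are reinstalled,
# but the application schema must be examined as found on the compromised DBMS.
env = ForensicEnvironment({
    "data model": State.CLEAN,
    "data dictionary": State.CLEAN,
    "application schema": State.FOUND,
    "application data": State.FOUND,
})
print(env.trustworthy())  # False: the found schema may still skew results
```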
3

McCormack, David. „Risk management database application“. Thesis, Cardiff University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321367.

4

Zhang, Hao. „Querying databases: a tale of two C# approaches“. 2010. http://proquest.umi.com.ps2.villanova.edu/pqdweb?did=2019786971&sid=1&Fmt=2&clientId=3260&RQT=309&VName=PQD.

5

Sun, Jimeng. „Analysis of predictive spatio-temporal queries“. Hong Kong University of Science and Technology, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20SUN.

Annotation:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 62-65). Also available in electronic version. Access restricted to campus users.
6

Fredstam, Marcus, and Gabriel Johansson. „Comparing database management systems with SQLAlchemy: A quantitative study on database management systems“. Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-155648.

Annotation:
Which database management system to use for a project is difficult to know in advance. Luckily, there are tools that help the developer apply the same database design to multiple database management systems without having to change the code. In this thesis, we investigate the strengths of SQLAlchemy, an SQL toolkit for Python. We compared SQLite, PostgreSQL and MySQL using SQLAlchemy, and also compared a pure MySQL implementation against the results from SQLAlchemy. We conclude that, for our database design, PostgreSQL was the best database management system, and that for the average SQL user, SQLAlchemy is an excellent substitute for writing regular SQL.
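As a rough illustration of the engine-swapping this thesis relies on, here is a minimal SQLAlchemy sketch (the model, URLs and row count are invented for illustration; the commented-out URLs assume locally running PostgreSQL/MySQL servers): the same declarative schema is timed against different backends by changing only the connection URL.

```python
import time
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Measurement(Base):
    __tablename__ = "measurement"
    id = Column(Integer, primary_key=True)
    label = Column(String(64))

# The same schema definition can be pointed at different DBMSs simply by
# changing the connection URL; hostnames and credentials are placeholders.
urls = {
    "sqlite": "sqlite:///:memory:",
    # "postgresql": "postgresql+psycopg2://user:pass@localhost/testdb",
    # "mysql": "mysql+pymysql://user:pass@localhost/testdb",
}

for name, url in urls.items():
    engine = create_engine(url)
    Base.metadata.create_all(engine)          # identical DDL on every backend
    start = time.perf_counter()
    with Session(engine) as session:
        session.add_all(Measurement(label=f"row{i}") for i in range(1000))
        session.commit()
    print(f"{name}: {time.perf_counter() - start:.3f}s for 1000 inserts")
```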
7

Lo, Chi Lik Eric. „Test automation for database management systems and database applications“. Zürich: ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17271.

8

Sullivan, Larry. „Performance issues in mid-sized relational database machines“. 1989. http://hdl.handle.net/1850/10445.

9

Khayundi, Peter. „A comparison of open source object-oriented database products“. Thesis, University of Fort Hare, 2009. http://hdl.handle.net/10353/254.

Annotation:
Object-oriented databases have been gaining popularity over the years. Their ease of use and the advantages that they offer over relational databases have made them a popular choice amongst database administrators. Their use in previous years was restricted to business and administrative applications, but improvements in technology and the emergence of new, data-intensive applications have led to an increase in the use of object databases. This study investigates four open-source object-oriented databases on their ability to carry out the standard database operations of storing, querying, updating and deleting database objects. Each of these databases is timed in order to measure which is capable of performing a particular function faster than the others.
10

Bhasker, Bharat. „Query processing in heterogeneous distributed database management systems“. Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/39437.

Annotation:
The goal of this work is to present an advanced query processing algorithm formulated and developed in support of heterogeneous distributed database management systems. Heterogeneous distributed database management systems view the integrated data through a uniform global schema. The query processing algorithm described here produces an inexpensive strategy for a query expressed over the global schema. The research addresses the following aspects of query processing: (1) formulation of a low-level query language to express the fundamental heterogeneous database operations; (2) translation of a query expressed over the global schema to an equivalent query expressed over a conceptual schema; (3) an estimation methodology to derive the intermediate result sizes of the database operations; (4) a query decomposition algorithm to generate an efficient sequence of the basic database operations to answer the query. The first issue was addressed by developing an algebraic query language called cluster algebra. The cluster algebra consists of the following operations: (a) selection, union, intersection and difference, which are extensions of their relational algebraic counterparts to heterogeneous databases; (b) normal-join and normal-projection, which replace their counterparts, join and projection, in the relational algebra; (c) two new operators, embed and unembed, to restructure the database schema. The second issue, query translation, was addressed by developing an algorithm that translates a cluster algebra query expressed over the virtual views to an equivalent cluster algebra query expressed over the conceptual databases. A non-parametric estimation methodology to estimate the result size of a cluster algebra operation was developed to address the third issue. Finally, this research developed a query decomposition algorithm, applicable to relational and non-relational databases, that decomposes a query by computing all profitable semi-join operations, followed by determination of the best sequence of join operations per processing site. The join optimization is performed by formulating a zero-one integer linear program that uses the non-parametric estimation technique to compute the sizes of intermediate results. The query processing algorithm was implemented in the context of DAVID, a heterogeneous distributed database management system.
Ph. D.
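The semi-join reduction at the heart of the decomposition algorithm can be sketched in a few lines. This toy Python example (invented relations; not the cluster algebra itself) shows why shipping only the join keys first can shrink the data transferred between sites:

```python
# Toy semi-join reduction: before shipping the 'orders' relation from a remote
# site, keep only tuples whose join key appears at the local 'customers' site.
customers = [{"cid": 1, "name": "Ada"}, {"cid": 2, "name": "Bob"}]
orders = [{"oid": 10, "cid": 1}, {"oid": 11, "cid": 3}, {"oid": 12, "cid": 2}]

def semi_join(rel, keys, attr):
    # rel semi-joined with `keys` on attribute `attr`
    return [t for t in rel if t[attr] in keys]

# Step 1: project the join keys of the local relation (cheap to transmit).
local_keys = {c["cid"] for c in customers}
# Step 2: reduce the remote relation; only the reduced set is shipped.
shipped = semi_join(orders, local_keys, "cid")
# Step 3: perform the final join locally on the reduced data.
result = [{**c, **o} for o in shipped for c in customers if c["cid"] == o["cid"]]
print(result)  # order 11 never crossed the network
```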
11

Liang, Xing, and Yongyu Lu. „EVALUATION OF DATABASE MANAGEMENT SYSTEMS“. Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-6255.

Annotation:
Qualitative and quantitative analyses of different database management systems (DBMSs) have been performed in order to identify and compare those which address requirements such as public-domain licensing, being free of charge, strong product support, ADO.NET Entity Framework compatibility, good performance and referential integrity, among others. More than 20 existing database management systems were selected as possible candidates. Qualitative analysis reduced that number to 4 candidate DBMSs (PostgreSQL, SQLite, Firebird and MySQL). Quantitative analysis was then used to test the performance of these 4 DBMSs while performing the most common structured query language (SQL) data manipulation statements (INSERT, UPDATE, DELETE and SELECT). Referential integrity and ease of installation were also evaluated. As a result, Firebird emerged as the most suitable DBMS, addressing all desired requirements best.
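A minimal sketch of the quantitative part of such an evaluation, assuming a Python test harness and using SQLite as a stand-in for the candidate DBMSs (table, row count and statements are illustrative):

```python
import sqlite3, time

# Toy version of the quantitative test: time the four core DML statements
# against one DBMS (SQLite here; the thesis also measured PostgreSQL,
# Firebird and MySQL through their respective drivers).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

def timed(label, fn):
    start = time.perf_counter()
    fn()
    conn.commit()
    print(f"{label}: {time.perf_counter() - start:.4f}s")

timed("INSERT", lambda: conn.executemany(
    "INSERT INTO t (val) VALUES (?)", [(f"v{i}",) for i in range(10_000)]))
timed("SELECT", lambda: conn.execute("SELECT COUNT(*) FROM t").fetchone())
timed("UPDATE", lambda: conn.execute("UPDATE t SET val = val || '!'"))
timed("DELETE", lambda: conn.execute("DELETE FROM t WHERE id % 2 = 0"))
```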
12

Peng, Rui. „Live video database management systems“. Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4609.

Annotation:
With the proliferation of inexpensive cameras and the availability of high-speed wired and wireless networks, networks of distributed cameras are becoming an enabling technology for a broad range of interdisciplinary applications in domains such as public safety and security, manufacturing, transportation, and healthcare. Today's live video processing systems on networks of distributed cameras, however, are designed for specific classes of applications. To provide a generic query processing platform for applications of distributed camera networks, we designed and implemented a new class of general purpose database management systems, the live video database management system (LVDBMS). We view networked video cameras as a special class of interconnected storage devices, and allow the user to formulate ad hoc queries over real-time live video feeds. In the first part of this dissertation, an Internet scale framework for sharing and dissemination of general sensor data is presented. This framework provides a platform for general sensor data to be published, searched, shared, and delivered across the Internet. The second part is the design and development of a Live Video Database Management System. LVDBMS allows users to easily focus on events of interest from a multitude of distributed video cameras by posing continuous queries on the live video streams. In the third part, a distributed in-memory database approach is proposed to enhance the LVDBMS with an important capability of tracking objects across cameras.
ID: 029049951; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (Ph.D.)--University of Central Florida, 2010.; Includes bibliographical references (p. 96-101).
Ph.D.
Doctorate
Department of Electrical Engineering and Computer Science
Engineering and Computer Science
13

Aleksic, Mario. „Incremental computation methods in valid and transaction time databases“. Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/8126.

14

Ge, Shen, and 葛屾. „Advanced analysis and join queries in multidimensional spaces“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B49799332.

Annotation:
Multidimensional data are ubiquitous and their efficient management and analysis is a core database research problem. There are lots of previous works focusing on indexing, analyzing and querying multidimensional data. In this dissertation, three challenging advanced analysis and join problems in multidimensional spaces are proposed and studied, providing efficient solutions to their related applications. First, the problem of generalized budget constrained optimization query (Gen-BOQ) is studied. In real life, it is often difficult for manufacturers to create new products dominating their competitors, due to some constraints. These constraints can be modeled by constraint functions, and the problem is then to decide the best possible regions in multidimensional spaces where the features of new products could be placed. Using the number of dominating and dominated objects, the profitability of these regions can be evaluated and the best areas are then returned. Although GenBOQ computation is challenging due to its high complexity, an efficient divide-and-conquer based framework is offered for this problem. In addition, an approximation method is proposed, making tradeoffs between the result quality and the query cost. Next, the efficient evaluation of all top-k queries (ATOPk) in multidimensional spaces is investigated, which compute the top ranked objects for a group of preference functions simultaneously. As an application of such a query, consider an online store, which needs to provide recommendations for a large number of users simultaneously. This problem is somewhat overlooked by past research; in this thesis, batch algorithms are proposed instead of naïvely evaluating top-k queries individually. Similar preferences are grouped together, and two algorithms are proposed, using block indexed nested loops and a view-based thresholding strategy. The optimized view-based threshold algorithm is demonstrated to be consistently the best. Moreover, an all top-k query helps to evaluate other queries relying on the results of multiple top-k queries, such as reverse top-k queries and top-m influential queries proposed in previous works. It is shown that applying the view-based approach to these queries can improve the performance of the current state-of-the-art by orders of magnitude. Finally, the problem of spatio-textual similarity joins (ST-SJOIN) on multidimensional data is considered. Given both spatial and textual information, ST-SJOIN retrieves pairs of objects which are both spatially close and textually similar. One possible application of this query is friendship recommendation, by matching people who not only live nearby but also share common interests. By combining the state-of-the-art strategies of spatial distance joins and set similarity joins, efficient query processing algorithms are proposed, taking both spatial and textual constraints into account. A batch processing strategy is also introduced to boost the performance, which is also effective for the original textual-only joins. Using synthetic and real datasets, it is shown that the proposed techniques outperform the baseline solutions.
Computer Science
Doctoral
Doctor of Philosophy
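The batch idea behind all top-k evaluation described above, scoring every object once for a whole group of preference functions rather than running independent top-k queries, can be sketched as follows (a toy Python example with invented objects and linear preference weights; the thesis's block-indexed and view-based algorithms are more sophisticated):

```python
import heapq

# Toy batch "all top-k": score every object once per preference function in a
# single shared scan, keeping a size-k min-heap per function, instead of
# issuing an independent top-k query for each user.
objects = [(1, (0.9, 0.2)), (2, (0.4, 0.8)), (3, (0.7, 0.7)), (4, (0.1, 0.5))]
preferences = {"user_a": (1.0, 0.0), "user_b": (0.5, 0.5), "user_c": (0.0, 1.0)}
k = 2

heaps = {u: [] for u in preferences}
for oid, attrs in objects:                      # one shared scan of the data
    for user, w in preferences.items():
        score = sum(wi * ai for wi, ai in zip(w, attrs))
        if len(heaps[user]) < k:
            heapq.heappush(heaps[user], (score, oid))
        else:
            heapq.heappushpop(heaps[user], (score, oid))  # keep the k best

for user, heap in heaps.items():
    print(user, sorted(heap, reverse=True))     # top-k objects per preference
```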
15

Vargas, Herring Luis Carlos. „Integrating databases and publish/subscribe“. Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609151.

16

Jakobovits, Rex M. „The Web interfacing repository manager: a framework for developing web-based experiment management systems“. Thesis, 1999. http://hdl.handle.net/1773/7007.

17

Goralwalla, Iqbal A. „Temporality in object database management systems“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ29042.pdf.

18

Huang, Pu. „Prototype virtual database for flood management“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0027/MQ51757.pdf.

19

Karatasios, Labros G. „Software engineering with database management systems“. Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27272.

20

Papastathi, Maria. „Database management system using IDEF methodologies“. Thesis, University of Salford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261919.

21

Proulx, Gisele Marie 1977. „Reconfigurable variation risk management database tool“. Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/89279.

22

Lord, Dale. „Relational Database for Visual Data Management“. International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604893.

Annotation:
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Often it is necessary to retrieve segments of video with certain characteristics, or features, from a large archive of footage. This paper discusses how image processing algorithms can be used to automatically create a relational database, which indexes the video archive. This feature extraction can be performed either upon acquisition or in post-processing. The database can then be queried to quickly locate and recover video segments with certain specified key features.
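A minimal sketch of the kind of feature index the paper describes, assuming a simple two-table schema in SQLite (table and feature names are invented for illustration):

```python
import sqlite3

# Toy index of video segments by extracted features, in the spirit of the
# paper: feature extraction runs elsewhere; here we only store and query it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE segment (
    id      INTEGER PRIMARY KEY,
    file    TEXT,
    start_s REAL,   -- segment start, in seconds
    end_s   REAL
);
CREATE TABLE feature (
    segment_id INTEGER REFERENCES segment(id),
    name       TEXT,  -- e.g. 'motion', 'brightness'
    value      REAL
);
CREATE INDEX idx_feature ON feature(name, value);
""")
conn.execute("INSERT INTO segment VALUES (1, 'flight01.mpg', 12.0, 19.5)")
conn.execute("INSERT INTO feature VALUES (1, 'motion', 0.83)")

# Retrieve all segments whose 'motion' feature exceeds a threshold.
rows = conn.execute("""
    SELECT s.file, s.start_s, s.end_s
    FROM segment s JOIN feature f ON f.segment_id = s.id
    WHERE f.name = 'motion' AND f.value > 0.5
""").fetchall()
print(rows)
```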
23

Bourbonnais, Richard Joseph II. „Visual assessment and relational database management“. Thesis, Virginia Tech, 1994. http://hdl.handle.net/10919/43671.

Annotation:
Protection of the visual environment begins with comprehensive documentation and evaluation of existing conditions, followed by the development of guidelines pertaining to future alterations. This thesis examines existing methods of visual assessment and the needs of the land planner, for the purpose of understanding the necessary components of evaluating the visual environment effectively. The objective has been to develop a new method of visual documentation and evaluation that can be utilized by land planners for the visual assessment of road corridors. In order to achieve this objective, a visual assessment of a significant road corridor in Blacksburg, Virginia has been conducted. The various necessary components have been included in the assessment, and a relational database management program has been used to store all collected data. As a result of this process, it was found that a new method, which borrows from past processes, addresses the needs of the land planner, and utilizes an interactive medium for data storage, is successful in addressing the objective. The new method includes the necessary components, such as qualitative evaluation with adaptive descriptive nomenclature and photographic documentation of the existing corridor. The database has many qualities which are meaningful to land planners. Relational database management programs have the capability of storing text as well as photographs. For land planners to view the various aspects of the corridor, a simple press of the computer mouse button moves them from one aspect to another.
Master of Landscape Architecture
24

Pensomboon, Gridsana. „Landslide Risk Management and Ohio Database“. University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1172782692.

25

Catley, Bruce L. (Bruce Leonard). „Standardization issues in distributed database management“. Carleton University, Information and Systems Science, Ottawa, 1990.

26

Bourbonnais, Richard Joseph. „Visual assessment and relational database management“. 1994. http://scholar.lib.vt.edu/theses/available/etd-07112009-040335/.

27

Soo, Michael Dennis 1962. „Constructing a temporal database management system“. Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/290685.

Annotation:
Temporal database management systems provide integrated support for the storage and retrieval of time-varying information. Despite the extensive research which has been performed in this area over the last fifteen years, no commercial products exist and few viable prototypes have been constructed. It is our thesis that through the use of the proper abstractions, it is possible to construct a temporal database management system with robust semantics, without sacrificing performance, and with minimal implementation cost. Our approach parallels the development of relational database management systems, beginning with a theoretically sound abstract model, and then developing the underlying techniques to efficiently implement it. The major theme underlying this research is practicality, in terms of both semantics and implementation. We will show that expressive temporal semantics can be supported while still maintaining reasonable performance, and with relatively small implementation effort. This is made possible, in part, by minimally extending the relational model to support time, thereby allowing the reuse or easy adaptation of well-established relational technology. In particular, we investigate how relational database design, algebras, architectures, and query evaluation can be adapted or extended to the temporal context. Our aim is that software vendors could incorporate these results into existing non-temporal, commercial products with relatively small effort.
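The tuple-timestamping extension described in this abstract can be sketched compactly. The following toy Python example (relation and attribute names invented here) attaches a valid-time interval to each tuple and implements a point-in-time selection:

```python
from datetime import date

# Toy valid-time relation: each tuple carries the period during which the
# fact was true, following the minimal extension of the relational model
# described above (timestamping tuples with [valid_from, valid_to)).
salary = [
    {"emp": "smith", "amount": 50_000,
     "valid_from": date(1994, 1, 1), "valid_to": date(1995, 6, 1)},
    {"emp": "smith", "amount": 55_000,
     "valid_from": date(1995, 6, 1), "valid_to": date.max},
]

def as_of(rel, instant):
    # Temporal selection: tuples valid at a given point in time.
    return [t for t in rel if t["valid_from"] <= instant < t["valid_to"]]

print(as_of(salary, date(1995, 1, 1)))  # the 50,000 tuple
print(as_of(salary, date(1996, 1, 1)))  # the 55,000 tuple
```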
28

Ibrahim, Karim. „Management of Big Annotations in Relational Database Management Systems“. Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/272.

Annotation:
Annotations play a key role in understanding and describing data, and annotation management has become an integral component of most emerging applications such as scientific databases. Scientists need to exchange not only data but also their thoughts, comments and annotations on the data. Annotations represent comments, lineage of data, descriptions and much more. Therefore, several annotation management techniques have been proposed to handle annotations efficiently and abstractly. However, with the increasing scale of collaboration and the extensive use of annotations among users and scientists, the number and size of the annotations may far exceed the size of the original data itself, and current annotation management techniques do not address large-scale annotation management. In this work, we tackle big annotations from three different perspectives: (1) user-centric annotation propagation, (2) proactive annotation management and (3) InsightNotes summary-based querying. We capture users' preferences in profiles and personalize the annotation propagation at query time by reporting the most relevant annotations (per tuple) for each user based on a time plan; we provide three time-based plans and support static and dynamic profiles for each user. We support proactive annotation management, which suggests data tuples to be annotated when a new annotation references a data value and the user does not annotate the data precisely. Moreover, we extend InsightNotes (summary-based annotation management in relational databases) with a query language that enables the user to query annotation summaries and to add predicates on the annotation summaries themselves. Our system is implemented inside PostgreSQL.
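As a rough sketch of annotation propagation over relational data, assuming a toy SQLite schema (the table names, the author-based preference rule and all data are invented for illustration; the thesis's profile and time-plan machinery is far richer):

```python
import sqlite3

# Toy annotation propagation: annotations reference individual tuples, and a
# query over the data carries the matching annotations along in its result.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE gene (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE note (gene_id INTEGER REFERENCES gene(id),
                   author TEXT, body TEXT);
""")
conn.execute("INSERT INTO gene VALUES (1, 'BRCA1')")
conn.execute("INSERT INTO note VALUES (1, 'alice', 'verified against assay')")
conn.execute("INSERT INTO note VALUES (1, 'bob', 'possible duplicate entry')")

# A user profile could rank or filter the propagated notes per tuple; here
# we stand in for that with a simple per-author preference.
preferred_authors = ("alice",)
placeholders = ",".join("?" * len(preferred_authors))
rows = conn.execute(f"""
    SELECT g.name, n.author, n.body
    FROM gene g JOIN note n ON n.gene_id = g.id
    WHERE n.author IN ({placeholders})
""", preferred_authors).fetchall()
print(rows)  # the result tuple arrives with its most relevant annotation
```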
29

Coats, Sidney M. (Sidney Mark). „The Object-Oriented Database Editor“. Thesis, University of North Texas, 1989. https://digital.library.unt.edu/ark:/67531/metadc500921/.

Annotation:
Because of an interest in object-oriented database systems, designers have created systems to store and manipulate specific sets of abstract data types that belong to the real world environment they represent. Unfortunately, the advantage of these systems is also a disadvantage since no single object-oriented database system can be used for all applications. This paper describes an object-oriented database management system called the Object-oriented Database Editor (ODE) which overcomes this disadvantage by allowing designers to create and execute an object-oriented database that represents any type of environment and then to store it and simulate that environment. As conditions within the environment change, the designer can use ODE to alter that environment without loss of data. ODE provides a flexible environment for the user; it is efficient; and it can run on a personal computer.
30

Kern, Deborah R. „Design and implementation of the acoustic database and acoustic trainer modules for "ARGOS"“. Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA232052.

Annotation:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 1990.
Thesis Advisor(s): Wu, C. Thomas. Second Reader: Lum, Vincent Y. "June 1990." Description based on signature page. DTIC Identifier(s): Software engineering, event driven multimedia database, acoustic database. Author(s) subject terms: Software engineering, event driven multimedia database, acoustic database. Includes bibliographical references (p. 96). Also available online.
31

Nulty, William Glenn. „Geometric searching with spacefilling curves“. Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/24520.

32

Willshire, Mary Jane. „The effects of inheritance on the properties of physical storage models in object oriented databases“. Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/9236.

33

Chan, Francis. „Knowledge management in Naval Sea Systems Command : a structure for performance driven knowledge management initiative“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FChan.pdf.

Annotation:
Thesis (M.S. in Product Development)--Naval Postgraduate School, September 2002.
Thesis advisor(s): Mark E. Nissen, Donald H. Steinbrecher. Includes bibliographical references (p. 113-117). Also available online.
34

Shou, Yutao Sindy, and 壽玉濤. „Efficient query processing for spatial and temporal databases“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29853655.

35

Dalal, Kaushal R. „Database manager for Envision“. Master's thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-04272010-020200/.

36

Huang, Jianyuan. „Computer science graduate project management system“. CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3250.

Annotation:
This project is a development and tracking system for graduate students in the Department of Computer Science of CSUSB. It covers front-end web site development, back-end database design and security. The website provides secure access to information about ideas for projects, the status of ongoing projects, and reports of finished projects, using MySQL and Apache Tomcat.
37

Jansson, Simon, and Theodor Sandström. „Mobile Framework for Real-Time Database Management“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-219905.

Annotation:
The primary purpose of this thesis is to explore what issues may arise during development of a framework for handling and display of streamed real-time data. In addition, it investigates how the display of different types of data, along with a change of execution platform, impacts execution time. Through two case studies, each split into a developmental and an experimental phase, the thesis goes through the development of such a real-time data handling framework. The framework was developed in both stationary and mobile forms, and the developmental issues encountered along each of these paths are highlighted. Afterwards, the results gathered from performance tests run on each framework version were compared, in order to ascertain whether the handling and display of different data types, along with a change in execution platform, had an impact upon the framework's execution time. The developmental observations revealed that the most commonly encountered issues were those relating to program latency, commonly due to sub-optimal program architecture along with connectivity issues encountered during data streaming. The second most encountered issue regarded the choice of an appropriate display method to communicate changes in the displayed data, along with correlation between several tracked data points. The experimental comparisons revealed that while the impact on execution time caused by the use of calculated data, as opposed to raw data values, was marginal at most, a change of execution platform impacted said time drastically. By porting the framework to the mobile platform, the different processes whose execution time was measured during the tests experienced an increase in execution time ranging from 2405% all the way to 15860%. The authors recommend that the framework be developed towards gaining the ability to connect to any given relational database, and to handle and display the data therein, so that it has application areas other than as a test instrument. Further, the authors recommend that additional tests be run on the framework using a wider variety of stationary and mobile devices, in order to determine whether the conclusions drawn from the results in the thesis hold up in the face of greater hardware variety.
38

Karlsson, Jonas S. „Scalable distributed data structures for database management“. Amsterdam: Universiteit van Amsterdam, 2000. http://dare.uva.nl/document/57022.

39

Benatar, Gil. „Thermal/structural integration through relational database management“. Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/19484.

40

Zou, Beibei 1974. „Data mining with relational database management systems“. Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82456.

Annotation:
With the increasing demands of transforming raw data into information and knowledge, data mining has become an important field for the discovery of useful information and hidden patterns in huge datasets. Both machine learning and database research have made major contributions to the field of data mining. However, little effort has been made to improve the scalability of algorithms applied in data mining tasks. Scalability is crucial for data mining algorithms, since they often have to handle large datasets. In this thesis we take a step in this direction by extending a popular machine learning software package, Weka3.4, to handle large datasets that cannot fit into main memory by relying on relational database technology. Weka3.4-DB is implemented to store data in and access data from DB2, generally with a loose coupling approach. Additionally, a semi-tight coupling is applied to optimize the data manipulation methods by implementing core functionalities within the database. Based on the DB2 storage implementation, Weka3.4-DB achieves better scalability, while still providing a general interface for developers to implement new algorithms without the need for database or SQL knowledge.
41

Ma, Xuesong 1975. „Data mining using relational database management system“. Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98757.

Annotation:
With the wide availability of huge amounts of data and the imminent demand to transform raw data into useful information and knowledge, data mining has become an important research field both in the database and the machine learning areas. Data mining is defined as the process of solving problems by analyzing data already present in the database and discovering knowledge in the data. Database systems provide efficient data storage, fast access structures and a wide variety of indexing methods to speed up data retrieval. Machine learning provides the theoretical support for most of the popular data mining algorithms. Weka-DB combines properties of these two areas to improve the scalability of Weka, an open-source machine learning software package. Weka implements most of its machine learning algorithms using main-memory-based data structures, so it cannot handle large datasets that do not fit into main memory. Weka-DB is implemented to store data in and access data from DB2, so it achieves better scalability than Weka. However, Weka-DB is much slower than Weka, because secondary storage access is more expensive than main memory access. In this thesis we extend Weka-DB with a buffer management component to improve its performance. Furthermore, we increase the scalability of Weka-DB even further by moving additional data structures into the database, accessed through a buffer. Finally, we explore another method to improve the speed of the algorithms, which takes advantage of the data access properties of machine learning algorithms.
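A minimal sketch of the buffer-management idea, assuming a plain LRU policy (the thesis's actual component and its interaction with DB2 are not reproduced here):

```python
from collections import OrderedDict

# Toy LRU buffer in the spirit of the Weka-DB extension: instances live in
# the database, and a fixed-size main-memory buffer absorbs repeated accesses.
class LRUBuffer:
    def __init__(self, capacity, fetch):
        self.capacity = capacity
        self.fetch = fetch          # function: row id -> row (hits the DBMS)
        self.cache = OrderedDict()

    def get(self, rid):
        if rid in self.cache:
            self.cache.move_to_end(rid)     # mark as most recently used
            return self.cache[rid]
        row = self.fetch(rid)               # buffer miss: go to the database
        self.cache[rid] = row
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used
        return row

db = {i: f"instance-{i}" for i in range(100)}   # stand-in for DB2 storage
buf = LRUBuffer(capacity=10, fetch=db.__getitem__)
for rid in [1, 2, 1, 3, 1, 2]:                  # ML algorithms re-read rows
    buf.get(rid)
print(list(buf.cache))  # 1 and 2 stay hot; cold entries would be evicted
```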
42

Dempster, Euan W. „Performance prediction for parallel database management systems“. Thesis, Heriot-Watt University, 2004. http://hdl.handle.net/10399/341.

43

Tavares, Julio Alcantara. „Database buffer management strategies for asymmetric media“. Universidade de Fortaleza, 2015. http://dspace.unifor.br/handle/tede/96251.

Annotation:
For decades, hard disk drives (HDDs) have dominated data storage for large database systems. HDDs may be considered a symmetric medium because there is no difference between the time (and cost) for reading and writing data. As a counterpart, in recent years a whole new class of storage media has arisen, whose main feature is to have no mechanical parts and, more importantly, to be asymmetric. In an asymmetric medium, reading data is faster than writing it. Depending on manufacturing, this asymmetry may reach a factor of 10 or even higher. Asymmetric storage impacts the most important database management system components, most notably the buffer manager. In this research, we propose database buffer replacement algorithms for asymmetric media. They try to take advantage of asymmetric media by keeping written (dirty) pages in main memory, postponing their write-back to the medium, while also adapting to achieve good hit ratios. Keywords: Databases, Storage Class Memory, Buffer Management Policies.
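The core idea of the entry above, keeping dirty pages in memory longer because writes are expensive, can be sketched as a clean-first variant of LRU (a toy Python example in the spirit of clean-first policies such as CFLRU; the thesis's adaptive algorithms are not reproduced here):

```python
from collections import OrderedDict

# Toy clean-first replacement for asymmetric media: prefer evicting clean
# pages (cheap: no write-back) and keep dirty pages in memory longer,
# postponing the expensive write to the storage device.
class CleanFirstLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()          # page id -> dirty flag, LRU order

    def access(self, pid, write=False):
        dirty = self.pages.pop(pid, False) or write
        self.pages[pid] = dirty             # (re)insert as most recently used
        if len(self.pages) > self.capacity:
            self.evict()

    def evict(self):
        # Scan from the LRU end; evict the first clean page if one exists.
        for pid, dirty in self.pages.items():
            if not dirty:
                del self.pages[pid]
                return
        victim = next(iter(self.pages))     # all dirty: pay one write-back
        print(f"write-back page {victim}")
        del self.pages[victim]

buf = CleanFirstLRU(capacity=3)
for pid, w in [(1, True), (2, False), (3, True), (4, False)]:
    buf.access(pid, write=w)
print(list(buf.pages.items()))  # dirty pages 1 and 3 survived eviction
```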
44

Wilson, Amanda Janice. „Database Marketing Management Strategies for Agricultural Lenders“. Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36734.

Annotation:
This study examines the use of databases to improve marketing techniques and customer segmentation in lending institutions. Specifically, it examines the use of products and services by agricultural customers, and then determines the relationship between the use of those products and services and farm business characteristics. Information was also obtained on the interest rate sensitivity of the producers and correlated with farm business characteristics. The importance of technology, strategic alliances and other influences in the decision-making process was determined after survey analysis. The survey was sent to producers who had some type of loan. Respondents used an average of 3.2 loan products and 7.6 services, for a total of 10.8 loans and services. Only 1 percent of the respondents indicated that they did not have a personal checking account. Twelve percent indicated that they did not use a credit card. Only 16 percent indicated that they used leasing services. Investment products did not have a high percentage of use: 33 percent indicated they were using certificates of deposit, while only 21 percent indicated the use of money market funds, 30 percent the use of mutual funds, and 37 percent the use of IRAs. However, most of the respondents were using some form of insurance. Three-fourths of the respondents were using life insurance, while only 21 percent indicated that they did not possess disability insurance. Other services were also analyzed in this study. Only 15 percent of the respondents indicated that they were utilizing estate planning services, despite 67 percent of respondents being older than 41 and 58 percent having more than $500,000 in assets. Seventeen percent of the respondents were using an appraisal service. Due to the lower levels of usage for the investment products, this study focused on the relationship between farm characteristics and the investment products. This study showed that a relationship existed between farm and non-farm income and IRA usage. Only farm income had a relationship with money market fund usage and mutual fund usage, while the use of estate plans was related to asset level. The analysis of interest rate sensitivity was determined by the amount an interest rate would have to decrease for a producer to switch lending institutions. The producers who were found to be less interest rate sensitive were those who had lower farm and non-farm incomes, lower asset levels, lower education levels, a higher debt-to-asset ratio, and those who owned a computer. This implies that these are the more loyal customers of an institution, or perhaps that these producers have fewer opportunities to switch institutions. Producers in this study indicated that when selecting a lender/service provider, a competitive interest rate (76 percent of respondents) and the institution being a dependable source of credit (75 percent) were important. Knowledge of agriculture was also very important (69 percent of respondents). Internet banking and educational seminars rated as the characteristics that were least important, at 3 percent and 9 percent, respectively. However, in the decision-making process, lenders (69 percent of respondents), accountants (53 percent) and veterinarians (38 percent) were shown to be very important. The spouse/partner also has considerable influence on decision making: sixty-seven percent of the respondents indicated that the spouse/partner had a considerable influence on investment decisions, while sixty-one percent indicated a considerable influence on credit decisions. Five specific recommendations were made to the institutions following this study. These recommendations include: use of technology, institutional use of databases, use of influencers, and targeting and segmenting the marketplace.
Master of Science
45

Helmer, Sven. „Performance enhancements for advanced database management systems“. 2000. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB8952361.

46

Hall, Neil Scott. „Impact of data modeling and database implementation methods on the optimization of conceptual aircraft design“. Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/16847.

47

Mohamad, Baraa. „Medical Data Management on the cloud“. Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22582.

Annotation:
Medical data management has become a real challenge due to the emergence of new imaging technologies providing high image resolutions. This thesis focuses in particular on the management of DICOM files. DICOM is one of the most important medical standards. DICOM files have a special data format in which one file may contain regular data, multimedia data and services. These files are extremely heterogeneous (the schema of a file cannot be predicted) and have large data sizes. The characteristics of DICOM files, added to the requirements of medical data management in general (in terms of availability and accessibility), have led us to construct our research question as follows: is it possible to build a system that (1) is highly available, (2) supports any medical images (different specialties, modalities and physicians' practices), (3) can store extremely huge and ever-increasing data, (4) provides expressive access and (5) is cost-effective? In order to answer this question we have built a hybrid (row-column) cloud-enabled storage system. The idea of this solution is to disperse DICOM attributes thoughtfully, depending on their characteristics, over both data layouts, in a way that provides the best of the row-oriented and column-oriented storage models in one system, while exploiting the features of the cloud that enable us to ensure the availability and portability of medical data. Storing data in such a hybrid layout opens the door to a second research question: how to process queries efficiently over this hybrid data storage while enabling new and more efficient query plans. The originality of our proposal comes from the fact that there is currently no system that stores data in such a hybrid fashion (i.e., an attribute resides either in a row-oriented database or in a column-oriented one, and a given query may interrogate both storage models at the same time) and studies query processing over it. The experimental prototypes implemented in this thesis show interesting results and open the door to multiple optimizations and research questions.
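The attribute dispersion over a hybrid row/column layout can be sketched as follows (a toy Python example; the placement rule, attribute names and in-memory stores are invented for illustration, whereas the thesis derives placement from attribute characteristics):

```python
# Toy dispersion of heterogeneous DICOM-like attributes over a hybrid layout:
# frequently queried, widely shared attributes go to a column store; rare,
# file-specific attributes go to a row (key-value) store.
row_store = {}       # file id -> dict of rare attributes
column_store = {}    # attribute name -> list of (file id, value)

COMMON = {"PatientID", "Modality", "StudyDate"}  # assumed hot attributes

def insert(file_id, attributes):
    for name, value in attributes.items():
        if name in COMMON:
            column_store.setdefault(name, []).append((file_id, value))
        else:
            row_store.setdefault(file_id, {})[name] = value

insert("f1", {"PatientID": "P-17", "Modality": "MR", "SliceThickness": 1.2})
insert("f2", {"PatientID": "P-17", "Modality": "CT"})

# A query over a hot attribute scans one column; the rest of a matching
# file's (unpredictable) schema is fetched from the row store.
matches = [f for f, v in column_store["Modality"] if v == "MR"]
print(matches, [row_store.get(f, {}) for f in matches])
```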
48

Jermaine, Christopher. „Approximate answering of aggregate queries in relational databases“. Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/9221.

49

Hardock, Sergej. „Flash-aware Database Management Systems“. PhD thesis, 2020. https://tuprints.ulb.tu-darmstadt.de/14476/1/20200730-Flash-aware-Database-Management-Systems.pdf.

Annotation:
Flash SSDs are becoming the primary storage technology for single servers and large data centers. In contrast to conventional magnetic disks, which dominated the storage market for more than 40 years, Flash offers significantly more performance, consumes less energy and has a lower cost per IOPS (I/O operations per second). Besides these advantages, an important role in the establishment and quick proliferation of Flash storage was played by the black-box design of SSDs, which guaranteed their backwards compatibility with traditional hard disk drives. This makes the replacement of HDDs seamless, as the software stack, including the application, does not require any adjustment. However, such a design has multiple disadvantages, which become especially critical for database management systems. The backwards compatibility of SSDs is encapsulated in the so-called Flash translation layer (FTL). The FTL is a set of Flash management tasks that typically run on the device and mask the native behavior of Flash memory. In other words, the FTL creates a black box over Flash memory and emulates the behavior of HDDs. The fact that the database system has no knowledge of the FTL, and no control over the physical data placement on Flash, results in high I/O overhead, caused by suboptimal realization of Flash management tasks and functional redundancy along the critical I/O path. Thus, the write amplification of conventional SSDs used in the traditional 'cooked' storage architecture (i.e., with file system indirection) can be as high as 15x: a single 4KB write request submitted by the DBMS can turn into 60KB being physically written on Flash storage. As a result, the effective I/O throughput and longevity expectations of SSDs are significantly lower than those of the Flash memory encapsulated in them. In this work we describe our approach, the NoFTL storage architecture, which aims to solve the aforementioned disadvantages of modern Flash SSDs. The basic idea behind NoFTL is to give full control over the underlying Flash storage to the database management system, which in turn assumes the elimination of all intermediate abstraction layers (file system, block device layer and FTL) between the DBMS and physical storage. NoFTL consists of three main elements: (i) a native Flash interface; (ii) integration of Flash management into subsystems of the DBMS; and (iii) the concept of configurable Flash storage. Their interplay allows us to realize the full performance potential of Flash memory. The native Flash interface allows the DBMS to control physical data placement on Flash storage, and to utilize the computational power of the SSD to perform near-data processing. Integration of typical Flash management tasks (address translation, garbage collection and wear leveling) into different subsystems of the DBMS leads to an optimization of these tasks and of native DBMS algorithms. The concept of configurable Flash storage is a unique approach to organizing and managing data on Flash SSDs. With the help of novel storage abstractions, the database system can perform intelligent data placement by clustering objects into different regions. Moreover, for each such region the DBMS can apply a separate set of Flash management algorithms, optimal for the data assigned to that region.
All this reduces the write amplification of SSDs to a minimum (up to a 15x reduction for OLTP workloads), improves the overall system performance, and significantly increases the lifetime of Flash SSDs (up to a 30x improvement). We have realized the NoFTL prototype on an open-source database engine and evaluated it under various scenarios and on different testbeds.
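Write amplification, the ratio the abstract quotes (up to 15x), is straightforward to compute. A small illustrative Python sketch (the 60KB figure follows the example in the abstract; the integrated-stack figure is hypothetical):

```python
# Write amplification (WA) relates the bytes the DBMS asks to write to the
# bytes physically written on Flash. Figures below are illustrative, not
# measurements from the dissertation.
def write_amplification(logical_bytes, physical_bytes):
    return physical_bytes / logical_bytes

logical = 4 * 1024                      # one 4 KB page write from the DBMS
cooked_stack = write_amplification(logical, 60 * 1024)   # ~15x, as cited
noftl_like = write_amplification(logical, 6 * 1024)      # hypothetical
print(f"cooked stack: {cooked_stack:.0f}x, integrated stack: {noftl_like:.1f}x")
```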
50

Nassis, Antonios. „Object oriented database management systems“. Diss., 1995. http://hdl.handle.net/10500/15581.

Annotation:
Modern data-intensive applications, such as multimedia systems, require the ability to store and manipulate complex data. Classical database management systems (DBMSs), such as relational databases, cannot support these types of applications efficiently. This dissertation presents the salient features of object database management systems (ODBMSs) and persistent programming languages (PPLs), which have been developed to address the data management needs of these difficult applications. An 'impedance mismatch' problem occurs in traditional DBMSs because the data and computational aspects of the application are implemented using two different systems: a query language and a programming language. PPLs provide facilities to cater for both persistent and transient data within the same language, hence avoiding the impedance mismatch problem. This dissertation presents a method of implementing a PPL by extending the language C++ with pre-compiled classes. The classes are first developed and then used to implement object persistence in two simple applications.
Computing
M. Sc. (Information Systems)
