
Dissertations / Theses on the topic 'Driving database'



Consult the top 50 dissertations / theses for your research on the topic 'Driving database.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Rasheed, Yasser. "A database solution for scientific data from driving simulator studies." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70435.

Abstract:
Many research institutes produce huge amounts of data. As the saying goes, "we are drowning in data, but starving for information." This is particularly true of scientific data. Being able to search data from different experiments is increasingly needed and advantageous, in order to look for differences and similarities among them and thus conduct meta-studies. A meta-study is a method that takes data from different independent studies and integrates them using statistical analysis. If data are well described and data access is flexible, it is possible to establish unexpected relationships among the data. Flexible access also supports the re-use of data from studies that have already been conducted, which saves time, money, and resources. In this thesis, we explore ways to store data from experiments and to make cross-experiment searches more efficient. The main aim of this thesis work is to propose a database solution for storing time-series data generated by different simulators and to investigate the feasibility of using it with ICAT, a metadata system used for searching and browsing scientific data. The thesis was completed in two steps: the first proposes an efficient database solution for storing time-series data; the second investigates the feasibility of using ICAT together with the proposed database solution. We found that it is feasible to use ICAT as a metadata system for scientific studies. Since it is free and open source, it can be linked to any system and customized as needed.
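To make the storage idea concrete, here is a minimal sketch of a relational layout for simulator time-series data, using SQLite for illustration; the table and column names are hypothetical, not taken from the thesis:

```python
import sqlite3

# Minimal sketch of a relational layout for simulator time-series data.
# Table and column names are illustrative, not the thesis's actual design.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE experiment (
    id          INTEGER PRIMARY KEY,
    simulator   TEXT NOT NULL,      -- which simulator produced the data
    description TEXT                -- free-text metadata for searching
);
CREATE TABLE timeseries (
    experiment_id INTEGER REFERENCES experiment(id),
    t             REAL NOT NULL,    -- simulation time in seconds
    channel       TEXT NOT NULL,    -- e.g. 'speed', 'lane_offset'
    value         REAL NOT NULL
);
CREATE INDEX idx_ts ON timeseries (experiment_id, channel, t);
""")

# Store a few samples from one experiment ...
conn.execute("INSERT INTO experiment VALUES (1, 'sim_a', 'braking study')")
conn.executemany(
    "INSERT INTO timeseries VALUES (1, ?, 'speed', ?)",
    [(0.0, 25.0), (0.1, 24.6), (0.2, 24.1)],
)

# ... and query by channel across experiments -- the cross-experiment
# access pattern the thesis aims to make efficient.
for row in conn.execute(
    "SELECT experiment_id, t, value FROM timeseries "
    "WHERE channel = 'speed' ORDER BY t"
):
    print(row)
```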
2

Tan, Kaige. "Building verification database and extracting critical scenarios for self-driving car testing on virtual platform." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263927.

Abstract:
This degree project, conducted at Volvo Cars, investigates how to build a test database for an Autonomous Driving (AD) function on a virtual platform and how to extract critical scenarios from that database in order to reduce the number of test cases through optimization. The virtual platform under study is the model-in-the-loop (MIL) based Simulation Platform Active Safety (SPAS) environment, and the optimization tool used is modeFrontier. To build the test database, the project follows an analysis process in which three levels of abstraction for scenarios are proposed in order to fulfill all requirements of an AD function. Requirements from a specific Operational Design Domain (ODD), expressed in linguistic form, are transformed into a test suite containing concrete scenarios and test cases. A meta-model is built to help analyze the system structure and parameter requirements at the level of logical scenarios. The practicability of a scenario-based approach to generating AD-function test cases is demonstrated with the example of building a Traffic Congestion Support (TCS) test database. Obtaining the test database and successfully analyzing the parameters of the TCS function on the MIL platform lead to the main goal of the thesis project: finding edge cases in the test database by optimizing objective functions. After defining the objective functions and building the workflow in modeFrontier, the optimization process is implemented with two different algorithms. pilOPT is evaluated as a better solution for the AD function than Multi-Objective Simulated Annealing (MOSA) in terms of computational time and edge-case finding. In addition, a noise model is added to the ideal sensor model in SPAS to study the influence of noise on a real test track. The results show a large difference in the Time-to-Collision value, one of the objective functions defined in the project. This indicates that more test cases deteriorate into critical scenarios when noise is taken into consideration, showing that the influence of noise cannot be neglected during testing.
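To make the objective concrete, here is a minimal sketch of a time-to-collision style criterion of the kind such a workflow minimises to steer the search toward critical scenarios; the threshold and scenario values are invented, not the project's:

```python
def time_to_collision(gap_m, ego_speed, lead_speed):
    """Constant-speed time-to-collision in seconds; infinite if not closing."""
    closing = ego_speed - lead_speed  # m/s
    return gap_m / closing if closing > 0 else float("inf")

# Flag scenarios whose TTC falls below a critical threshold -- the kind of
# objective an optimizer minimises to steer the search toward edge cases.
CRITICAL_TTC_S = 1.5  # illustrative threshold, not the project's value

scenarios = [
    {"name": "hard cut-in", "gap_m": 6.0,  "ego": 24.0, "lead": 17.0},
    {"name": "mild follow", "gap_m": 30.0, "ego": 22.0, "lead": 20.0},
]
for s in scenarios:
    ttc = time_to_collision(s["gap_m"], s["ego"], s["lead"])
    tag = "CRITICAL" if ttc < CRITICAL_TTC_S else "ok"
    print(f"{s['name']}: TTC = {ttc:.2f} s [{tag}]")
```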
3

NAKAMURA, Satoshi, Kazuya TAKEDA, and Masakiyo FUJIMOTO. "CENSREC-3: An Evaluation Framework for Japanese Speech Recognition in Real Car-Driving Environments." Institute of Electronics, Information and Communication Engineers, 2006. http://hdl.handle.net/2237/15050.

4

Šimoňáková, Sabína. "Detekce stresu a únavy v komplexních datech řidiče." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442500.

Abstract:
The main aim of this thesis is fatigue and stress detection from a driver's biological signals. The introduction surveys published detection methods and provides the theoretical background necessary for the thesis. In the practical part, we first worked with a database of measured rides and selected their most relevant sections; extraction and selection of features followed. Five different classification models for fatigue and stress detection were used, with prediction based on actual data. Finally, the best model of the thesis is compared with already published results.
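As a sketch of the model-comparison step such a thesis describes (the features below are synthetic stand-ins for driver biosignals, and the three scikit-learn models are illustrative, not the thesis's exact five):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic stand-ins for features extracted from driver biosignals
# (e.g. heart-rate variability, skin conductance); labels 0 = calm, 1 = stressed.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Compare several classifiers by cross-validated accuracy, mirroring the
# thesis's comparison of classification models on extracted features.
for name, model in [
    ("logistic regression", LogisticRegression()),
    ("random forest", RandomForestClassifier(n_estimators=100)),
    ("SVM (RBF)", SVC()),
]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```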
5

Borkar, Amol. "Multi-viewpoint lane detection with applications in driver safety systems." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/43752.

Abstract:
The objective of this dissertation is to develop a Multi-Camera Lane Departure Warning (MCLDW) system and a framework to evaluate it. A Lane Departure Warning (LDW) system is a safety feature included in a few luxury automobiles. Using a single camera, it informs the driver if a lane change is imminent. The core component of an LDW system is a lane detector, whose objective is to find lane markers on the road. Therefore, we start this dissertation by explaining the requirements of an ideal lane detector, and then present several algorithmic implementations that meet these requirements. After selecting the best implementation, we present the MCLDW methodology. Using a multi-camera setup, the MCLDW system combines the detected lane-marker information from each camera's view to estimate the immediate distance between the vehicle and the lane marker, and signals a warning if this distance is under a certain threshold. Next, we introduce a procedure to create ground truth and a database of videos, which together serve as the framework for evaluation. Ground truth is created using an efficient procedure called Time-Slicing that allows the user to quickly annotate the true locations of the lane markers in each frame of the videos. Subsequently, we describe the details of a database of driving videos that has been put together to help establish a benchmark for evaluating existing lane detectors and LDW systems. Finally, we conclude the dissertation by citing the contributions of the research and discussing avenues for future work.
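A minimal sketch of the warning logic the abstract describes, fusing per-camera distance estimates and thresholding; the fusion rule and threshold value are assumptions, not the dissertation's:

```python
def lane_departure_warning(distances_m, threshold_m=0.3):
    """Fuse per-camera estimates of the distance between the vehicle and
    the nearest lane marker, and warn when the vehicle is too close.

    distances_m: estimated distances from each camera's view (metres).
    threshold_m: illustrative warning threshold, not the thesis's value.
    """
    # A simple fusion rule: trust the smallest non-negative estimate,
    # since any one view may see the marker the vehicle drifts toward.
    fused = min(d for d in distances_m if d >= 0.0)
    return fused < threshold_m

# Example: three cameras report slightly different estimates.
print(lane_departure_warning([0.42, 0.38, 0.45]))  # False: safely in lane
print(lane_departure_warning([0.25, 0.28, 0.31]))  # True: departure imminent
```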
6

Burgoyne, John. "Stochastic processes and database-driven musicology." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=107704.

Abstract:
For more than a decade, music information science and musicology have been at what Nicholas Cook has described as a 'moment of opportunity' for collaboration on database-driven musicology. The literature contains relatively few examples of mathematical tools that are suitable for analysing temporally structured data like music, however, and there are surprisingly few large databases of music that contain information at the semantic levels of interest to musicologists. This dissertation compiles a bibliography of the most important concepts from probability and statistics for analysing musical data, reviews how previous researchers have used statistics to study temporal relationships in music, and presents a new corpus of carefully curated chord labels from more than 1000 popular songs from the latter half of the twentieth century, as ranked by Billboard magazine's Hot 100 chart. The corpus is based on a careful sampling methodology that maintained cost efficiency while ensuring that the corpus is well suited to drawing conclusions about how harmonic practices may have evolved over time and to what extent they may have affected songs' popularity. This dissertation also introduces techniques new to the musicological community for analysing databases of this size and scope, most importantly the Dirichlet-multinomial distribution and constraint-based structure learning for causal Bayesian networks. The analysis confirms some common intuitions about harmonic practices in popular music and suggests several intriguing directions for further research.
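To illustrate the Dirichlet-multinomial idea mentioned in the abstract: with a Dirichlet prior over chord categories and observed counts, the posterior is again Dirichlet and predictive probabilities follow in closed form. A sketch with invented counts, not the Billboard corpus data:

```python
import numpy as np

# Dirichlet-multinomial posterior over chord categories: with a
# Dirichlet(alpha) prior and observed counts n, the posterior is
# Dirichlet(alpha + n), and the predictive probability of category k is
# (alpha_k + n_k) / sum_j (alpha_j + n_j).
chords = ["I", "IV", "V", "vi"]
alpha = np.ones(4)                       # symmetric prior
counts = np.array([420, 310, 270, 150])  # made-up counts, not corpus data

posterior = alpha + counts
predictive = posterior / posterior.sum()
for c, p in zip(chords, predictive):
    print(f"P(next chord = {c}) = {p:.3f}")
```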
7

Naumann, Felix. "Quality-driven query answering for integrated information systems /." Berlin [u.a.] : Springer, 2002. http://www.loc.gov/catdir/enhancements/fy0817/2002023684-d.html.

8

Guo, Dahai. "Creating Geo-Specific Road Databases from Aerial Photos for Driving Simulation." Doctoral diss., University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2789.

Abstract:
Geo-specific road database development is important to a driving simulation system, and it is a very labor-intensive process. Road databases for driving simulation need high resolution and accuracy. Even though commercial software is available on the market, a lot of manual work still has to be done when the road cross-sectional profile is not uniform. This research deals with geo-specific road database development, especially for roads with non-uniform cross sections. In this research, United States Geological Survey (USGS) road information is used with aerial photos to accurately extract road boundaries, using image segmentation and data compression techniques. Image segmentation plays an important role in extracting road boundary information. Numerous methods have been developed for image segmentation, and six were tried for the purpose of road image segmentation. The major problems with road segmentation are due to the large variety of road appearances and the many linear features in roads. A method that does not require a database of sample images is desired; furthermore, this method should be able to handle the complexity of road appearances. The proposed method for road segmentation is based on the mean-shift clustering algorithm and yields high accuracy. In the phase of building road databases and visual databases based on road segmentation results, the Linde-Buzo-Gray (LBG) vector quantization algorithm is used to identify repeatable cross-section profiles. In the phase of texture mapping, five major uniform textures are considered - pavement, white marker, yellow marker, concrete, and grass - and automatically mapped to polygons. In the results chapter, snapshots of the road and visual databases are presented.
Ph.D.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Computer Engineering
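As an illustration of the mean-shift segmentation step described in the abstract above, a minimal scikit-learn sketch; the random pixel array is a stand-in for a real aerial road image:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

# Mean-shift clustering of pixel colours -- the segmentation idea the
# abstract builds on. A real aerial image would replace this array.
rng = np.random.default_rng(1)
pixels = np.vstack([
    rng.normal(loc=[90, 90, 95], scale=5, size=(300, 3)),     # asphalt-like
    rng.normal(loc=[200, 200, 200], scale=5, size=(100, 3)),  # marker-like
])

bandwidth = estimate_bandwidth(pixels, quantile=0.2)
labels = MeanShift(bandwidth=bandwidth).fit_predict(pixels)
print("segments found:", len(np.unique(labels)))
```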
9

Heath, Michael Adam. "Asynchronous Database Drivers." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2387.

Abstract:
Existing database drivers use blocking socket I/O to exchange data with relational database management systems (RDBMS). To concurrently send multiple requests to a RDBMS with blocking database drivers, a separate thread must be used for each request. This approach has been used successfully for many years. However, we propose that using non-blocking socket I/O is faster and scales better under load. In this paper we introduce the Asynchronous Database Connectivity in Java (ADBCJ) framework. ADBCJ provides a common API for asynchronous RDBMS interaction. Various implementations of the ADBCJ API are used to show how utilizing non-blocking socket I/O is faster and scales better than using conventional database drivers and multiple threads for concurrency. Our experiments show a significant performance increase when using non-blocking socket I/O for asynchronous RDBMS access while using a minimal number of OS threads. Non-blocking socket I/O enables the ability to pipeline RDBMS requests which can improve performance significantly, especially over high latency networks. We also show the benefits of asynchronous database drivers on different web server architectures.
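ADBCJ itself is a Java framework; as a language-neutral illustration of why pipelining requests over non-blocking I/O beats issuing them sequentially, here is a Python asyncio sketch in which a coroutine with simulated network latency stands in for a real driver call:

```python
import asyncio
import time

async def query(sql: str, latency_s: float = 0.1) -> str:
    """Stand-in for a non-blocking driver call: the coroutine yields while
    'waiting on the socket' instead of blocking an OS thread."""
    await asyncio.sleep(latency_s)  # simulated network round-trip
    return f"result of {sql!r}"

async def main() -> None:
    queries = [f"SELECT {i}" for i in range(10)]

    # Sequential issue: each request waits for the previous response,
    # as a blocking driver on a single thread would.
    t0 = time.perf_counter()
    for q in queries:
        await query(q)
    print(f"sequential: {time.perf_counter() - t0:.2f}s")

    # Pipelined issue: all requests in flight at once on one thread --
    # the effect the experiments attribute to non-blocking socket I/O.
    t0 = time.perf_counter()
    await asyncio.gather(*(query(q) for q in queries))
    print(f"pipelined:  {time.perf_counter() - t0:.2f}s")

asyncio.run(main())
```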
10

Kallem, Aditya. "Visualization for Verification Driven Learning in Database Studies." ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/99.

Abstract:
This thesis aims at developing a data visualization tool to enhance database learning based on the Verification Driven Learning (VDL) model. The goal of the VDL model is to present abstract concepts in the context of real-world systems to students in the early stages of a computer science program. In this project, a personnel/training management system has been turned into a learning platform by adding a number of features for visualization and quizzing. We have implemented various tactics to visualize the data manipulation and data retrieval operations in the database, as well as the message contents in data messaging channels. The results of our development have been utilized in eight learning cases illustrating the applications of our visualization tool. Each of these learning cases was made by systematically implanting bugs in a functioning component; the students are assigned to identify the bugs and, at the same time, to learn the structure of the software system.
11

Baise, Paul. "Cogitator : a parallel, fuzzy, database-driven expert system." Thesis, Rhodes University, 1994. http://hdl.handle.net/10962/d1006684.

Abstract:
The quest to build anthropomorphic machines has led researchers to focus on knowledge and the manipulation thereof. Recently, the expert system was proposed as a solution, working well in small, well-understood domains. However, these initial attempts highlighted the tedious process associated with building systems to display intelligence, the most notable obstacle being the Knowledge Acquisition Bottleneck. Attempts to circumvent this problem have led researchers to propose the use of machine-learning databases as a source of knowledge; attempts to utilise databases as sources of knowledge have in turn led to the development of database-driven expert systems. Furthermore, it has been ascertained that a requisite for intelligent systems is powerful computation. In response to these problems and proposals, a new type of database-driven expert system, Cogitator, is proposed. It is shown to circumvent the Knowledge Acquisition Bottleneck and to possess many other advantages over both traditional expert systems and connectionist systems, whilst its disadvantages are not serious.
12

Von Dollen, Andrew C. "Data-Driven Database Education: A Quantitative Study of SQL Learning in an Introductory Database Course." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/2068.

Abstract:
The Structured Query Language (SQL) is widely used and challenging to master. Within the context of lab exercises in an introductory database course, this thesis analyzes the student learning process and seeks to answer the question: "Which SQL concepts, or concept combinations, trouble students the most?" We provide comprehensive taxonomies of SQL concepts and errors, identify common areas of student misunderstanding, and investigate the student problem-solving process. We present an interactive web application used by students to complete SQL lab exercises. In addition, we analyze data collected by this application and we offer suggestions for improvement to database lab activities.
13

Zoller, Peter. "HMT modeling interactive and adaptive database-driven hypermedia applications /." [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=962067296.

14

Culver, Randy. "A DATABASE-DRIVEN SOFTWARE SYSTEM FOR SATELLITE TELEMETRY DECOMMUTATION." International Foundation for Telemetering, 1994. http://hdl.handle.net/10150/608565.

Abstract:
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California
Satellite Telemetry can be characterized as having relatively low bandwidths, complex wavetrains, and very large numbers of measurands. Ground systems which monitor on-orbit vehicles must process, analyze, display, and archive the telemetry data received during contacts with the satellites. Data from perhaps thousands of individual measurands must be extracted from very complex wavetrains and processed during a live contact. Most commercially available telemetry systems are not well suited to handling satellite wavetrains because they were built for range telemetry and flight test applications which typically deal with limited numbers of measurands. This paper describes the design of a software system which was built specifically to process satellite telemetry. The database-driven system performs full decommutation of very complex wavetrains entirely in software. The system provides for defining the satellite vehicle's telemetry in multiple databases which define the wavetrain formats, the measurands themselves, how they are to be processed, and associated data conversion and calibration information. The database accommodates the complexities typically found in satellite telemetry such as multiple wavetrain formats, embedded streams, measurand dependencies, segmented measurands, and supercommutated, subcommutated, and sub-subcommutated data. A Code Generator builds a set of control structures from the wavetrain and measurand definitions in the database. It then generates highly optimized in-line software libraries for processing the satellite vehicle's telemetry. These libraries are linked to a Server process for run-time execution. During execution, raw telemetry frames are passed to the Server which uses the libraries to decommutate, limit check, convert, and calibrate the measurand data. A Client process attaches to the Server process to allow user applications to access both raw and processed telemetry for display, logging, and additional processing.
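As an illustration of the database-driven decommutation idea, a minimal sketch in which measurand definitions (the kind of information the paper stores in its databases; the names, offsets, and calibrations here are invented) drive extraction from a raw frame:

```python
import struct

# Database-driven decommutation in miniature: measurand definitions that
# would live in the wavetrain/measurand databases drive the extraction of
# values from a raw telemetry frame. All entries below are illustrative.
MEASURANDS = [
    # name, byte offset in frame, struct format, scale, bias
    ("bus_voltage", 0, ">H", 0.01, 0.0),   # raw counts -> volts
    ("temp_panel",  2, ">h", 0.5, -40.0),  # raw counts -> deg C
    ("status",      4, ">B", 1.0, 0.0),
]

def decommutate(frame: bytes) -> dict:
    """Extract, convert, and calibrate each measurand from one frame."""
    out = {}
    for name, offset, fmt, scale, bias in MEASURANDS:
        (raw,) = struct.unpack_from(fmt, frame, offset)
        out[name] = raw * scale + bias
    return out

frame = struct.pack(">HhB", 2800, 130, 7)  # a synthetic telemetry frame
print(decommutate(frame))  # {'bus_voltage': 28.0, 'temp_panel': 25.0, 'status': 7}
```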
15

Chan, Francis. "Knowledge management in Naval Sea Systems Command : a structure for performance driven knowledge management initiative." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FChan.pdf.

Abstract:
Thesis (M.S. in Product Development)--Naval Postgraduate School, September 2002.
Thesis advisor(s): Mark E. Nissen, Donald H. Steinbrecher. Includes bibliographical references (p. 113-117). Also available online.
16

CHOOBINEH, JOOBIN. "FORM DRIVEN CONCEPTUAL DATA MODELING (DATABASE DESIGN, EXPERT SYSTEMS, CONCEPTUAL)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188043.

Abstract:
A conceptual data schema is constructed from the analysis of the business forms used in an enterprise. In order to perform the analysis, a data model, a forms model, and heuristics to map from the forms model to the data model are developed. The data model we use is an extended version of the Entity-Relationship Model. Extensions include the addition of min-max cardinalities and a generalization hierarchy. By extending the min-max cardinalities to attributes, we capture a number of significant characteristics of the entities in a concise manner. We introduce a hierarchical model of forms. The model specifies various properties of each form field, such as its origin, hierarchical structure, and cardinalities. The inter-connection of the forms is expressed by specifying which form fields flow from one form to another. The Expert Database Design System creates a conceptual schema by incrementally integrating related collections of forms. The rules of the expert system are divided into six groups: (1) Form Selection, (2) Entity Identification, (3) Attribute Attachment, (4) Relationship Identification, (5) Cardinality Identification, and (6) Integrity Constraints. The rules of the first group use knowledge about the form flow to determine the order in which forms are analyzed. The rules in the other groups are used in conjunction with a designer dialogue to identify the entities, relationships, and attributes of a schema that represents the collection of forms.
17

Gele, Julie Katherine. "The chimaera project: an online database of animal motions." Texas A&M University, 2007. http://hdl.handle.net/1969.1/85882.

Abstract:
Digital animators will save vast amounts of project time by starting with a completed skeleton and some base animations. This result can be accomplished with Web 2.0 technologies by creating a repository of skeletons and animations that any animator may use for free. While free Maya™ skeletons currently exist on the Internet, the websites housing them have only brief features and functions for browsing and interacting with these files. None of these websites contain downloadable animations for the provided skeletons. The Chimaera Project improves the field of Web 2.0 sites offering free rigs by offering many new features and freedoms to the animation community. Users may upload and download Maya™ skeletons, share comments and tips with each other, upload animations associated with the skeletons, and search or browse the skeletons in a variety of ways. The skeletons include descriptions and information provided by the creator and are categorized by class, order, and species. Users may access a freely provided script called "zooXferAnim" to import and export animations into text files to be uploaded and downloaded on the website. Many animations per skeleton may be uploaded. The Chimaera Project extends the Web 2.0 community by creating an interactive resource for animators to contribute and share content in a better, more organized format than previously seen on the Internet.
18

Wad, Charudatta V. "QoS : quality driven data abstraction for large databases." Worcester, Mass. : Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-020508-151213/.

19

Kharboutli, Zacky. "NoSQL Database Selection Focused on Performance Criteria for Web-driven Applications." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-88608.

Abstract:
This paper delivers a comparative analysis of the performance of three of the NoSQL technologies in Web applications. These technologies are graph stores, key-value stores, and document stores. The study aims to assist developers and organizations in picking the suitable NoSQL solution for their application. For this purpose, three identical e-book applications were developed. Each of these is connected to a database from the selected technologies to examine how they perform compared to each other against various performance measures.
20

Jarratt, Nicholas. "An efficient three-dimensional database driven approach for multi scale homogenization." Master's thesis, Faculty of Engineering and the Built Environment, 2019. http://hdl.handle.net/11427/30801.

Abstract:
The two-scale homogenization theory, commonly known as the FE2 method, is a well-established technique used to model structures made of heterogeneous materials. Capable of capturing microscopic effects at the macro level, the FE2 method assigns a representative volume element (RVE) of the material's microstructure to points across the macroscopic sample. This process results in a fully nested boundary value problem, where the macroscopic quantities required to model the structure are obtained by homogenizing the RVE's response to macroscopic deformations. A limitation of the FE2 method, though, is its high computational cost, whose reduction has been a topic of much research in recent years. In this research, a two-scale database (TSD) model is presented to address this limitation. Instead of homogenizing the RVE's response to macroscopic deformations, the macroscopic quantities are approximated using a database of precomputed RVEs. The homogenized results of an RVE are stored in a macroscopic right Cauchy-Green strain space. Discretizing this strain space into a finite set of right Cauchy-Green deformation tensors yields a material database, where the components of each tensor represent the boundary conditions prescribed to the RVE. A continuous approximation of the macroscopic quantities is attained using the Moving Least Squares (MLS) approximation method. Subsequent attention is paid to the implementation of the FE2 method and the TSD model for solving structures made of hyperelastic heterogeneous materials. Both approaches are developed in the in-house simulation software SESKA. A qualitative comparison of results from the FE2 method with those previously published, for a laminated composite beam undergoing material degradation, is presented to verify its implementation. To assess the TSD model's performance, its numerical accuracy and computational performance are evaluated against the conventional FE2 method. While a significant improvement in computational time was shown, the accuracy of the TSD model still left something to be desired. Various remedies to improve the accuracy of the TSD model are proposed.
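To illustrate the Moving Least Squares step over a precomputed database, here is a one-dimensional toy sketch standing in for the right Cauchy-Green strain space; the kernel width, linear basis, and stand-in response function are assumptions, not the thesis's choices:

```python
import numpy as np

def mls_approximate(x_query, X_db, y_db, h=0.4):
    """Moving Least Squares: fit a local linear model around x_query,
    weighting precomputed database entries by a Gaussian kernel.

    X_db: (n, d) sampled points of the (here 1-D) strain space.
    y_db: (n,) homogenized responses stored in the database.
    """
    w = np.exp(-((X_db - x_query) ** 2).sum(axis=1) / h**2)  # kernel weights
    B = np.hstack([np.ones((len(X_db), 1)), X_db])           # linear basis
    # Solve the weighted normal equations B^T W B a = B^T W y.
    BtW = B.T * w
    a = np.linalg.solve(BtW @ B, BtW @ y_db)
    return a[0] + a[1:] @ x_query

# A toy 1-D 'database': responses precomputed at sampled strain values.
X_db = np.linspace(0.0, 1.0, 11).reshape(-1, 1)
y_db = np.sin(2 * np.pi * X_db[:, 0])  # stand-in for the RVE response

x = np.array([0.37])
print(mls_approximate(x, X_db, y_db), "vs exact", np.sin(2 * np.pi * 0.37))
```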
21

Wang, Hong. "Online banking : a case study for dynamic database-driven client/server system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/MQ47856.pdf.

22

Lima, Natália Sarmanho Monteiro. "Busca de biomoléculas com potencial biotecnológico em database metagenômico por sequence-driven /." Jaboticabal, 2017. http://hdl.handle.net/11449/151420.

Abstract:
Advisor: Eliana Gertrudes de Macedo Lemos
Co-advisor: Elisângela Soares Gomes Pepe
Co-advisor: Mariana Rangel Pereira
Committee member: Gustavo Orlando Bonilla Rodriguez
Committee member: João Martins Pizauro Junior
Abstract: The industrial market is becoming increasingly dependent on the use of enzymes as biocatalysts. Among the enzymes used industrially are those of microbial origin, because they are active under the most diverse conditions. Laccases are enzymes with high potential, since they act on several compounds, which allows a wide diversity of possible applications. Metagenomics is among the approaches for prospecting new microbial enzymes. Accordingly, the search for potential laccases was carried out following a sequence-driven strategy in a metagenomic database composed of samples from different environments, belonging to the Laboratory of Biochemistry of Microorganisms and Plants (LBMP). The ORFs previously annotated by sequence similarity and conserved domains as possible multicopper oxidases (the group to which laccases belong) were prospected using the region conserved in laccases, L3 (HLHGH). This strategy returned three ORFs - ORF50MF, ORF51ME, and ORF13SE (lacmeta) - which were cloned and subjected to overexpression and purification analyses. LacMeta, 82% similar to a hypothetical protein sequence of Streptomyces rubidus [WP_069465126.1], was the protein that presented the best expression and purification results and was then subjected to kinetic characterization. The protein, fused to a hexa-histidine tag, was purified in homo-trimeric form at 128.22 kDa and presented optimum activity at acidic pH for the ABTS substrate, for which the catalytic parameters k... (complete abstract: click electronic access below)
Master's
23

Zabransky, Douglas Milton. "Incorporating Obfuscation Techniques in Privacy Preserving Database-Driven Dynamic Spectrum Access Systems." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/85001.

Abstract:
Modern innovation is a driving force behind increased spectrum crowding. Several studies performed by the National Telecommunications and Information Administration (NTIA), the Federal Communications Commission (FCC), and other groups have proposed Dynamic Spectrum Access (DSA) as a promising solution to alleviate spectrum crowding. The spectrum assignment decisions in DSA will be made by a centralized entity referred to as a spectrum access system (SAS); however, maintaining spectrum utilization information in the SAS presents privacy risks, as sensitive Incumbent User (IU) operation parameters must be stored by the SAS in order to perform spectrum assignments properly. These sensitive operation parameters may be compromised if the SAS is the target of a cyber attack or of an inference attack executed by a secondary user (SU). In this thesis, we explore the operational security of IUs in SAS-based DSA systems and propose a novel privacy-preserving SAS-based DSA framework, Suspicion Zone SAS (SZ-SAS), the first such framework which protects against both inference attacks in an area with sparsely distributed IUs and an untrusted or compromised SAS. We then define modifications to the SU inference attack algorithm, which demonstrate the necessity of applying obfuscation to SU query responses. Finally, we evaluate obfuscation schemes which are compatible with SZ-SAS, verifying the effectiveness of such schemes in preventing an SU inference attack. Our results show SZ-SAS is capable of utilizing compatible obfuscation schemes to prevent the SU inference attack, while operating using only homomorphically encrypted IU operation parameters.
Master of Science
24

Shende, Sachin. "Database-driven hydraulic simulation of canal irrigation networks using object-oriented high-resolution methods." Thesis, Loughborough University, 2006. https://dspace.lboro.ac.uk/2134/14208.

Abstract:
Canal hydraulic models can be used to understand the hydraulic behaviour of large and complex irrigation networks at low cost. A number of computational hydraulic models were developed and tested in the early 1970s and late 1980s. Most were developed using finite difference schemes and procedural programming languages. In spite of the importance of these models, little progress was made on improving the numerical algorithms behind them; software development efforts focused more on the user interface than on the core algorithm. This research develops a database-driven, object-oriented hydraulic simulation model for canal irrigation networks using modern high-resolution shock-capturing techniques capable of handling a variety of flow situations, including trans-critical flow, shock propagation, flows through gated structures, and channel networks. The technology platforms were carefully selected, taking into account multi-user support and possible migration of the new software to a web-based one; the software integrates a Java-based object-oriented model with a relational database management system that stores network configuration and simulation parameters. The developed software is tested using a benchmark test suite formulated jointly by the Department for Environment, Food and Rural Affairs (DEFRA) and the Environment Agency (EA). A total of eight tests (seven of them adapted from the DEFRA/EA benchmark suite) were run and the results compiled. The developed software outperformed ISIS, HEC-RAS, and MIKE 11 in three of the benchmark tests and performed equally well in the other four. The outcome of this research is therefore a new category of hydraulic simulation software that uses modern shock-capturing methods, fully integrated with a configurational relational database, that has been fully evaluated and tested.
25

Piñera, J. Keneth (Jorge Keneth). "Design of database for automatic example-driven design and assembly of man-made objects." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/83736.

Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Department of Mechanical Engineering, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 18).
In this project, we have built a database of models that have been designed such that they can be directly fabricated by a casual user. Each of the models in this database has design specifications down to the screw level, and each component has a direct reference to a commercial part from online retailers such as McMaster-Carr and Home Depot. This database was built to assist a data-driven approach to customizable fabrication. The system allows a casual user to create a 3D model input with rough specifications and receive a list of parts that, when assembled, will create the model specified. Using this system and database, we were able to successfully design and fabricate three pieces of furniture, thereby showing the data-driven approach to be valid.
by J. Keneth Piñera.
S.B.
26

Welcker, Laura Joana Maria. "The impact of domain knowledge-driven variable derivation on classifier performance for corporate data mining." Thesis, Nelson Mandela Metropolitan University, 2015. http://hdl.handle.net/10948/5009.

Abstract:
The technological progress in terms of increasing computational power and growing virtual space to collect data offers great potential for businesses to benefit from data mining applications. Data mining can create a competitive advantage for corporations by discovering business-relevant information, such as patterns, relationships, and rules. The role of the human user within the data mining process is crucial, which is why the research area of domain knowledge is becoming increasingly important. This thesis investigates the impact of domain knowledge-driven variable derivation on classifier performance for corporate data mining. Domain knowledge is defined as methodological, data, and business know-how. The thesis investigates the topic from a new perspective by shifting the focus from a one-sided approach, namely a purely analytic or purely theoretical approach, towards a target-group-oriented (researcher and practitioner) approach which puts the methodological aspect, by means of a scientific guideline, at the centre of the research. In order to ensure feasibility and practical relevance of the guideline, it is adapted and applied to the requirements of a practical business case. Thus, the thesis examines the topic from both a theoretical and a practical perspective, thereby overcoming the limitation of a one-sided approach, which mostly lacks practical relevance or generalisability of results. The primary objective of this thesis is to provide a scientific guideline which should enable both practitioners and researchers to advance domain knowledge-driven research on variable derivation in a corporate setting. In the theoretical part, a broad overview is given of the main aspects necessary to undertake the research, such as the concept of domain knowledge, the data mining task of classification, variable derivation as a subtask of data preparation, and evaluation techniques. This part of the thesis addresses the methodological aspect of domain knowledge. In the practical part, a research design is developed for testing six hypotheses related to domain knowledge-driven variable derivation. The major contribution of the empirical study is testing the impact of domain knowledge on a real business data set compared to the impact of a standard, randomly derived data set. The business application of the research is a binary classification problem in the domain of an insurance business, which deals with the prediction of damages in legal expenses insurance. Domain knowledge is expressed by deriving the corporate variables by means of the business- and data-driven constructive induction strategy. Six variable derivation steps are investigated: normalisation, instance relation, discretisation, categorical encoding, ratio, and multivariate mathematical function. The impact of the domain knowledge is examined by pairwise (with and without derived variables) performance comparisons for five classification techniques (decision trees, naive Bayes, logistic regression, artificial neural networks, k-nearest neighbours). The impact is measured by two classifier performance criteria: sensitivity and area under the ROC curve (AUC). The McNemar significance test is used to verify the results. Based on the results, two hypotheses are clearly verified and accepted, three hypotheses are partly verified, and one hypothesis had to be rejected on the basis of the case study results.
The thesis reveals a significant positive impact of domain knowledge-driven variable derivation on classifier performance for options of all six tested steps. Furthermore, the findings indicate that the classification technique influences the impact of the variable derivation steps, and that bundling steps has a significantly higher performance impact if the variables are derived using domain knowledge (compared to a non-knowledge application). Finally, the research shows that an empirical examination of the domain knowledge impact is very complex due to a high level of interaction between the selected research parameters (variable derivation step, classification technique, and performance criteria).
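For reference, the pairwise comparison described above can be checked with an exact McNemar test over the discordant predictions of two classifier variants; a minimal sketch with invented counts:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar test p-value.

    b: cases classifier A got right and classifier B got wrong;
    c: cases A got wrong and B got right.
    Under H0 the discordant pairs follow Binomial(b + c, 0.5).
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * tail)

# Example: classifier with vs. without a derived variable on one test set;
# 14 instances improved and 4 got worse (invented counts, not thesis data).
print(f"p = {mcnemar_exact(4, 14):.4f}")
```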
27

Gibas, Michael A. "Improving Query Performance through Application-Driven Processing and Retrieval." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218470693.

28

Lian, Hongri. "The Design and Development of an Online Database-Driven Peer Assessment Tool Using Division Rule Theory." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/51803.

Abstract:
Peer assessment has been adopted as a means of fair and equitable measurement of individual contributions to group work (Cheng and Warren, 2000; Conway and Kember, 1993; Gatfield, 1999; Goldfinch and Raeside, 1990; Lejk and Wyvill, 2001; Lejk, Wyvill, and Farrow, 1996), and it usually requires a certain mechanism or formula to quantify peer assessment criteria. The problem, however, is that this leads to circumstances in which a student can act strategically and easily obtain a higher score simply by giving lower scores to other members of the group. A new mechanism is needed, and the purpose of this study is to develop an Online Database-Driven Peer Assessment Tool (ODDPAT) using the Division Rule mechanism as its core computational algorithm. This developmental study used a modified Collaborative Create-Adapt-Generalize (CAG) model (Hicks, Potter, Snider, and Holmes, 2004) as its design and development framework. The process of design, development, and evaluation of the entire project was documented. Three experts were interviewed and a detailed analysis of the data was discussed. Finally, recommendations were made for its implementation and future research.
Ph. D.
29

Logofatu, Cristina. "Improving communication in a transportation company by using a Web page." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2520.

Abstract:
The Internet has become a very powerful tool for improving communication, making it easier, more convenient, and faster to access or exchange information. This project takes advantage of the strengths the Internet provides by developing a web site that improves communication for a transportation company.
30

Paventhan, Arumugam. "Grid approaches to data-driven scientific and engineering workflows." Thesis, University of Southampton, 2007. https://eprints.soton.ac.uk/49926/.

Abstract:
Enabling the full life cycle of scientific and engineering workflows requires robust middleware and services that support near-realtime data movement, high-performance processing, and effective data management. In this context, we consider two related technology areas: Grid computing, which is fast emerging as an accepted way forward for large-scale, distributed, and multi-institutional resource sharing, and database systems, whose capabilities are undergoing continuous change, providing new possibilities for scientific data management on the Grid. In this thesis, we look into the challenging requirements of integrating data-driven scientific and engineering experiment workflows onto the Grid. We consider wind tunnels, which house multiple experiments with differing characteristics, as an application exemplar. This thesis contributes two approaches while attempting to tackle some of the following questions: How can domain-specific workflow activity development be supported while hiding the underlying complexity? Can new experiments be added to the system easily? How can the overall turnaround time be reduced by end-to-end experimental workflow support? In the first approach, we show how experiment-specific workflows can help accelerate application development using Grid services. This has been realized with the development of MyCoG, the first Commodity Grid toolkit for .NET supporting multi-language programmability. In the second, we present an alternative approach based on federated database services to realize an end-to-end experimental workflow. We show, with the help of a real-world example, how database services can be building blocks for scientific and engineering workflows.
31

Fanan, Anwar Mohamed Ali. "The relationship between choice of spectrum sensing device and secondary-user intrusion in database-driven cognitive radio systems." Thesis, University of Hull, 2018. http://hydra.hull.ac.uk/resources/hull:16598.

Abstract:
As radios in future wireless systems become more flexible and reconfigurable whilst available radio spectrum becomes scarce, the possibility of using TV White Space devices (WSD) as secondary users in the TV broadcast bands (without causing harmful interference to licensed incumbents) becomes ever more attractive. Cognitive Radio encompasses a number of technologies which enable adaptive self-programming of systems at different levels to provide more effective use of the increasingly congested radio spectrum. Cognitive Radio has the potential to use spectrum allocated to TV services which is not actually being used by those services, without causing disruptive interference to licensed users, by using channel selection aided by appropriate propagation modelling in TV White Spaces. The main purpose of this thesis is to explore the potential of the Cognitive Radio concept to provide additional bandwidth and improved efficiency, to help accelerate the development and acceptance of Cognitive Radio technology. Specifically, first, three main classes of spectrum sensing techniques (Energy Detection, Matched Filtering, and Cyclostationary Feature Detection) are compared in terms of time and spectrum resources consumed, required prior knowledge, and complexity, ranking the three classes according to accuracy and performance. Second, the thesis investigates spectrum occupancy of the UHF TV band in the frequency range from 470 to 862 MHz by undertaking spectrum occupancy measurements in different locations around the Hull area in the UK, using two different receiver devices: a low-cost Software-Defined Radio device and a laboratory-quality spectrum analyser. Third, it investigates the best propagation model among three (Extended-Hata, Davidson-Hata, and Egli) for use in the TV band, whilst also finding the optimum terrain data resolution to use (1000, 100 or 30 m); it compares modelled results with the previously mentioned practical measurements and then describes how such models can be integrated into a database-driven tool for Cognitive Radio channel selection within the TV White Space environment. Fourth, it creates a flexible simulation system for creating a TV White Space database using different propagation models. Finally, it designs a flexible system which uses a combination of a geolocation database and spectrum sensing in the TV band, comparing the performance of two spectrum analysers (Agilent E4407B and Agilent EXA N9010A) with that of a low-cost Software-Defined Radio in a real radio environment. The results show that white space devices can be designed using SDRs based on the Realtek RTL2832U chip (RTL-SDR), combined with a geolocation database for identifying the primary user at a specific location, in a cost-effective manner. Furthermore, it is shown that improving the sensitivity of the RTL-SDR will affect the accuracy and performance of the WSD.
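To illustrate the simplest of the three sensing classes, energy detection, here is a minimal sketch; the Gaussian threshold approximation and the test signal are assumptions, not the thesis's calibration procedure:

```python
import numpy as np

def energy_detect(samples, noise_power, z_pfa=2.326):
    """Energy detector: declare a channel occupied when the measured
    energy exceeds a threshold set from the noise floor.

    Uses the Gaussian approximation for the sum of N squared noise
    samples (mean N*sigma^2, variance 2N*sigma^4); z_pfa = 2.326 targets
    roughly a 1% false-alarm rate. Illustrative only -- a deployed WSD
    would calibrate against the measured noise floor.
    """
    n = len(samples)
    energy = np.sum(np.abs(samples) ** 2)
    threshold = noise_power * n + z_pfa * noise_power * np.sqrt(2 * n)
    return energy > threshold

rng = np.random.default_rng(2)
noise = rng.normal(scale=1.0, size=4096)                           # vacant channel
signal = noise + 0.5 * np.sin(2 * np.pi * 0.1 * np.arange(4096))   # occupied channel
print(energy_detect(noise, 1.0), energy_detect(signal, 1.0))       # False True
```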
32

Wang, Chen-Wei. "Model-driven development of information systems." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:9d70647c-e1b6-4cbb-b88c-707f09431db6.

Abstract:
The research presented in this thesis is aimed at developing reliable information systems through the application of model-driven and formal techniques. These are techniques in which a precise, formal model of system behaviour is exploited as source code. As such a model may be more abstract, and more concise, than source code written in a conventional programming language, it should be easier and more economical to create, to analyse, and to change. The quality of the model of the system can be ensured through certain kinds of formal analysis and fixed accordingly if necessary. Most valuably, the model serves as the basis for the automated generation or configuration of a working system. This thesis provides four research contributions. The first involves the analysis of a proposed modelling language targeted at the model-driven development of information systems. Logical properties of the language are derived, as are properties of its compiled form---a guarded substitution notation. The second involves the extension of this language, and its semantics, to permit the description of workflows on information systems. Workflows described in this way may be analysed to determine, in advance of execution, the extent to which their concurrent execution may introduce the possibility of deadlock or blocking: a condition that, in this context, is synonymous with a failure to achieve the specified outcome. The third contribution concerns the validation of models written in this language by adapting existing techniques of software testing to the analysis of design models. A methodology is presented for checking model consistency, on the basis of a generated test suite, against the intended requirements. The fourth and final contribution is the presentation of an implementation strategy for the language, targeted at standard, relational databases, and an argument for its correctness, based on a simple, set-theoretic semantics for structure and operations.
33

Goad, Kenneth G. "Integration of an X-Y prober with CAD driven database and test generation software for the testing of printed circuit boards." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45664.

Abstract:

Guided probe testing of printed circuit boards is a technique that has been well developed by automatic test equipment manufacturers to pinpoint faults. Though the guided probe technique is capable of providing high diagnostic resolution, it is inefficient when performed manually. The throughput of board testing is bottlenecked by the time required for an operator to manually move a probe to a specific location on the board under test in order to measure a stimulated response. Integration of a CAD-driven X-Y prober is a way to automate guided probe testing of printed circuit boards.

This research integrates a personal computer based automated guided probe testing system. A CAD tool provides geometric and circuit connectivity information. Automatic test generation, CAD information post processing, and automatic guided probe testing software tools are developed to implement the system. The ultimate result is increased circuit board test station throughput. This makes the circuit board manufacturing process more efficient and less expensive while maintaining high quality products through more extensive testing.


Master of Science
34

Brown, Mary Erin. "Data-Driven Decision Making as a Tool to Improve Software Development Productivity." ScholarWorks, 2011. https://scholarworks.waldenu.edu/dissertations/1075.

Abstract:
The worldwide software project failure rate, based on a survey of information technology software manager's view of user satisfaction, product quality, and staff productivity, is estimated to be between 24% and 36% and software project success has not kept pace with the advances in hardware. The problem addressed by this study was the limited information about software managers' experiences with data-driven decision making (DDD) in agile software organizations as a tool to improve software development productivity. The purpose of this phenomenological study was to explore how agile software managers view DDD as a tool to improve software development productivity and to understand how agile software development organizations may use DDD now and in the future to improve software development productivity. Research questions asked about software managers', project managers', and agile coaches' lived experiences with DDD via a set of interview questions. The conceptual framework for the research was based on the 3 critical dimensions of software organization productivity improvement: people, process, and tools, which were defined by the Software Engineering Institute's Capability Maturity Model Integrated published in 2010. Organizations focus on processes to align the people, procedures and methods, and tools and equipment to improve productivity. Positive social change could result from a better understanding of DDD in an agile software development environment; this increased understanding of DDD could enable organizations to create more products, offer more jobs, and better compete in a global economy.
APA, Harvard, Vancouver, ISO, and other styles
35

Albhbah, Atia M. "Dynamic web forms development using RuleML. Building a framework using metadata driven rules to control Web forms generation and appearance." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/5719.

Full text
Abstract:
Web forms development for Web-based applications is often expensive, laborious, error-prone and time consuming, and requires a lot of effort. Web forms are used by many different people with different backgrounds and many demands. There is a very high cost associated with the need to update the Web application systems to meet these demands. A wide range of techniques and ideas to automate the generation of Web forms exist. These techniques and ideas, however, are not capable of generating the most dynamic behaviour of form elements, and make insufficient use of database metadata to control Web forms' generation and appearance. In this thesis different techniques are proposed that use RuleML and database metadata to build rulebases to improve the automatic and dynamic generation of Web forms. First, this thesis proposes the use of a RuleML-format rulebase using Reaction RuleML that can be used to support the development of automated Web interfaces. Database metadata can be extracted from system catalogue tables in typical relational database systems, and used in conjunction with the rulebase to produce appropriate Web form elements. Results show that this mechanism successfully insulates application logic from code and suggest that the method can be extended from generic metadata rules to more domain-specific rules. Second, it proposes the use of common-sense rules and domain-specific rulebases in Reaction RuleML format in conjunction with database metadata rules to extend support for the development of automated Web forms. Third, it proposes the use of rules that involve code to implement more semantics for Web forms. Separation between content, logic and presentation of Web applications has become an important issue for faster development and easy maintenance. Just as CSS is applied on the client side to control the overall presentation of Web applications, a set of rules can give a similar consistency to the appearance and operation of any set of forms that interact with the same database. We develop rules to order Web form elements and query forms using the Reaction RuleML format in conjunction with database metadata rules. The results show the potential of RuleML formats for representing database structural and active semantics. Fourth, it proposes the use of a RuleML-based approach to provide more support for greater semantics, for example advanced domain support, even when this is not a DBMS feature. The approach is to specify most of the semantics associated with data stored in RDBMSs, to overcome some RDBMS limitations. RuleML could be used to represent database metadata as an external format.
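As a rough illustration of the metadata-driven mechanism, the sketch below derives form elements from column metadata; SQLite's PRAGMA stands in for the system catalogue tables of a full RDBMS, and the dict of widget rules is a simplified stand-in for a Reaction RuleML rulebase.

```python
# Minimal sketch: read column metadata from the system catalogue and map it
# to form elements with simple rules.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT NOT NULL, born DATE)")

WIDGET_RULES = {"INTEGER": "number input", "TEXT": "text input", "DATE": "date picker"}

def form_elements(table):
    for cid, name, ctype, notnull, default, pk in con.execute(f"PRAGMA table_info({table})"):
        if pk:                       # rule: primary keys are system-assigned, not user-entered
            continue
        widget = WIDGET_RULES.get(ctype.upper(), "text input")
        required = " (required)" if notnull else ""
        yield f"{name}: {widget}{required}"

for element in form_elements("person"):
    print(element)
# name: text input (required)
# born: date picker
```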
APA, Harvard, Vancouver, ISO, and other styles
36

Yilmaz, Harun. "Identification of Academic Program Strengths and Weaknesses through Use of a Prototype Systematic Tool." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/26774.

Full text
Abstract:
Because of the rapid development of the use of computers in education, as well as the introduction of the World Wide Web (WWW), a growing number of web-based educational applications/tools have been developed and implemented to help both educators and administrators in the field of education. In order to assist program directors and faculty members in determining whether or not there is a gap between the current situation of the program and the desired situation of the program, and whether or not program objectives meet accreditation standards, there is a need for a tool that works effectively and efficiently. However, a literature review showed that there is no automated tool specifically used for determining the strengths and weaknesses of an academic program, and there is a lack of research in this area. In Chapter 1, the author's intent is to discuss the purpose behind this developmental research and to provide a literature review that serves as the basis for the design of such an automated tool. This review investigates the following issues: objectives related to programs and courses, taxonomies of educational objectives, curriculum evaluation, accreditation and standards, automated tools, and a brief collaborative create-adapt-generalize model. Chapter 2 discusses the design and development of the automated tool as well as the methodology, focusing on the instructional design model and its steps. Chapter 3 presents the results of the expert review process and possible solutions for the problems identified during that process. The Appendices include the documentation used during the expert review process.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
37

Albhbah, Atia Mahmod. "Dynamic web forms development using RuleML : building a framework using metadata driven rules to control Web forms generation and appearance." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/5719.

Full text
Abstract:
Web forms development for Web-based applications is often expensive, laborious, error-prone and time consuming, and requires a lot of effort. Web forms are used by many different people with different backgrounds and many demands. There is a very high cost associated with the need to update the Web application systems to meet these demands. A wide range of techniques and ideas to automate the generation of Web forms exist. These techniques and ideas, however, are not capable of generating the most dynamic behaviour of form elements, and make insufficient use of database metadata to control Web forms' generation and appearance. In this thesis different techniques are proposed that use RuleML and database metadata to build rulebases to improve the automatic and dynamic generation of Web forms. First, this thesis proposes the use of a RuleML-format rulebase using Reaction RuleML that can be used to support the development of automated Web interfaces. Database metadata can be extracted from system catalogue tables in typical relational database systems, and used in conjunction with the rulebase to produce appropriate Web form elements. Results show that this mechanism successfully insulates application logic from code and suggest that the method can be extended from generic metadata rules to more domain-specific rules. Second, it proposes the use of common-sense rules and domain-specific rulebases in Reaction RuleML format in conjunction with database metadata rules to extend support for the development of automated Web forms. Third, it proposes the use of rules that involve code to implement more semantics for Web forms. Separation between content, logic and presentation of Web applications has become an important issue for faster development and easy maintenance. Just as CSS is applied on the client side to control the overall presentation of Web applications, a set of rules can give a similar consistency to the appearance and operation of any set of forms that interact with the same database. We develop rules to order Web form elements and query forms using the Reaction RuleML format in conjunction with database metadata rules. The results show the potential of RuleML formats for representing database structural and active semantics. Fourth, it proposes the use of a RuleML-based approach to provide more support for greater semantics, for example advanced domain support, even when this is not a DBMS feature. The approach is to specify most of the semantics associated with data stored in RDBMSs, to overcome some RDBMS limitations. RuleML could be used to represent database metadata as an external format.
APA, Harvard, Vancouver, ISO, and other styles
38

Paffumi, Elena, Michele De Gennaro, and Giorgio Martini. "Alternative utility factor versus the SAE J2841 standard method for PHEV and BEV applications." Elsevier, 2018. https://publish.fid-move.qucosa.de/id/qucosa%3A73240.

Full text
Abstract:
This article explores the potential of using real-world driving patterns to derive PHEV and BEV utility factors and evaluates how different travel and recharging behaviours affect the calculation of the standard SAE J2841 utility factor. The study relies on six datasets of driving data collected by monitoring 508,607 conventional-fuel vehicles in six European areas and a dataset of synthetic data from 700,000 vehicles in a seventh European area. Sources representing the actual driving behaviour of PHEVs, together with the WLTP European utility factor, are adopted as terms of comparison. The results show that different datasets of driving data can yield different estimates of the utility factor. The SAE J2841 standard method proves to be representative of a large variety of behaviours of PHEV and BEV drivers, characterised by a fully charged battery at the beginning of the trip sequence, and is thus representative for fuel economy and emission estimates in the early deployment phase of EVs, charged at home and overnight. However, the results show that the SAE J2841 utility factor might need to be revised to account for more complex future scenarios, such as necessity-driven recharge behaviour with less than one recharge per day or a fully deployed recharge infrastructure with more than one recharge per day.
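For readers unfamiliar with the metric, a minimal sketch of a J2841-style utility factor computed from daily driving distances is given below; it assumes one full overnight recharge per day, and the distances are invented rather than taken from the monitored datasets described above.

```python
# Sketch of an SAE J2841-style utility factor: the fraction of total distance
# that a PHEV can cover in charge-depleting (electric) mode, assuming the
# battery is full at the start of each day.

def utility_factor(daily_km, cd_range_km):
    """Fraction of total distance drivable in charge-depleting mode."""
    cd_km = sum(min(d, cd_range_km) for d in daily_km)
    return cd_km / sum(daily_km)

days = [12, 35, 80, 20, 150, 45]           # hypothetical daily distances (km)
print(round(utility_factor(days, 50), 3))  # UF for a 50 km CD range -> 0.62
```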
APA, Harvard, Vancouver, ISO, and other styles
39

Knutsson, Tor. "Implementation and evaluation of data persistence tools for temporal versioned data models." Thesis, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-19979.

Full text
Abstract:

The purpose of this thesis was to investigate different concepts and tools which could support the development of a middleware which persists a temporal and versioned relational data model in an enterprise environment. A further requirement for the target application was that changes to the data model had to be facilitated, so that a small change to the model would not result in changes in several files and application layers. Other requirements include permissioning and audit tracing. In the thesis the reader is presented with a comparison of a set of tools for enterprise development and object/relational mapping. One of the tools, a code generator, is chosen as a good candidate to match the requirements of the project. An implementation is presented, where the chosen tool is used. An XML-based language which is used to define a data model and to provide input data for the tool is presented. Other concepts concerning the implementation are then described in detail. Finally, the author discusses alternative solutions and future improvements.
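A minimal sketch of the temporal versioning pattern such a middleware persists is shown below; the table layout, column names and timestamps are illustrative, and a real implementation would be generated from the XML model description rather than written by hand.

```python
# Temporal versioned persistence in miniature: updates never overwrite a row,
# they close the current version and insert a new one, so history is queryable.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE contract (
    id INTEGER, price REAL,
    valid_from TEXT, valid_to TEXT)""")  # valid_to NULL = current version

def insert(cid, price, now):
    con.execute("INSERT INTO contract VALUES (?, ?, ?, NULL)", (cid, price, now))

def update(cid, price, now):
    con.execute("UPDATE contract SET valid_to = ? WHERE id = ? AND valid_to IS NULL",
                (now, cid))
    insert(cid, price, now)

insert(1, 100.0, "2009-01-01")
update(1, 120.0, "2009-06-01")
for row in con.execute("SELECT * FROM contract ORDER BY valid_from"):
    print(row)  # full history: old version closed, new one still open
```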

APA, Harvard, Vancouver, ISO, and other styles
40

SILVA, Edson Alves da. "Um catálogo de regras para transformação automática de esquemas EER em código SQL-Relacional: uma visão MDD com foco em restrições estruturais não triviais." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/18303.

Full text
Abstract:
Model Driven Development (MDD) is a paradigm for the automatic generation of executable code that uses models as its primary artifact. In the database field, although rules for transforming Enhanced Entity Relationship (EER) schemas into Structured Query Language (SQL)-Relational code have already been widely explored in the literature, we did not find a work that both specifies MDD translators capable of automatically transforming EER schemas into SQL-Relational code and addresses how constraints such as Participation in Relationship, Disjunction and Completeness in Inheritance or Category are transformed into SQL-Relational structures. In this context, in order to address the mentioned limitations, this dissertation presents two main contributions: 1) a Catalog of rules to transform an EER schema into a Relational schema and the latter into SQL code; and 2) an algorithm that specifies a correct order for the automatic execution of these rules. In order to show the feasibility and practical application of this work, the Catalog of transformation rules and the algorithm for its automation are encoded in the Query/View/Transformation-Relations (QVT-R) language and implemented in the EERCASE tool. The evaluation of the work was carried out by transforming non-trivial EER schemas into SQL-Relational code, which was checked by database experts. Finally, comparing the proposed work with the related work investigated, it was found that it advances the state of the art, as it is the only one that is based on MDD and ensures that the constraints of Participation in Relationship, Disjunction and Completeness in Inheritance or Category are automatically generated to be enforced directly by the Database Management System.
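By way of illustration, the sketch below shows what one Catalog-style rule might emit for total (mandatory) participation in a 1:N relationship; the function, names and emitted DDL are simplified assumptions for exposition, not the dissertation's actual QVT-R rules.

```python
# Illustrative transformation rule: total participation of an entity in a
# 1:N relationship is compiled into NOT NULL + FOREIGN KEY in the generated SQL.

def rule_total_participation(entity, related, fk):
    return (f"CREATE TABLE {entity} (\n"
            f"  id INTEGER PRIMARY KEY,\n"
            f"  {fk} INTEGER NOT NULL,\n"          # NOT NULL enforces total participation
            f"  FOREIGN KEY ({fk}) REFERENCES {related}(id)\n"
            f");")

print(rule_total_participation("employee", "department", "department_id"))
```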
APA, Harvard, Vancouver, ISO, and other styles
41

Beggiato, Matthias. "Changes in motivational and higher level cognitive processes when interacting with in-vehicle automation." Doctoral thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-167333.

Full text
Abstract:
Many functions that at one time could only be performed by humans can nowadays be carried out by machines. Automation impacts many areas of life including work, home, communication and mobility. In the driving context, in-vehicle automation is considered to provide solutions for environmental, economic, safety and societal challenges. However, automation changes the driving task and the human-machine interaction. Thus, the expected benefit of in-vehicle automation can be undermined by changes in drivers' behaviour, i.e. behavioural adaptation. This PhD project focuses on motivational as well as higher cognitive processes underlying behavioural adaptation when interacting with in-vehicle automation. Motivational processes include the development of trust and acceptance, whereas higher cognitive processes comprise the learning process as well as the development of mental models and Situation Awareness (SA). As an example of in-vehicle automation, the advanced driver assistance system Adaptive Cruise Control (ACC) was investigated. ACC automates speed and distance control by maintaining a constant set cruising speed and automatically adjusting the vehicle's velocity in order to keep a specified distance to the preceding vehicle. However, due to sensor limitations, not every situation can be handled by the system, and driver intervention is therefore required. Trust, acceptance and an appropriate mental model of the system functionality are considered key variables for adequate use and appropriate SA. To systematically investigate changes in motivational and higher cognitive processes, a driving simulator study as well as an on-road study were carried out. Both studies used a repeated-measures design, taking into account the process character, i.e. changes over time. The main focus was on the development of trust, acceptance and the mental model of novice users when interacting with ACC. To date, only a few studies have attempted to assess changes in higher-level cognitive processes, owing to the methodological difficulties posed by the dynamic task of driving. This PhD project therefore aimed at the elaboration and validation of innovative methods for assessing higher cognitive processes, with an emphasis on SA and mental models. In addition, a new approach for analyzing big and heterogeneous data in social science was developed, based on the use of relational databases. The driving simulator study investigated the effect of divergent initial mental models of ACC (i.e., varying according to correctness) on trust, acceptance and mental model evolvement. A longitudinal study design was applied, using a two-way (3×3) repeated-measures mixed design with a matched sample of 51 subjects. Three experimental groups received (1) a correct ACC description, (2) an incomplete and idealised account omitting potential problems, and (3) an incorrect description including non-occurring problems. All subjects drove the same 56-km stretch of highway with an identical ACC system three times within a period of 6 weeks. Results showed that after using the system, participants' mental model of ACC converged towards the profile of the correct group. Non-experienced problems tended to disappear from the mental model network when they were not activated by experience. Trust and acceptance grew steadily for the correct condition. The same trend was observed for the group with non-occurring problems, starting from a lower initial level.
Omitted problems in the incomplete group led to a constant decrease in trust and acceptance without recovery. This indicates that automation failures do not negatively affect trust and acceptance if they are known beforehand. During each drive, participants continuously completed a visual secondary task, the Surrogate Reference Task (SURT). The frequency of task completion was used as an objective online measure of SA, based on the principle that a situationally aware driver would reduce engagement in the secondary task when expecting potentially critical situations. Results showed that correctly informed drivers were aware of potential system limitations and reduced their engagement in the secondary task when such situations arose. Participants with no information about limitations became aware of them only after the first encounter and reduced secondary-task engagement in corresponding situations during subsequent trials. However, trust and acceptance in the system declined over time due to the unexpected failures. Non-occurring limitations tended to drop from the mental model and resulted in reduced SA as early as the second trial. The on-road study investigated the learning process, as well as the development of trust, acceptance and the mental model for interacting with ACC in real conditions. The research questions aimed to model the learning process in mathematical/statistical terms, examine moments and conditions when these processes stabilize, and assess how experience changes the mental model of the system. A sample of fifteen drivers without ACC experience drove a test vehicle with ACC ten consecutive times on the same route within a 2-month period. In contrast to the driving simulator study, all participants were fully trained in ACC functionality by reading the owner's manual in the beginning. Results showed that learning, as well as the development of acceptance and trust in ACC, follows the power law of learning in the case of comprehensive prior information on system limitations. Thus, the major part of the learning process occurred during the first interaction with the system, and support in explaining the system's abilities (e.g. by tutoring systems) should therefore primarily be given during this first stage. All processes stabilized at a relatively high level after the fifth session, which corresponds to 185 km or 3.5 hours of driving. No decline was observable with ongoing system experience. However, in line with the findings from the simulator study, limitations that are not experienced tended to disappear from the mental model if they were not activated by experience. With regard to the validation of the developed methods for assessing mental models and SA, the results are encouraging. The studies show that the mental model questionnaire is able to provide insights into the construction of mental models and their development over time. Likewise, the implicit measurement approach to assess SA online in the driving simulator is sensitive to users' awareness of potentially critical situations. In terms of content, the results of the studies prove the enduring relevance of the initial mental model for the learning process, SA, as well as the development of trust, acceptance and a realistic mental model of automation capabilities and limitations. Given the importance of the initial mental model, it is recommended that studies on system trust and acceptance should include, and attempt to control, users' initial mental model of system functionality.
Although the results showed that even incorrect and incomplete initial mental models converged through experience towards a realistic appreciation of system functionality, the more cognitive effort was needed to update the mental model, the lower trust and acceptance became. Providing an idealised description which omits potential problems only leads to temporarily higher trust and acceptance in the beginning. The experience of unexpected limitations results in a steady decrease in trust and acceptance over time. A trial-and-error strategy for in-vehicle automation use, without accompanying information, is therefore considered insufficient for developing stable trust and acceptance. If the mental model matches experience, trust and acceptance grow steadily following the power law of learning – regardless of the experience of system limitations. Provided that such events are known in advance, they will not cause a decrease in trust and acceptance over time. Even over-information about potential problems lowers trust and acceptance only in the beginning, and not in the long run. Potential problems should therefore not be concealed in over-idealised system descriptions; the more information given, the better, in the long run. However, limitations that are not experienced tend to disappear from the mental model. Therefore, it is recommended that users be periodically reminded of system limitations to make sure that the corresponding knowledge becomes re-activated. Intelligent tutoring systems incorporated in automated systems could provide a solution. In the driving context, periodic reminders about system limitations could be shown via the multifunction displays integrated in most modern cars. Tutoring systems could also be used to remind the driver of the presence of specific in-vehicle automation systems and reveal their benefits.
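As an illustration of the power-law fit referred to above, the sketch below fits a three-parameter learning curve to invented per-session trust ratings; the functional form and starting values are assumptions for exposition, not the thesis's exact model.

```python
# Fit a power-law learning curve to per-session trust ratings: most of the
# gain happens in the first sessions, then the curve levels off.
import numpy as np
from scipy.optimize import curve_fit

def power_law(session, a, b, c):
    return c - a * session ** (-b)   # rises steeply, then approaches asymptote c

sessions = np.arange(1, 11)          # ten consecutive drives
trust = np.array([3.1, 3.9, 4.3, 4.5, 4.7, 4.7, 4.8, 4.8, 4.8, 4.9])  # invented

params, _ = curve_fit(power_law, sessions, trust, p0=(2.0, 1.0, 5.0))
a, b, c = params
print(f"asymptote ~{c:.2f}; most of the gain lies in the first sessions (b={b:.2f})")
```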
APA, Harvard, Vancouver, ISO, and other styles
42

Arditi, Valentina. "Sviluppo di un software con interfacce dinamiche per il monitoraggio e la configurazione di dispositivi medici programmabili." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
Automation, technology and digitalisation are increasingly prominent themes in the context of so-called Healthcare 4.0. This is the scenario in which the software produced during this thesis work, carried out at the Centro Protesi Inail in Vigorso di Budrio and presented here, is situated. Inail ProMoS2 (Programming-Monitoring Shapeshifter Software) was designed to be general-purpose with respect to the monitoring and programming of active medical devices which, given the high degree of variability in the biological domain, require customisation for the patient who uses them. With this product, the healthcare technician must be able to configure and reconfigure the device whenever necessary, over the course of frequent and repeated outpatient visits: hence the need to create a user-friendly interface that keeps the electronic and software complexity of the system hidden. At start-up, the software autonomously searches for the device and establishes wireless communication with it; it also interacts with a database, created to keep track of patients, devices and the corresponding electronics configurations. The distinguishing feature of Inail ProMoS2 is its capacity for generalisation: it can interface, in a fully versatile manner, with devices of different natures, communication protocols and memory organisations. The interface creation is described as dynamic because it varies according to the device with which it must operate and because it is built at runtime, with graphics adapted to the screen size of the PC from which it is launched. During the thesis work, the application was designed, implemented and tested on an experimental prototype. Developed in the Visual Studio 2017 environment, on the .NET platform and in the C# language, it is now available for installation, for research purposes, on 64-bit Windows machines at the Centro Protesi Inail.
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Gang. "Spatiotemporal Sensing and Informatics for Complex Systems Monitoring, Fault Identification and Root Cause Diagnostics." Scholar Commons, 2015. https://scholarcommons.usf.edu/etd/5727.

Full text
Abstract:
In order to cope with system complexity and dynamic environments, modern industries are investing in a variety of sensor networks and data acquisition systems to increase information visibility. Multi-sensor systems bring the proliferation of high-dimensional functional Big Data that capture rich information on the evolving dynamics of natural and engineered processes. With spatially and temporally dense data readily available, there is an urgent need to develop advanced methodologies and associated tools that will enable and assist (i) the handling of the big data communicated by contemporary complex systems, (ii) the extraction and identification of pertinent knowledge about the environmental and operational dynamics driving these systems, and (iii) the exploitation of the acquired knowledge for more enhanced design, analysis, monitoring, diagnostics and control. My methodological and theoretical research, as well as a considerable portion of my applied and collaborative work in this dissertation, aims at addressing the high-dimensional functional big data communicated by these systems. An innovative contribution of my work is the establishment of a series of systematic methodologies to investigate complex system informatics, including multi-dimensional modeling, feature extraction and selection, model-based monitoring and root-cause diagnostics. This study presents systematic methodologies to investigate the spatiotemporal informatics of complex systems, from multi-dimensional modeling and feature extraction to model-driven monitoring, fault identification and root-cause diagnostics. In particular, we developed a multiscale adaptive basis function model to represent and characterize high-dimensional nonlinear functional profiles, thereby reducing the large amount of data to a parsimonious set of variables (i.e., model parameters) while preserving the information. Furthermore, the complex interdependence structure among variables is identified by a novel self-organizing network algorithm, in which homogeneous variables are clustered into sub-network communities. We then minimize the redundancy of variables in each cluster and integrate the new set of clustered variables with predictive models to identify a sparse set of sensitive variables for process monitoring and fault diagnostics. We evaluated and validated our methodologies using real-world case studies that extract parameters from representation models of vectorcardiogram (VCG) signals for the diagnosis of myocardial infarctions. The proposed systematic methodologies are generally applicable for modeling, monitoring and diagnosis in many disciplines that involve a large number of highly redundant variables extracted from big data. The self-organizing approach was also innovatively developed to derive the steady geometric structure of a network from the recurrence-based adjacency matrix. As such, novel network-theoretic measures can be achieved based on actual node-to-node distances in the self-organized network topology.
APA, Harvard, Vancouver, ISO, and other styles
44

White, Nathan. "An Empirical Investigation into the Role that Boredom, Relationships, Anxiety, and Gratification (BRAG) Play in a Driver’s Decision to Text." NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/71.

Full text
Abstract:
Texting while driving is a growing problem that has serious, and sometimes fatal, consequences. Despite laws enacted to curb this behavior, the problem continues to grow. Discovering factors that can reduce such risky behavior can significantly contribute to research, as well as save lives and reduce property damage. This study developed a model to explore the motivations that cause a driver to send messages. The model evaluates the effects that boredom, social relationships, social anxiety, and social gratification (BRAG) have upon a driver's frequency of typing text messages. In addition, the perceived severity of the consequences and the presence of a passenger were evaluated for any moderating effects on a driver's texting. Furthermore, a set of hypotheses based on the BRAG model was presented. To investigate these hypotheses, a survey instrument was developed and data were collected from 297 respondents at a mid-sized regional university in the Pacific Northwest of the United States. Prior to the distribution of the survey, an expert panel and a pilot study were used to ensure the reliability of the instrument. Partial least squares structural equation modeling (PLS-SEM) was used to evaluate the predictive validity of the BRAG model. This evaluation included an assessment of the reflective measures, as well as a detailed analysis of the structural model. Additionally, knowledge visualization techniques were used to emphasize the significance of the findings. The results of this analysis showed that the social gratification one receives from maintaining one's social relationships is a significant predictor of texting while driving. Additionally, the results showed that drivers continued to text regardless of the consequences. However, boredom and social anxiety were not significant predictors of texting while driving. This study makes important contributions to the information systems body of knowledge and has implications for state and local lawmakers, in addition to public health officials. Prior research has shown that bored or anxious individuals use texting to relieve those feelings of discomfort; however, this study did not extend those findings to drivers. As this study found that laws banning texting while driving do not deter this behavior, public health officials and lawmakers should investigate other means of deterring texting while driving, given the significant impact it has had on the increase in fatal car accidents in recent years.
APA, Harvard, Vancouver, ISO, and other styles
45

ALMEIDA, Alexandre Cláudio de. "Um Componente para Geração e Evolução de Esquemas de Bancos de Dados como Suporte à Construção de Sistemas de Informação." Universidade Federal de Goiás, 2010. http://repositorio.bc.ufg.br/tede/handle/tde/505.

Full text
Abstract:
An Information System (IS) has three main aspects: a database, which contains the data that are processed to generate business information; application functions, which transform data into information; and business rules, which control and restrict the data manipulated by the functions. An IS evolves continuously to follow changes in the corporation, and its database must be modified to meet the new requirements. This dissertation presents a model-driven approach to generate and evolve IS databases. A software component, called Especialista em Banco de Dados (EBD), was developed. There are two mapping sets for database generation: from the Modelo de Meta Objeto (MMO), used for representing IS, to the Relational Model (RM), and from this to the SQL dialect of the PostgreSQL DBMS. The EBD component is part of a framework for modeling, building and maintaining enterprise information systems software, and provides services to the other framework components. To validate the proposed approach, software engineers developed IS using the EBD component. The dissertation's main contributions are: an approach that supports the IS database life cycle; a software architecture to generate and evolve IS database schemas; an IS data representation model (MMO); a mapping specification to generate schemas and stored procedures; and the definition of automated operation sets to evolve IS database schemas.
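To illustrate the kind of automated evolution operation such a component can offer, here is a hedged sketch in which adding an attribute emits both the ALTER TABLE statement and a regenerated insert procedure; all names and the PostgreSQL-style procedure body are illustrative assumptions, not the EBD component's actual output.

```python
# One evolution operation in miniature: schema change plus regenerated
# stored procedure, so application code keeps calling the same entry point.

def evolve_add_attribute(table, column, sql_type, existing):
    """Emit the DDL and a regenerated insert procedure for the new schema."""
    cols = existing + [(column, sql_type)]
    ddl = f"ALTER TABLE {table} ADD COLUMN {column} {sql_type};"
    args = ", ".join(f"p_{name} {typ}" for name, typ in cols)
    vals = ", ".join(f"p_{name}" for name, _ in cols)
    proc = (f"CREATE OR REPLACE FUNCTION insert_{table}({args}) RETURNS void\n"
            f"AS $$ INSERT INTO {table} ({', '.join(n for n, _ in cols)})\n"
            f"   VALUES ({vals}); $$ LANGUAGE SQL;")
    return ddl, proc

ddl, proc = evolve_add_attribute(
    "customer", "email", "VARCHAR(120)", existing=[("name", "VARCHAR(80)")])
print(ddl)
print(proc)
```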
APA, Harvard, Vancouver, ISO, and other styles
46

CARVALHO, Marcus Vinícius Ribeiro de. "UMA ABORDAGEM BASEADA NA ENGENHARIA DIRIGIDA POR MODELOS PARA SUPORTAR MERGING DE BASE DE DADOS HETEROGÊNEAS." Universidade Federal do Maranhão, 2014. http://tedebc.ufma.br:8080/jspui/handle/tede/511.

Full text
Abstract:
Model Driven Engineering (MDE) aims to cope with the development, maintenance and evolution of complex software systems, focusing on models and model transformations. This approach can be applied in other domains, such as database schema integration. In this research work, we propose a framework to integrate database schemas in the MDE context. Metamodels for defining the database model, database model matching, database model merging, and the integrated database model are proposed in order to support our framework. An algorithm for database model matching and an algorithm for database model merging are presented. We also present a prototype that extends the MT4MDE and SAMT4MDE tools in order to demonstrate the implementation of our proposed framework, methodology, and algorithms. An illustrative example helps explain the proposed framework.
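A minimal sketch of the matching step in such a merging approach is given below; simple name similarity via difflib stands in for the dissertation's metamodel-based matching algorithm, and the two toy schemas are invented.

```python
# Pair columns from two schemas by name similarity; the resulting candidate
# correspondences would feed a subsequent merge step.
from difflib import SequenceMatcher

left = {"client": ["id", "name", "birth_date"]}
right = {"customer": ["id", "full_name", "birthdate"]}

def sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

matches = []
for lt, lcols in left.items():
    for rt, rcols in right.items():
        for lc in lcols:
            best = max(rcols, key=lambda rc: sim(lc, rc))
            if sim(lc, best) > 0.6:        # illustrative acceptance threshold
                matches.append((f"{lt}.{lc}", f"{rt}.{best}", round(sim(lc, best), 2)))

for m in matches:
    print(m)   # e.g. ('client.birth_date', 'customer.birthdate', 0.95)
```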
APA, Harvard, Vancouver, ISO, and other styles
47

Aunay, Bertrand. "Apport de la stratigraphie séquentielle à la gestion et à la modélisation des ressources en eau des aquifères côtiers." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2007. http://tel.archives-ouvertes.fr/tel-00275467.

Full text
Abstract:
As places of intense economic and demographic development, coastal zones are subject to significant pressure on the environment and, in particular, on water resources. Although the management of coastal groundwater benefits from numerous results of scientific research, one of the major issues remains knowledge of the geometry of the aquifers. Geological interpretations of the genesis of the Plio-Quaternary Roussillon basin, derived from sequence stratigraphy, are confronted, through a database, with the hydrogeology of this complex hydrosystem located in the coastal part of the Pyrénées-Orientales. A statistical study of the abstraction points (distribution of well screens, borehole productivity...), functional analysis (signal processing of piezometric time series), hydrochemistry and electrical geophysics were used to develop a conceptual hydrogeological model of groundwater flow at the scale of the basin and of its extension towards the offshore domain. The presence of the sea, of zones of residual salinity and of coastal rivers contributes to increasing the salinity of an upper unconfined aquifer (Quaternary) overlying the various confined aquifers (Pliocene) exploited for drinking water in the coastal zone. The vulnerability of this good-quality resource to saline intrusion, from both a quantitative and a qualitative point of view, is assessed by modelling. In the offshore domain, the protective role of geological formations of low and medium permeability in preserving drinking-water quality is highlighted.
APA, Harvard, Vancouver, ISO, and other styles
48

"Client-Driven Dynamic Database Updates." Master's thesis, 2011. http://hdl.handle.net/2286/R.I.9521.

Full text
Abstract:
This thesis addresses the problem of online schema updates, where the goal is to be able to update relational database schemas without reducing the database system's availability. Unlike some other work in this area, this thesis presents an approach which is completely client-driven and does not require a specialized database management system (DBMS). Also, unlike other client-driven work, this approach provides support for a richer set of schema updates including vertical split (normalization), horizontal split, vertical and horizontal merge (union), difference and intersection. The update process automatically generates a runtime update client from a mapping between the old and the new schemas. The solution has been validated by testing it on a relatively small database of around 300,000 records per table and less than 1 GB, but with a limited memory buffer size of 24 MB. This thesis presents a study of the overhead of the update process as a function of the transaction rate and the batch size used to copy data from the old to the new schema. It shows that the overhead introduced is minimal for medium-sized applications and that the update can be achieved with no more than one minute of downtime.
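To make the batch-copy mechanism concrete, the sketch below performs a vertical split while copying rows in key-ordered batches; SQLite keeps the example self-contained, and the batch size and table names are illustrative rather than taken from the thesis.

```python
# Client-driven vertical split in miniature: copy rows from the old schema to
# the new one in small batches, so the system can keep serving transactions.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE old_person (id INTEGER PRIMARY KEY, name TEXT, street TEXT)")
con.executemany("INSERT INTO old_person VALUES (?, ?, ?)",
                [(i, f"p{i}", f"s{i}") for i in range(10)])
con.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE address (person_id INTEGER, street TEXT)")

BATCH = 4
last = -1
while True:
    rows = con.execute(
        "SELECT id, name, street FROM old_person WHERE id > ? ORDER BY id LIMIT ?",
        (last, BATCH)).fetchall()
    if not rows:
        break
    con.executemany("INSERT INTO person VALUES (?, ?)", [(r[0], r[1]) for r in rows])
    con.executemany("INSERT INTO address VALUES (?, ?)", [(r[0], r[2]) for r in rows])
    last = rows[-1][0]          # next batch resumes after the last copied key

print(con.execute("SELECT COUNT(*) FROM address").fetchone()[0])  # 10
```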
Dissertation/Thesis
M.S. Computer Science 2011
APA, Harvard, Vancouver, ISO, and other styles
49

Růžička, Jakub. "Automatizované odvození geometrie jízdních pruhů na základě leteckých snímků a existujících prostorových dat." Master's thesis, 2020. http://www.nusl.cz/ntk/nusl-414961.

Full text
Abstract:
The aim of the thesis is to develop a method to identify driving lanes based on aerial images and existing spatial data. The proposed method uses up-to-date available data in which it identifies road surface marking (RSM). Polygons classified as RSM are further processed to obtain their vector line representation as the first partial result. By processing the RSM vectors further, the borders of driving lanes are modelled as the second partial result. Furthermore, attempts were made to automatically distinguish between solid and broken lines, so that the resulting dataset carries a higher amount of information. The proposed algorithms were tested in 20 case-study areas and the results are presented in this thesis. The overall correctness as well as the positional accuracy prove the effectiveness of the method. However, several shortcomings were identified and are discussed, and possible solutions for them are suggested. The text is accompanied by more than 70 figures to offer a clear perspective on the topic. The thesis is organised as follows: first, the Introduction and Literature review are presented, including the problem background, the author's motivation, the state of the art and the contribution of the thesis. Secondly, the technical and legal requirements of RSM are presented, as well as theoretical concepts and...
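As a rough sketch of the vectorization step, the code below thins a synthetic binary RSM mask to a one-pixel centreline with scikit-image's skeletonize; the mask is invented, and a real pipeline would start from classified aerial imagery.

```python
# Thin polygons classified as road surface marking (RSM) to one-pixel
# centrelines, from which vector lines can then be traced.
import numpy as np
from skimage.morphology import skeletonize

mask = np.zeros((20, 60), dtype=bool)
mask[8:12, 5:55] = True                 # a 4-pixel-wide painted line
centreline = skeletonize(mask)
ys, xs = np.nonzero(centreline)
print(list(zip(xs, ys))[:5])            # pixel chain to convert into a vector line
```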
APA, Harvard, Vancouver, ISO, and other styles
50

Sukchotrat, Thuntee. "Data mining-driven approaches for process monitoring and diagnosis." 2008. http://hdl.handle.net/10106/1827.

Full text
APA, Harvard, Vancouver, ISO, and other styles