Dissertations / Theses on the topic 'Digital data collection'


Consult the top 38 dissertations / theses for your research on the topic 'Digital data collection.'


1

Brewer, Peter W., and Christopher H. Guiterman. "A new digital field data collection system for dendrochronology." Laboratory of Tree-Ring Research, University of Arizona, 2016. http://hdl.handle.net/10150/622364.

Abstract:
A wide variety of information or 'metadata' is required when undertaking dendrochronological sampling. Traditionally, researchers record observations and measurements in field notebooks and/or on paper recording forms, and use digital cameras and hand-held GPS devices to capture images and record locations. In the lab, field notes are often manually entered into spreadsheets or personal databases, which are then sometimes linked to images and GPS waypoints. This process is both time-consuming and prone to human and instrument error. Specialised hardware technology exists to marry these data sources, but costs can be prohibitive for small-scale operations (>$2000 USD). Such systems often include proprietary software that is tailored to very specific needs and might require a high level of expertise to use. We report on the successful testing and deployment of a dendrochronological field data collection system utilising affordable off-the-shelf devices ($100-300 USD). The method builds upon established open-source software that has been widely used in developing countries for public health projects as well as to assist in disaster recovery operations. It includes customisable forms for digital data entry in the field, and a marrying of accurate GPS locations with geotagged photographs (with possible extensions to other measuring devices via Bluetooth) into structured data fields that are easy to learn and operate. Digital data collection is less prone to human error and efficiently captures a range of important metadata. In our experience, the hardware proved field-worthy in terms of size, ruggedness, and dependability (e.g., battery life). The system integrates directly with the Tellervo software to both create forms and populate the database, providing end users with the ability to tailor the solution to their particular field data collection needs.
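The marrying of form entries, GPS fixes, and geotagged photographs described above amounts to one structured record per sample. The sketch below is purely illustrative; the field names are hypothetical and are not taken from the Tellervo schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class FieldSample:
    # Hypothetical record marrying form entries with GPS and photo metadata
    site_code: str
    species: str
    cores_taken: int
    lat: float          # decimal degrees from the device GPS
    lon: float
    photo_file: str     # geotagged image captured on the same device
    notes: str = ""

sample = FieldSample("ABC-01", "Pinus ponderosa", 2, 32.2319, -110.9501,
                     "IMG_0042.jpg", "lightning scar on north face")
record = asdict(sample)   # structured dict, ready for export as JSON
```

Keeping each sample as one record like this is what removes the error-prone step of manually reuniting notebooks, waypoints, and photos back in the lab.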
2

Smith, Darren C., and Dean Tenderholt. "Development Goals for a Digital Airborne Recorder." International Foundation for Telemetering, 1997. http://hdl.handle.net/10150/609827.

Abstract:
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada
This paper addresses the development requirements for a digital recorder to be used in fighter and attack helicopter applications. The development is focused on tri-service requirements, allowing a common system to meet the needs of various test centers.
3

Monte, Jamie Marie, and Jamie Marie Monte. "ROCK MASS CHARACTERIZATION USING LASER SCANNING AND DIGITAL IMAGING DATA COLLECTION TECHNIQUES." Thesis, The University of Arizona, 2004. http://hdl.handle.net/10150/621370.

Abstract:
The primary focus of this research is to evaluate whether laser scanning and digital imaging can provide a reliable means to collect essential rock mass data. Simulated and field case studies were conducted to determine if fracture orientation data (dip angle and dip direction) can be accurately estimated from a laser-generated three-dimensional point cloud. Orientations measured with a Brunton compass were compared to values derived from point clouds. The difference in dip direction was within three degrees and as high as twelve degrees for the dip angle. When fracture sets were estimated for both field and laser data, good correlation in mean set orientation and set distribution was observed. Some sets recorded during field mapping were absent in stereo plots of laser-derived data due to a shadow zone created during scanning. This indicated that scanning from multiple locations is necessary to reduce potentially missed data. This thesis also investigated whether the newly proposed Digital Rock Mass Rating (DRMR) system could classify rock masses similarly to established systems such as the Geological Strength Index (GSI). The seven DRMR parameters, including fracture spacing, length, large-scale roughness, block volume, rock bridge percent, and rock mass texture, were calculated for images of poor to good rock masses. When DRMR values were compared to GSI ratings estimated during field work, good correlation was seen for good-quality rock masses (GSI between 40 and 60). The DRMR overestimated ratings for outcrops with GSI values less than 40, indicating that the rating system may not be applicable to poor-quality rock masses. Additional case studies are needed to further validate the DRMR classification system.
4

Fröderberg, Shaiek Emma. "Excessive Data Collection as an Abuse of Dominant Position." Thesis, Stockholms universitet, Juridiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-194959.

5

Yee, Tze-Sung. "A hardware based optical digital code scanning system." Ohio : Ohio University, 1988. http://www.ohiolink.edu/etd/view.cgi?ohiou1182536210.

6

Tidball, John E. "REAL-TIME HIGH SPEED DATA COLLECTION SYSTEM WITH ADVANCED DATA LINKS." International Foundation for Telemetering, 1997. http://hdl.handle.net/10150/609754.

Abstract:
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The purpose of this paper is to describe the development of a very high-speed instrumentation and digital data recording system. The system converts multiple asynchronous analog signals to digital data, forms the data into packets, transmits the packets across fiber-optic lines, and routes the data packets to destinations such as high-speed recorders, hard disks, Ethernet, and data processing. This system is capable of collecting approximately one hundred megabytes per second of filtered packetized data. The significant system features are its design methodology, system configuration, decoupled interfaces, data as packets, the use of RACEway data and VME control buses, distributed processing on mixed-vendor PowerPCs, real-time resource management objects, and an extendible and flexible configuration.
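To illustrate what "data as packets" can look like in practice, here is a minimal sketch of packing and parsing a fixed-size packet header. The field layout (channel id, sequence number, payload length, timestamp) is an assumption chosen for illustration, not the format used in the system the paper describes.

```python
import struct

# Hypothetical fixed-size header: channel id, sequence number,
# payload length in bytes, and a timestamp in nanoseconds.
HEADER_FMT = ">HIHQ"                       # big-endian uint16, uint32, uint16, uint64
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 16 bytes

def make_packet(channel: int, seq: int, t_ns: int, payload: bytes) -> bytes:
    # Prepend the header so a receiver can route, order, and delimit packets
    return struct.pack(HEADER_FMT, channel, seq, len(payload), t_ns) + payload

def parse_packet(packet: bytes):
    channel, seq, length, t_ns = struct.unpack_from(HEADER_FMT, packet)
    return channel, seq, t_ns, packet[HEADER_SIZE:HEADER_SIZE + length]

pkt = make_packet(3, 42, 1_000_000, b"\x01\x02\x03\x04")
```

A self-describing header like this is what lets the routing stage send each packet to recorders, disks, or Ethernet without inspecting the payload.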
7

Stykow, Henriette. "Small data on a large scale : Torn between convenience and surveillance." Thesis, Umeå universitet, Designhögskolan vid Umeå universitet, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-110630.

Abstract:
Technology has become an inherent part of our daily lives. If we do not want to abstain from the benefits technology brings, we have to acknowledge the fact that technology generates data, and adjust our norms and habits accordingly. This thesis critiques how corporations and governmental institutions collect, store, and analyze data about individuals. It discusses the economic and technological forces behind the collection and usage of data in the past, today, and the near future. Beyond that, it alludes to political implications. The overarching goal is to stimulate reflection about culture and the future. To achieve that, the design of an interactive educational web story within the browser is proposed. A curated personal data platform, in combination with interactive web stories, makes data collection, data usage, and the risks of data aggregation visible. Business practices and interests are rendered transparent on the basis of users' actual online behavior and exposure. The web stories allow users to understand the meaning and value of the data traces they leave online. In five chapters, users experience the basic technologies of the Internet, business motivations, and surveillance practices in the context of their individual web browsing behavior. Each chapter invites them to explore details of the topic to accommodate individual need and interest in the matter. A critical reflection on the future of data collection is encouraged, and tools and settings within the browser help users protect their digital identities.
8

Palencia, Arreola Daniel Heriberto. "Arguments for and field experiments in democratizing digital data collection : the case of Flocktracker." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121749.

Abstract:
Thesis: M.C.P., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages [127]-131).
Data is becoming increasingly relevant to urban planning, serving as a key input for many conceptions of a "smart city." However, most urban data generation results from top-down processes, driven by government agencies or large companies. This provides limited opportunities for citizens to participate in the ideation and creation of the data used to ultimately gain insights into, and make decisions about, their communities. Digital community data collection can give more inputs to city planners and decision makers while also empowering communities. This thesis derives arguments from the literature about why it would be helpful to have more participation from citizens in data generation and examines digital community mapping as a potential niche for the democratization of digital data collection.
In this thesis, I examine one specific digital data collection technology, Flocktracker, a smartphone-based tool developed to allow users with no technical background to set up and generate their own data collection projects. I define a model of how digital community data collection could be "democratized" with the use of Flocktracker. The model envisions a process in which "seed" projects lead to a spreading of Flocktracker's use across the sociotechnical landscape, eventually producing self-sustaining networks of data collectors in a community. To test the model, the experimental part of this research examines four different experiments using Flocktracker: one in Tlalnepantla, Mexico and three in Surakarta, Indonesia. These experiments are treated as "seed" projects in the democratization model and were set up in partnership with local NGOs.
The experiments were designed to help understand whether citizen participation in digital community mapping events might affect their perceptions about open data and the role of participation in community data collection and whether this participation entices them to create other community datasets on their own, thus starting the democratization process. The results from the experiments reveal the difficulties in motivating community volunteers to participate in technology-based field data collection. While Flocktracker proved easy enough for the partner organizations to create data collection projects, the technology alone does not guarantee participation. The envisioned "democratization" model could not be validated. Each of the experiments had relatively low levels of participation in the community events that were organized.
This low participation, in turn, led to inconclusive findings regarding the effects of community mapping on participants' perceptions and on the organizations themselves. Nonetheless, numerous insights emerge, providing lessons for the technology and how it might be better used in the future to improve digital community mapping events.
by Daniel Heriberto Palencia Arreola.
9

Pellegrino, Gregory S. "Design of a Low-Cost Data Acquisition System for Rotordynamic Data Collection." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/1978.

Abstract:
A data acquisition system (DAQ) was designed based on the use of a STM32 microcontroller. Its purpose is to provide a transparent and low-cost alternative to commercially available DAQs, providing educators a means to teach students about the process through which data are collected as well as the uses of collected data. The DAQ was designed to collect data from rotating machinery spinning at a speed up to 10,000 RPM and send this data to a computer through a USB 2.0 full-speed connection. Multitasking code was written for the DAQ to allow for data to be simultaneously collected and transferred over USB. Additionally, a console application was created to control the DAQ and read data, and MATLAB code written to analyze the data. The DAQ was compared against a custom assembled National Instruments CompactDAQ system. Using a Bentley-Nevada RK 4 Rotor Kit, data was simultaneously collected using both DAQs. Analysis of this data shows the capabilities and limitations of the low cost DAQ compared to the custom CompactDAQ.
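The simultaneous collection and USB transfer described here is a classic producer/consumer arrangement. The following sketch mimics that structure in Python, with a bounded queue standing in for the DAQ's internal buffer; it is a conceptual illustration of the multitasking pattern, not the STM32 firmware itself.

```python
import queue
import threading

buf = queue.Queue(maxsize=256)   # bounded buffer between sampling and USB transfer

def sampler(n):
    # Stand-in for the acquisition task filling the buffer with samples
    for i in range(n):
        buf.put(i)               # blocks if the transmitter falls behind
    buf.put(None)                # sentinel: acquisition finished

received = []

def transmitter():
    # Stand-in for the USB bulk-transfer task draining the buffer
    while True:
        item = buf.get()
        if item is None:
            break
        received.append(item)

t1 = threading.Thread(target=sampler, args=(1000,))
t2 = threading.Thread(target=transmitter)
t1.start(); t2.start()
t1.join(); t2.join()
```

The bounded queue gives the same back-pressure behavior a real DAQ needs: if the transfer side stalls, the sampler blocks rather than silently overwriting data.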
10

Swartz, Clinton Keith. "Digital data collection and analysis: what are the effects on students' understanding of chemistry concepts." Montana State University, 2012. http://etd.lib.montana.edu/etd/2012/swartz/SwartzC0812.pdf.

Abstract:
In this project, digital data collection and analysis methods were implemented to determine their effects on student understanding of chemistry concepts, data analysis and conclusion-making skills, and motivation. Teacher attitude and motivation were also determined. The students included in the project were from a 10th grade chemistry class of 25 students. Students completed a non-treatment unit in which data collection and analysis were completed without the use of technology. Digital data collection and analysis were then added to experiments and class activities during two treatment units. The digital data collection and analysis tools included data collection interfaces and probes, graphing software, and simulations. The non-treatment unit and treatment units were then compared to determine the effectiveness of the intervention. Students' understanding of chemistry concepts, data analysis and conclusion making, and motivation increased slightly after the treatment units. Teacher attitude and motivation also showed an increase. This project showed that the use of digital data collection and analysis has positive effects on both the students and the teacher.
11

Jones, James R. "A Client-Server Architecture for Collection of Game-based Learning Data." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/51229.

Abstract:
Advances in information technology are driving massive improvements in the education industry. The ubiquity of mobile devices has triggered a shift in the delivery of educational content. More lessons in a wide range of subjects are being disseminated by allowing students to access digital materials through mobile devices. One of the key materials is digital educational games. These games merge education with digital games to maximize engagement while somewhat obfuscating the learning process. Their effectiveness is generally measured by assessments, either after or during gameplay, in the form of quizzes, data dumps, and/or manual analyses. Valuable gameplay information is lost during the student's play sessions. This gameplay data provides educators and researchers with the specific gameplay actions students perform in order to arrive at a solution, not just the correctness of the solution. This problem illustrates the need for a tool enabling educators and players to quickly analyze gameplay data in conjunction with correctness, in an unobtrusive manner, while the student is playing the game. This thesis describes a client-server software architecture that enables the collection of game-based data during gameplay. We created a collection of web services that enables games to transmit game data for analysis. Additionally, the web application provides players with a portal to log in and view various visualizations of the captured data. Lastly, we created a game called "Taffy Town", a mathematics-based game that requires the player to manipulate taffy pieces in order to solve various fractions. Taffy Town transmits students' taffy transformations along with correctness to the web application. Students are able to view several dynamically created visualizations of the data sent by Taffy Town. Researchers are able to log in to the web application and see the same visualizations, aggregated across all Taffy Town players.
This end-to-end mapping of problems, actions, and results will enable researchers, pedagogists, and teachers to improve the effectiveness of educational games.
Master of Science
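The kind of gameplay record a game like Taffy Town might transmit to such web services can be sketched as a small JSON payload. The field names here are hypothetical, chosen for illustration rather than taken from the thesis's actual schema.

```python
import json
import time

def make_event(player_id, level, action, correct):
    # Hypothetical gameplay event: one player action plus its correctness,
    # the kind of record a client game could POST to the collection service.
    return {
        "player": player_id,
        "level": level,
        "action": action,      # e.g. a taffy "split" or "combine"
        "correct": correct,
        "ts": time.time(),     # client timestamp for ordering actions
    }

event = make_event("student42", "fractions-1", "split", True)
payload = json.dumps(event)    # body for an HTTP POST to the server
```

Logging every action with a timestamp, rather than only the final answer, is what lets the server reconstruct the path a student took to a solution.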
12

Vidiksis, Adam. "Vidiksis-Transfigurations [Digital File]." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/253842.

Abstract:
Music Composition
D.M.A.
Transfigurations is a symphonic work in one movement for orchestra and live computer processing utilizing the graphical audio programming language Pure Data. The score and patch for this piece are accompanied by an essay describing the audio processing techniques and the compositional processes employed in the work. Programming methods discussed include strategies for data capture, patch structure, user interface, and processor management. All audio processing in the work is realized in real time. These sounds are derived directly from the orchestra in performance, except for the last. The processes involved in Transfigurations include pitch and amplitude tracking, pitch-shifting, filtering, frequency and amplitude modulation, granular synthesis, delay, and convolution. The final sounds from the computer employ stochastic processes for synthesis which are derived from the germinal materials of the piece. The essay also discusses the aesthetic philosophy and formal structure of the work, principal themes and motives, and formative pitch materials, as well as the compositional processes in each section. The final discourse of the essay considers microphone and loudspeaker setups, patch preparation and leveling, and strategies for rehearsal and performance.
Temple University--Theses
13

Tidball, Kyle D. "The design of an FPGA based embedded data collection system, with application to surface profiling." Thesis, Kansas State University, 2012. http://hdl.handle.net/2097/14131.

Abstract:
Master of Science
Department of Electrical and Computer Engineering
Dwight D. Day
Over the last several years, the use of Field Programmable Gate Arrays, or FPGAs, has become increasingly popular in the embedded systems field. However, FPGAs are typically used only as a coprocessor or dedicated DSP. This project proposes that an embedded system can realize a performance gain over a traditional microprocessor-based design, and be made more flexible and extensible, by using an FPGA as the primary processing device. Basing a design on an FPGA also allows new features to be much more rapidly developed and integrated into the system. This will be shown by designing an FPGA-based embedded system for Surface Systems & Instruments’ Walking Profiler device. The system will include support for rotary encoders, an incline sensor for data collection, and an Ethernet protocol for communication with a Windows computer. The implementation of a sub-sampling distance-measuring algorithm will be used to demonstrate the tradeoffs between hardware, software, and development time.
14

Mott, Dana. "Developing Participant Investment within Digital Interactive Stories." Honors in the Major Thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/782.

Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf
B.S.
Bachelors
Arts and Sciences
Digital Media
15

Khalilikhah, Majid. "Traffic Sign Management: Data Integration and Analysis Methods for Mobile LiDAR and Digital Photolog Big Data." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/4744.

Abstract:
This study links traffic sign visibility and legibility to quantify the effects of damage or deterioration on sign retroreflective performance. In addition, this study proposes GIS-based data integration strategies to obtain and extract climate, location, and emission data for in-service traffic signs. The proposed data integration strategy can also be used to assess all transportation infrastructures’ physical condition. Additionally, non-parametric machine learning methods are applied to analyze the combined GIS, Mobile LiDAR imaging, and digital photolog big data. The results are presented to identify the most important factors affecting sign visual condition, to predict traffic sign vandalism that obstructs critical messages to drivers, and to determine factors contributing to the temporary obstruction of the sign messages. The results of data analysis provide insight to inform transportation agencies in the development of sign management plans, to identify traffic signs with a higher likelihood of failure, and to schedule sign replacement.
16

Lommen, Candice M. "How does the use of digital photography affect student observation skills and data collection during outdoor field studies?" Montana State University, 2012. http://etd.lib.montana.edu/etd/2012/lommen/LommenC0812.pdf.

Abstract:
The purpose of this project was to determine if adding digital photography as a tool for collecting data during outdoor field study would increase student engagement and also improve the quality of the data students brought back to the classroom. Too often my students would come in from the field with data that focused on surface or irrelevant features. They were unable to use their data to make connections to the ecology concepts we were learning in the classroom. During the non-treatment phase of the study, students recorded all of their data through drawings and written observations. While at their plots, students inventoried the vegetation present and also took specific measurements such as tree circumference, canopy cover and invasive plant cover. Before taking the cameras out to the field, students practiced with the macro settings to take close up pictures of vegetation brought into the classroom. During the treatment phase, students took digital cameras out to their new plots to inventory and measure plants. Student engagement data was measured using a self-assessment questionnaire, outside observer behavior checklist and teacher field journal. Although interest and engagement were high for most students during the entire study, students who were not initially engaged in the field study activities reported higher engagement levels when cameras were used. The outside observer and teacher journal data supported this finding. The quality of student data was measured using both the student self-assessment questionnaire and drawing or photo rubrics. Rubric scores increased when students used photographs, rather than drawings, to write observations. Students felt they had more to write about when looking at their pictures as compared to their drawings. Interestingly, students reported they wrote less while at their plots when they had the camera, relying on their pictures to tell the story of their plot. 
Using photos only slightly increased students' ability to positively identify their plants. Pictures lacked those complex features that would enable students to easily work their way through a basic key. To increase the complexity of observations, additional content knowledge about plant structure and ecology is needed.
17

Harbour, Kenton Dean. "A data acquisition system with switched capacitor sample-and-hold." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/15269.

18

Loverus, Anna, and Paulina Tellebo. "There ain ́t no such thing as a free lunch : What consumers think about personal data collection online." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-315656.

Abstract:
This study examines how consumers reason about, and what their opinions are of, personal data collection online. Its focus is to investigate whether consumers consider online data collection to be an issue with moral implications, and whether these are unethical. This focus is partly motivated by the contradiction between consumers' stated opinions and their actual behavior, which differ. To meet its purpose, the study poses the research question: How are personal data collection and its prevalence online perceived and motivated by consumers? The theoretical framework consists of the Issue-Contingent Model of Ethical Decision-Making by Jones (1991), thus putting the model to use in a new context. Data for the study were collected through focus groups, since Jones' model places ethical decision-making in a social context. The results of the study showed that consumers acknowledge both positive and negative aspects of online data collection, but the majority of them do not consider this data collection to be unethical. This result partly confirms the behaviour that consumers already display, but does not explain why their stated opinions do not match it. Thus, this study can be seen as an initial attempt at clarifying consumer reasoning on personal data collection online, with potential for future studies to further investigate and understand consumer online behaviour.
19

Trapl, Erika Shaun. "UNDERSTANDING ADOLESCENT SURVEY RESPONSES: IMPACT OF MODE AND OTHER CHARACTERISTICS ON DATA OUTCOMES AND QUALITY." Case Western Reserve University School of Graduate Studies / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=case1175893391.

20

Bates, Lakesha. "ANALYSIS OF TIME SYNCHRONIZATION ERRORS IN HIGH DATA RATE ULTRAWIDEBAND." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2582.

Abstract:
Emerging Ultra Wideband (UWB) Orthogonal Frequency Division Multiplexing (OFDM) systems hold the promise of delivering wireless data at high speeds, exceeding hundreds of megabits per second over typical distances of 10 meters or less. The purpose of this thesis is to estimate the timing accuracies required with such systems in order to achieve Bit Error Rates (BER) on the order of 10^-12, and thereby confine the irreducible errors due to misaligned timing to a small absolute number of bits in error in real time, relative to a data rate of hundreds of megabits per second. Our research approach involves managing bit error rates through identifying maximum timing synchronization errors. Thus, it became our research goal to determine the timing accuracies required to avoid operating communication systems within the asymptotic region of BER flaring at low BERs in the resultant BER curves. We propose pushing physical-layer bit error rates below 10^-12 before using forward error correction (FEC) codes. This way, the maximum reserve is maintained for the FEC hardware to correct burst as well as recurring bit errors caused by factors other than timing synchronization errors.
M.S.E.E.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Electrical Engineering
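To put the 10^-12 target in perspective, the expected number of bit errors per second is simply the data rate times the BER. A quick back-of-the-envelope check (the 480 Mbit/s rate is assumed for illustration; the abstract only specifies "hundreds of megabits per second"):

```python
# Expected bit errors per second = data rate (bits/s) x bit error rate.
rate_bps = 480e6          # assumed example rate: 480 Mbit/s
ber = 1e-12               # target physical-layer BER from the abstract
errors_per_second = rate_bps * ber          # 4.8e-4 errors/s
seconds_per_error = 1.0 / errors_per_second # roughly one bit error every ~35 min
```

At that rate the FEC hardware sees essentially no timing-induced errors, leaving its full correcting capacity for burst and other recurring error sources, which is exactly the reserve the abstract argues for.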
21

Badawi, Hawazin Faiz. "DT-DNA: Devising a DNA Paradigm for Modeling Health Digital Twins." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/41906.

Abstract:
The potential of digital twin (DT) technology outside of the industrial field has been recognized by researchers, who have promoted the vision of applying DT technology beyond manufacturing to purposes such as enhancing human well-being and improving quality of life (QoL). The expansion of the definition of DTs to incorporate living as well as nonliving physical entities was a key motivation behind the model introduced in this thesis for building health digital twins of citizens. In contrast with DTs that have been developed in more industrial fields, this type of digital twin modeling necessitates protecting each citizen's unique identity while also representing features common to all citizens in a unified way. In nature, DNA is an example of a model that is both unified, common to all humans, and unique, distinguishing each human as an individual. DNA's architecture is what inspired us to propose a digital twin DNA (DT-DNA) model as the basis for building health DTs for citizens. A review of the literature shows that no unified model for citizens' health has been developed that can act as a base for building digital twins of citizens while also protecting their unique identity; we aim to fill this gap in this research. Accordingly, in this thesis, we propose a DT-DNA model, which is specifically designed to protect the unique identity of each citizen's digital twin, similar to what DNA does for each human. We also propose a DT-DNA-based framework to build standardized health digital twins of citizens on micro, meso, and macro levels using two ISO standards: ISO/IEEE 11073 (X73) and ISO 37120. To achieve our goal, we started by analyzing the biological DNA model and the factors shaping health in smart cities. The purpose of the former is to highlight the DNA model features which provide the building blocks for our DT-DNA model; the purpose of the latter is to determine the main bases of our DT-DNA model of health DTs.
Based on the analysis results, we proposed DT-DNA to model health DTs for citizens. In keeping with our DNA analogy, we have identified four bases, A, T, G, and C, for our unified and unique DT-DNA model. The A base in the proposed model represents a citizen's anthropometric data when we build the DT-DNA on an individual level and represents the city's regulatory authorities when we build the DT-DNA on community and city levels. The T base represents different tasks included in the provided health data that are required to model citizens' health DT-DNA on different levels. The G base represents the geographic and temporal information of the city, where the citizen exists at the time of data collection. The C base represents the context at the time of data collection. To prove the concept, we present our initial work on building health DTs for citizens in four case studies. The first two case studies are dedicated to health DTs at the micro level, the third case study is dedicated to health DTs at the meso level and the fourth case study is dedicated to health DTs at the macro level. In addition, we developed an algorithm to compare cities in terms of their community fitness and health services status. The four case studies provide promising results in terms of the applicability of the proposed DT-DNA model and framework in handling the health data of citizens, communities and cities, collected through various sources, and presenting them in a standardized, unique model.
APA, Harvard, Vancouver, ISO, and other styles
22

Ciprys, Michal. "Systém pro sběr dat s Raspberry Pi." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400631.

Full text
Abstract:
This work deals with the collection of data from analog sensors and their storage and display using the Raspberry Pi microcomputer. In more detail, it deals with selecting the appropriate analog-to-digital converter, the appropriate storage and database server, and the web server and application used to display the measured data.
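The acquisition chain described here starts with reading raw counts from an analog-to-digital converter and scaling them into physical units before storage. As a minimal sketch of that scaling step, assuming a hypothetical 10-bit converter (an MCP3008-class chip, for example; the abstract does not name the actual part) and a 3.3 V reference:

```python
def adc_to_voltage(raw: int, vref: float = 3.3, bits: int = 10) -> float:
    """Convert a raw ADC count to a voltage.

    Assumes a hypothetical 10-bit, 3.3 V full-scale converter;
    the actual hardware selected in the thesis may differ.
    """
    if not 0 <= raw < 2 ** bits:
        raise ValueError(f"raw count {raw} out of range for {bits}-bit ADC")
    # Full-scale count (2^bits - 1) corresponds to vref
    return raw * vref / (2 ** bits - 1)

# Mid-scale reading on a 10-bit, 3.3 V converter
print(round(adc_to_voltage(511), 3))  # ≈ 1.648 V
```

The same formula generalises to any resolution by changing `bits`; the converter resolution and reference voltage actually chosen in the thesis may of course differ.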
APA, Harvard, Vancouver, ISO, and other styles
23

McBryde, J. D. "Experimental and numerical modelling of gravity currents preceding backdrafts /." Christchurch, N.Z. : Dept. of Civil Engineering, University of Canterbury, 2008. http://digital-library.canterbury.ac.nz/data/collection3/etd/adt-NZCU20080116.132247/.

Full text
Abstract:
"A thesis submitted in partial fulfilment of the requirements for the degree of Master of Engineering in Fire Engineering."
Includes bibliographical references (p. 209-215). Also available via the World Wide Web.
APA, Harvard, Vancouver, ISO, and other styles
24

Kräutli, Florian. "Visualising cultural data : exploring digital collections through timeline visualisations." Thesis, Royal College of Art, 2016. http://researchonline.rca.ac.uk/1774/.

Full text
Abstract:
This thesis explores the ability of data visualisation to enable knowledge discovery in digital collections. Its emphasis lies on time-based visualisations, such as timelines. Although timelines are among the earliest examples of graphical renderings of data, they are often used merely as devices for linear storytelling and not as tools for visual analysis. Investigating this type of visualisation reveals the particular challenges of digital timelines for scholarly research. In addition, the intersection between the key issues of time-wise visualisation and digital collections acts as a focal point. Departing from authored temporal descriptions in collections data, the research examines how curatorial decisions influence collections data and how these decisions may be made manifest in timeline visualisations. The thesis contributes a new understanding of the knowledge embedded in digital collections and provides practical and conceptual means for making this knowledge accessible and usable. The case is made that digital collections are not simply representations of physical archives. Digital collections record not only what is known about the content of an archive. Collections data contains traces of institutional decisions and curatorial biases, as well as data related to administrative procedures. Such ‘hidden data’ – information that has not been explicitly recorded, but is nevertheless present in the dataset – is crucial for drawing informed conclusions from digitised cultural collections and can be exposed through appropriately designed visualisation tools. The research takes a practice-led and collaborative approach, working closely with cultural institutions and their curators. Functional prototypes address issues of visualising large cultural datasets and the representation of uncertain and multiple temporal descriptions that are typically found in digital collections.
The prototypes act as means towards an improved understanding of and a critical engagement with the time-wise visualisation of collections data. Two example implementations put the design principles that have emerged into practice and demonstrate how such tools may assist in knowledge discovery in cultural collections. Calls for new visualisation tools that are suitable for the purposes of humanities research are widespread in the scholarly community. However, the present thesis shows that gaining new insights into digital collections does not only require technological advancement, but also an epistemological shift in working with digital collections. This shift is expressed in the kind of questions that curators have started seeking to answer through visualisation. Digitisation requires and affords new ways of interrogating collections that depart from putting the collected artefact and its creator at the centre of humanistic enquiry. Instead, digital collections need to be seen as artefacts themselves. Recognising this leads curators to address self-reflective research questions that seek to study the history of an institution and the influence that individuals have had on the holdings of a collection; questions that so far escaped their areas of research.
APA, Harvard, Vancouver, ISO, and other styles
25

Ware, Scott. "HFS Plus File System Exposition and Forensics." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5559.

Full text
Abstract:
The Macintosh Hierarchical File System Plus, HFS+, commonly referred to as Mac OS Extended, was introduced in 1998 with Mac OS 8.1. HFS+ is an update to HFS, the Mac OS Standard format, that offers more efficient use of disk space, implements internationally friendly file names, provides future support for named forks, and facilitates booting on non-Mac OS operating systems through different partition schemes. The HFS+ file system is efficient, yet complex. It makes use of B-trees to implement key data structures for maintaining metadata about folders, files, and data. What happens within HFS+ at volume format, or when folders, files, and data are created, moved, or deleted, is largely a mystery to those who are not programmers. The vast majority of information on this subject is relegated to documentation in books, papers, and online content that direct the reader to C code, libraries, and include files. If one cannot interpret the complex C or Perl code implementations, the opportunity to understand the workflow within HFS+ is less than adequate for developing a basic understanding of the internals and how they work. The basic concepts learned from this research will facilitate a better understanding of the HFS+ file system and journal as changes resulting from adding and deleting files or folders are applied in a controlled, easy-to-follow process. The primary tool used to examine the file system changes is a proprietary command line interface (CLI) tool called fileXray. This tool is actually a custom implementation of the HFS+ file system that has the ability to examine file system, metadata, and data level information that isn't available in other tools. We also use Apple's command line interface tool, Terminal, the WinHex graphical user interface (GUI) editor, The Sleuth Kit command line tools, and DiffFork 1.1.9 to help document and illustrate the file system changes.
The processes used to document the pristine and changed versions of the file system, with each experiment, are very similar, such that the output files are identical with the exception of the actual change. Keeping the processes the same enables baseline comparisons using a diff tool like DiffFork. Side-by-side and line-by-line comparisons of the allocation, extents overflow, catalog, and attributes files help identify where the changes occurred. The target device in this experiment is a two-gigabyte Universal Serial Bus (USB) thumb drive formatted with a GUID Partition Table. Where practical, HFS+ special files and data structures are manually parsed, documented, and illustrated.
ID: 031001395; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Title from PDF title page (viewed May 28, 2013).; Thesis (M.S.)--University of Central Florida, 2012.; Includes bibliographical references (p. 131-132).
M.S.
Masters
Computer Science
Engineering and Computer Science
Digital Forensics
APA, Harvard, Vancouver, ISO, and other styles
26

Monostori, Krisztian 1975. "Efficient computational approach to identifying overlapping documents in large digital collections." Monash University, School of Computer Science and Software Engineering, 2002. http://arrow.monash.edu.au/hdl/1959.1/8756.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Croft, David. "Semi-automated co-reference identification in digital humanities collections." Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10491.

Full text
Abstract:
Locating specific information within museum collections represents a significant challenge for collection users. Even when the collections and catalogues exist in a searchable digital format, formatting differences and the imprecise nature of the information to be searched mean that information can be recorded in a large number of different ways. This variation exists not just between different collections, but also within individual ones. This means that traditional information retrieval techniques are badly suited to the challenges of locating particular information in digital humanities collections, and searching therefore takes an excessive amount of time and resources. This thesis focuses on a particular search problem, that of co-reference identification: the process of identifying when the same real-world item is recorded in multiple digital locations. In this thesis, a real-world example of a co-reference identification problem for digital humanities collections is identified and explored, in particular the time-consuming nature of identifying co-referent records. In order to address the identified problem, this thesis presents a novel method for co-reference identification between digitised records in humanities collections. Whilst the specific focus of this thesis is co-reference identification, elements of the method described also have applications for general information retrieval. The new co-reference method uses elements from a broad range of areas, including query expansion, co-reference identification, short text semantic similarity and fuzzy logic. The new method was tested against real-world collections information, the results of which suggest that, in terms of the quality of the co-referent matches found, the new co-reference identification method is at least as effective as a manual search. The number of co-referent matches found, however, is higher using the new method.
The approach presented here is capable of searching collections stored using differing metadata schemas. More significantly, the approach is capable of identifying potential co-reference matches despite the highly heterogeneous and syntax independent nature of the Gallery, Library, Archive and Museum (GLAM) search space and the photo-history domain in particular. The most significant benefit of the new method is, however, that it requires comparatively little manual intervention. A co-reference search using it has, therefore, significantly lower person-hour requirements than a manually conducted search. In addition to the overall co-reference identification method, this thesis also presents:
• A novel and computationally lightweight short text semantic similarity metric. This new metric has a significantly higher throughput than the current prominent techniques but a negligible drop in accuracy.
• A novel method for comparing photographic processes in the presence of variable terminology and inaccurate field information. This is the first computational approach to do so.
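The throughput/accuracy trade-off claimed for the new short-text similarity metric is easiest to appreciate against the simplest lightweight baseline: token-level Jaccard similarity. The sketch below is purely illustrative and is not the metric developed in the thesis:

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Token-level Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0  # two empty strings are trivially identical
    return len(ta & tb) / len(ta | tb)

# Two catalogue descriptions of (possibly) the same photograph:
# four of the six distinct tokens are shared, so the score is 4/6
print(jaccard_similarity("albumen print of a ship",
                         "albumen print showing a ship"))
```

A baseline like this is fast but blind to synonymy ("of"/"showing" count as full mismatches), which is precisely the gap a semantic similarity metric aims to close.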
APA, Harvard, Vancouver, ISO, and other styles
28

Williams, Vicki Higginbotham. "Isometric forces transmitted by the digits: data collection using a standardized protocol." Thesis, Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/53199.

Full text
Abstract:
Data collection on isometric forces exerted by means of the digits is a virtually untapped research area. However, such data would prove particularly useful in areas such as hand-tool and control design, and also in medical evaluation. A standardized protocol is necessary if a sound, useful database is to be built. This study developed such a protocol, and data were collected using the defined protocol. The study also showed that occupational level (defined by tools and controls used) and gender both had significant effects on certain strength exertions of the digits. Therefore the appropriate data must be collected, depending on the intended use and user population. Regression equations were produced which predicted the strength exertions using anthropometric measurements that are commonly available. Although some particular exertions were not well predicted, the potential of prediction was verified.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
29

Comrie, Fiona S. "An evaluation of the effectiveness of tailored dietary feedback from a novel online dietary assessment method for changing the eating habits of undergraduate students." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources, 2008. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=25224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Knudson, Ryan Charles. "Automatic Language Identification for Metadata Records: Measuring the Effectiveness of Various Approaches." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc801895/.

Full text
Abstract:
Automatic language identification has been applied to short texts such as queries in information retrieval, but it has not yet been applied to metadata records. Applying this technology to metadata records, particularly their title elements, would enable creators of metadata records to obtain a value for the language element, which is often left blank due to a lack of linguistic expertise. It would also enable the addition of the language value to existing metadata records that currently lack a language value. Titles lend themselves to the problem of language identification mainly due to their shortness, a factor which increases the difficulty of accurately identifying a language. This study implemented four proven approaches to language identification as well as one open-source approach on a collection of multilingual titles of books and movies. Of the five approaches considered, a reduced N-gram frequency profile and distance measure approach outperformed all others, accurately identifying over 83% of all titles in the collection. Future plans are to offer this technology to curators of digital collections for use.
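One of the approaches evaluated, the reduced N-gram frequency profile and distance measure, follows the classic Cavnar-Trenkle scheme: rank the most frequent character n-grams of a text and compare ranks against per-language reference profiles using an "out-of-place" distance. A toy sketch of that scheme (the tiny invented corpora below stand in for the large reference profiles a real system would train on):

```python
from collections import Counter

def ngram_profile(text: str, n: int = 3, top: int = 300) -> list[str]:
    """Rank the most frequent character n-grams of a text."""
    text = f" {text.lower()} "
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return [g for g, _ in grams.most_common(top)]

def out_of_place(profile: list[str], reference: list[str]) -> int:
    """Cavnar-Trenkle distance: sum of rank displacements, with a
    maximum penalty for n-grams absent from the reference."""
    penalty = len(reference)
    ref_rank = {g: i for i, g in enumerate(reference)}
    return sum(abs(i - ref_rank.get(g, penalty)) for i, g in enumerate(profile))

def identify(title: str, references: dict[str, list[str]]) -> str:
    """Pick the language whose reference profile is closest to the title's."""
    prof = ngram_profile(title)
    return min(references, key=lambda lang: out_of_place(prof, references[lang]))

# Toy reference profiles built from tiny corpora (illustration only)
refs = {
    "en": ngram_profile("the quick brown fox jumps over the lazy dog and the cat"),
    "de": ngram_profile("der schnelle braune fuchs springt über den faulen hund und die katze"),
}
print(identify("the silent library", refs))
```

The shortness of titles is exactly what makes this hard: a short title yields few n-grams, so a handful of matched or missed trigrams can swing the distance, which is why the study's 83% accuracy on titles is a meaningful result.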
APA, Harvard, Vancouver, ISO, and other styles
31

Bourdenet, Philippe. "L'espace documentaire en restructuration : l'évolution des services des bibliothèques universitaires." Phd thesis, Conservatoire national des arts et metiers - CNAM, 2013. http://tel.archives-ouvertes.fr/tel-00932683.

Full text
Abstract:
The catalogue occupies a privileged place among the services offered by university libraries, as the pivot of intermediation. For the past ten years it has been in deep crisis, with users abandoning it in favour of general-purpose search engines. The web, more than a serious competitor, now outpaces documentary information systems and has become the main entry point for information seeking. Libraries are trying to structure a documentary space that users will inhabit and within which their service offering can develop, but that offering still presents itself as a series of inert silos with little possibility of navigation, despite considerable engineering efforts and avenues of evolution towards discovery tools. The profession, aware of this deep crisis, having absorbed the turbulence caused by the disruptive dimension of digital technology, is looking for ways to adapt and diversify its offering, to make the flow of information more fluid, and is reinventing its intermediation role by seeking to take advantage of users' new practices, new expectations, and new perspectives. Libraries place their hope in new data models and attempt to add a level of abstraction that fosters links with the universe of knowledge. The evolution towards the semantic web seems an opportunity to seize in order to enhance collections and make them exploitable in another context, at the cost of significant efforts that this analysis attempts to measure. A constructivist approach based on participant observation and data collection offers a view from inside the library community on the evolution of catalogues and intermediation tools, and opens perspectives on what is at stake for them.
APA, Harvard, Vancouver, ISO, and other styles
32

Santos, Nuno Gonçalo Mateus. "Dark Web Module Data Collection." Master's thesis, 2018. http://hdl.handle.net/10316/83543.

Full text
Abstract:
Master's dissertation in Informatics Engineering presented to the Faculty of Sciences and Technology of the University of Coimbra.
This document is the resulting artefact of an internship proposed by Dognaedis, Lda to the University of Coimbra. Dognaedis is a cyber security company that uses the information gathered by the tools at its disposal to protect its clients. There was, however, a void that needed to be filled in the sources of information that were monitored: the dark web. In order to fill that void, this internship was created. Its goal is to specify, and implement, a solution for a dark web intelligence module for one of the company's products, Portolan. The goal of this module is to crawl websites "hidden" in anonymity networks and extract intelligence from them, in order to extend the sources of information of the platform. As a result, in this document the reader will find information that refers to the research work, which comprises the state of the art regarding web crawlers and information extractors and allowed the identification of useful techniques and technologies. The specification of the solution for the problem is also presented, including requirement analysis and architectural design. This includes the exposition of the functionalities proposed, the final architecture and the reasons behind the decisions that were made. The reader will also be presented with a description of the development methodology that was followed and a description of the implementation itself, exposing the functionalities of the module and how they were achieved. Finally, there is also an explanation of the validation process, which was conducted to ensure that the final product matched the specification.
APA, Harvard, Vancouver, ISO, and other styles
33

Nicholson, Scott. "Bibliomining for Automated Collection Development in a Digital Library Setting: Using Data Mining to Discover Web-Based Scholarly Research Works." 2003. http://hdl.handle.net/10150/106521.

Full text
Abstract:
Based on Nicholson's 2000 University of North Texas dissertation, "Creating a Criterion-Based Information Agent through Data Mining for Automated Identification of Scholarly Research on the World Wide Web," located at http://scottnicholson.com/scholastic/finaldiss.doc
This research creates an intelligent agent for automated collection development in a digital library setting. It uses a predictive model based on facets of each Web page to select scholarly works. The criteria came from the academic library selection literature, and a Delphi study was used to refine the list to 41 criteria. A Perl program was designed to analyze a Web page for each criterion and applied to a large collection of scholarly and non-scholarly Web pages. Bibliomining, or data mining for libraries, was then used to create different classification models. Four techniques were used: logistic regression, non-parametric discriminant analysis, classification trees, and neural networks. Accuracy and return were used to judge the effectiveness of each model on test datasets. In addition, a set of problematic pages that were difficult to classify because of their similarity to scholarly research was gathered and classified using the models. The resulting models could be used in the selection process to automatically create a digital library of Web-based scholarly research works. In addition, the technique can be extended to create a digital library of any type of structured electronic information.
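Of the four mining techniques listed, logistic regression is the simplest to illustrate: each page is scored from binary criteria and a learned weight per criterion. The features and weights below are invented for illustration (the actual study used 41 criteria derived from the selection literature):

```python
import math

def logistic_score(features: dict[str, int],
                   weights: dict[str, float],
                   bias: float) -> float:
    """Probability that a page is scholarly under a logistic model."""
    z = bias + sum(weights.get(name, 0.0) * value
                   for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid of the weighted sum

# Hypothetical criteria and weights, for illustration only
weights = {"has_references": 2.0, "has_abstract": 1.5, "has_ads": -2.5}
page = {"has_references": 1, "has_abstract": 1, "has_ads": 0}
prob = logistic_score(page, weights, bias=-1.0)
print(f"scholarly probability: {prob:.2f}")  # z = -1 + 2 + 1.5 = 2.5 → ≈ 0.92
```

A selection agent would then admit pages whose score clears a threshold chosen to balance the accuracy and return measures the study used.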
APA, Harvard, Vancouver, ISO, and other styles
34

Janowiak, Elena. "Sex and ancestry estimation using computed tomography: a comparison of the reliability of digital versus physical data collection." Thesis, 2020. https://hdl.handle.net/2144/42154.

Full text
Abstract:
Sex and ancestry are most commonly estimated by anthropologists using the skull. Typically, measurements and observations are taken on the skull itself, but for the purpose of convenience, computed tomography (CT) scans are increasingly used in place of skulls in research and forensic casework. Researchers work under the assumption that the dry skull-to-CT scan ratio is one-to-one; however, research on the accuracy of CT scans is sparse. In this study, eight skulls from the Boston University Donated Skeletal Collection were scored for sex and ancestral morphological traits following Buikstra and Ubelaker (1994) and Hefner and Ousley (2014), and measured using standard cranial measurements according to Langley et al. (2016). CT scans were then taken of the eight skulls and the same morphological observations and measurements were taken using the RadiAnt 5.5.1 CT viewer. Additionally, the measurements of each skull and scan were entered into FORDISC 3.1, a software program that provides discriminant functions for the processes of sex and ancestry estimation. The measurements for each dry skull-CT scan pairing were then analyzed for variance and mean differences. The results of the morphological and metric analyses indicate that the majority of the data gathered from dry skulls did not vary significantly from the measurements taken on the CT scans. The morphological sex estimation resulted in the same estimation for each skull-to-CT scan pairing; however, the morphological ancestry estimation results indicated that skeletal information lost in CT scans can make full visualization and therefore assessment of the facial region difficult. The FORDISC 3.1 results generally support the indication that there is not a significant difference between skull and CT scan measurements, with consistent sex estimation results for each dry skull-to-CT scan pairing and consistent ancestry estimation results for the majority of the pairings. 
However, the sex and ancestry estimations were not always accurate considering the true ancestral backgrounds of the individuals. Based on these outcomes, it is evident that CT scans can be used to obtain reliable morphological assessments and measurements of a skull, which can then be used to estimate sex using FORDISC 3.1. However, to ensure accuracy of the sex and ancestry estimations, other methods should be used in conjunction with FORDISC 3.1.
APA, Harvard, Vancouver, ISO, and other styles
35

Chung, Cheng-Hao, and 鍾政豪. "A Study on the Preservation of Business Data in the Times of Informationization :A Study on the Evidence Collection of Digital Evidence by Anti-identification Tools." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/q2gj4w.

Full text
Abstract:
Master's thesis
Shu-Te University
Graduate School of Business Administration
Academic year 106 (ROC calendar)
With the spread of modern digital technology and media, everyday life has become more convenient and many business activities have moved toward IT-based management. As a result, many people have also begun to use information equipment as a criminal tool, a crime scene, or even a target of crime. Such cases grow year by year, and the techniques and tools used to commit them have also advanced greatly. By comparison, when law enforcement confronts so-called information-related commercial crime, the difficulties it faces are far more complex than before, even though such crimes still leave traces. As information technology races toward each new goal, no information expert today can be certain that forensic tools will effectively and completely collect the available digital evidence and block the intrusion of commercial criminals. The purpose of this study is therefore to explore whether forensic tools can completely prevent the outflow of confidential business information. Through research on forensic tools and their actual operation, and through experimental analysis of the correlation between the preservation of "digital evidence" and the "forensic tool", the study examines whether information tools or technologies can fully ensure the safe storage of commercially valuable information in an era of technological turmoil, so that companies can fully invest in products and manufacturing without worrying that their trade secrets and operating models will be intercepted by information criminals. Effectively ensuring that commercial secrets are not stolen or deleted is the purpose of this study, which is also intended as a practical reference for follow-up researchers.
APA, Harvard, Vancouver, ISO, and other styles
36

Molepo, Isaih Kgabe. "Data acquisition system for pilot mill." Diss., 2016. http://hdl.handle.net/10500/22967.

Full text
Abstract:
This dissertation describes the development, design, implementation and evaluation of a data acquisition system, with the main aim of using it for data collection on a laboratory pilot ball mill. An open-source prototype hardware platform was utilised in the implementation of the data acquisition function, however, with limitations. An analogue signal conditioning card has been successfully developed to interface the analogue signals to the dual domain ADC module. Model-based software development was used to design and develop the algorithms to control the DAS acquisition process, but with limited capabilities. A GUI application has been developed and used for the collection and storage of the raw data on the host system. The DAS prototype was calibrated and collected data successfully through all the channels; however, the input signal bandwidth was limited to 2Hz.
Electrical and Mining Engineering
M. Tech. (Electrical Engineering)
APA, Harvard, Vancouver, ISO, and other styles
37

Mabunda, Nkateko Eshias. "Comparison of conventional DAQ systems and embedded DAQ systems." Thesis, 2015. http://hdl.handle.net/10210/13797.

Full text
Abstract:
M. Tech. (Electrical & Electronic Engineering)
In this research we compare conventional data acquisition systems (DAS) with embedded data acquisition systems. The performance specifications of four different types of DAQ cards are drawn up, with special emphasis on the following parameters: slew rate, settling time, relative accuracy and system noise. These parameters are taken from two conventional DAS and then compared to those taken from two embedded data acquisition systems under the same electrical conditions. The embedded DAQ system's hardware was built using a PIC microcontroller interfaced to digital-to-analog converters (DACs). MPLAB C18 is used to create a program which communicates with the embedded DAQ system to transmit generated signals. National Instruments' LabVIEW is used to create a program which communicates with the conventional DAQ system to acquire externally generated signals and retransmit them. In most cases the performance of the conventional and embedded systems is close, but one of the embedded DAS seems to be unstable at high frequencies.
APA, Harvard, Vancouver, ISO, and other styles
38

National, Science Board (NSB). "Long-Lived Digital Data Collections: Enabling Research and Education in the 21st Century: Report of the National Science Board (Pre-publication draft, Approved by the National Science Board May 26, 2005, subject to final editorial changes.)." 2005. http://hdl.handle.net/10150/105473.

Full text
Abstract:
From the Executive Summary of the 67-page Report: The National Science Board (NSB, the Board) recognizes the growing importance of these digital data collections for research and education, their potential for broadening participation in research at all levels, the ever increasing National Science Foundation (NSF, the Foundation) investment in creating and maintaining the collections, and the rapid multiplication of collections with a potential for decades of curation. In response the Board formed the Long-lived Data Collections Task Force. The Board and the task force undertook an analysis of the policy issues relevant to long-lived digital data collections. This report provides the findings and recommendations arising from that analysis. The primary purpose of this report is to frame the issues and to begin a broad discourse. Specifically, the NSB and NSF working together, with each fulfilling its respective responsibilities, need to take stock of the current NSF policies that lead to Foundation funding of a large number of data collections with an indeterminate lifetime and to ask what deliberate strategies will best serve the multiple research and education communities. The analysis of policy issues in Chapter IV and the specific recommendations in Chapter V of this report provide a framework within which that shared goal can be pursued over the coming months. The broader discourse would be better served by interaction, cooperation, and coordination among the relevant agencies and communities at the national and international levels. Chapters II and III of this report, describing the fundamental elements of data collections and curation, provide a useful reference upon which interagency and international discussions can be undertaken. The Board recommends that the Office of Science and Technology Policy (OSTP) take the lead in initiating and coordinating these interagency and international discussions.
APA, Harvard, Vancouver, ISO, and other styles