Dissertations / Theses on the topic 'Output and Data Devices'

Consult the top 50 dissertations / theses for your research on the topic 'Output and Data Devices.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and the bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Thanawala, Rajiv P. "Development of G-net (a software system for graph theory & algorithms) with special emphasis on graph rendering on raster output devices." Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834618.

Full text
Abstract:
In this thesis we will describe the development of software functions that render graphical and textual information of G-Net (a software system for graph theory & algorithms) onto various raster output devices. Graphs are mathematical structures that are used to model very diverse systems such as networks, VLSI design, chemical compounds and many other systems where relations between objects play an important role. The study of graph theory problems requires many manipulative techniques. A software system (such as G-Net) that can automate these techniques will be a very good aid to graph theorists and professionals. The project G-Net, headed by Prof. Kunwarjit S. Bagga of the computer science department, has the goal of developing a software system having three main functions. These are: learning basics of graph theory, drawing/manipulating graphs and executing graph algorithms. The thesis will begin with an introduction to graph theory followed by a brief description of the evolution of the G-Net system and its current status. To print on various printers, the G-Net system translates all the printable information into PostScript files. A major part of this thesis concentrates on this translation. To begin with, the necessity of a standard format for the printable information is discussed. The choice of PostScript as a standard is then justified. Next, the design issues of the translator and the translation algorithm are discussed in detail. The translation process for each category of printable information is explained. Issues of printing these PostScript files onto different printers are dealt with at the end.
Department of Computer Science
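To illustrate the kind of graph-to-PostScript translation the abstract describes, the sketch below emits a small PostScript program that draws a graph's vertices and edges. It is a minimal, hypothetical example, not G-Net's actual translator; the function name, page coordinates and styling are invented for illustration.

```python
# Minimal sketch (not G-Net's translator): emit a PostScript file that draws a
# small graph as circles (vertices) and lines (edges), in points on the page.

def graph_to_postscript(vertices, edges, radius=10):
    """vertices: dict name -> (x, y); edges: list of (name, name) pairs."""
    lines = ["%!PS-Adobe-3.0", "1 setlinewidth"]
    for a, b in edges:                      # draw edges first, underneath the vertices
        (x1, y1), (x2, y2) = vertices[a], vertices[b]
        lines.append(f"newpath {x1} {y1} moveto {x2} {y2} lineto stroke")
    for name, (x, y) in vertices.items():   # each vertex as a filled circle plus a label
        lines.append(f"newpath {x} {y} {radius} 0 360 arc gsave 1 setgray fill grestore stroke")
        lines.append("/Helvetica findfont 9 scalefont setfont")
        lines.append(f"{x + radius + 2} {y} moveto ({name}) show")
    lines.append("showpage")
    return "\n".join(lines)

if __name__ == "__main__":
    v = {"A": (200, 500), "B": (320, 560), "C": (280, 420)}
    e = [("A", "B"), ("B", "C"), ("C", "A")]
    with open("graph.ps", "w") as f:
        f.write(graph_to_postscript(v, e))
```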
APA, Harvard, Vancouver, ISO, and other styles
2

Parker, William P. (William Peter). "Output devices for dynamic electronic holography." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/12714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Qureshi, Muhammad Bilal. "Cardiac Output Improvement in Mechanical Circulatory Support Devices." Diss., North Dakota State University, 2017. https://hdl.handle.net/10365/28793.

Full text
Abstract:
Mechanical circulatory support devices (MCSDs) have gained widespread clinical acceptance as an effective heart failure (HF) therapy. The concept of harnessing the kinetic energy (KE) available in the forward aortic flow (AOF) is proposed as a novel control strategy to further increase the cardiac output (CO) provided by MCSDs. A complete mathematical development of the proposed theory and its application to an example MCSD (two-segment extra-aortic cuff) are presented. To achieve improved device performance and physiologic benefit, the example MCSD timing is regulated to maximize the forward AOF KE and minimize retrograde flow. The proof-of-concept was tested to provide support with and without KE control in a computational HF model over a wide range of HF test conditions. The simulation predicted increased stroke volume (SV) by 20% (9 mL), CO by 23% (0.50 L/min), left ventricle ejection fraction (LVEF) by 23%, and diastolic coronary artery flow (CAF) by 55% (3 mL) in severe HF at a heart rate (HR) of 60 beats per minute (BPM) during counterpulsation (CP) support with KE control. This research also explains how the selection of inflation and deflation timing points for the extra-aortic two-segmented cuff counterpulsation device (CPD) can affect the hemodynamics of the cardiovascular system (CVS). A comprehensive analysis of compliance profile timings generated through an exhaustive search technique and the one selected through the steepest descent method is carried out to predict and compare the difference in SV via computer simulation models. The influence of control modes (timing and duration) of deflation and inflation for the extra-aortic two-segmented CPD on hemodynamic factors, compared to no-assist HF, was investigated. Simulation results (P < 0.05) predicted that the two-segmented CPD with late deflation and early inflation mode would be a suitable mode, with 80% augmentation in peak diastolic aortic pressure (AOP), a reduction in peak systolic pressure of up to 15%, and increases in CO by 60% and mean CAF by 80%. The proposed KE control concept may improve the performance of other MCSDs to further enhance their potential clinical benefits, which warrants further investigation. The next step is to investigate various assist technologies and determine where this concept is best applied.
COMSATS (Pakistan)
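As a hedged illustration of the kinetic-energy quantity referred to above, the standard fluid-dynamics formulation of the kinetic energy carried by the forward aortic flow over an interval of forward flow can be written as below; the thesis' exact control formulation may differ in detail.

```latex
\mathrm{KE} \;=\; \int_{t_0}^{t_1} \tfrac{1}{2}\,\rho\, Q(t)\, v(t)^{2}\, dt,
\qquad v(t) \;=\; \frac{Q(t)}{A},
```

where ρ is blood density, Q(t) the forward aortic flow rate, A the aortic cross-sectional area and [t0, t1] the period over which forward flow occurs.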
APA, Harvard, Vancouver, ISO, and other styles
4

Romeike, Ralf. "Output statt Input." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2013/6431/.

Full text
Abstract:
The output orientation that has been discussed for years in computer science education in connection with the educational standards will, in the medium term, also become binding for teaching in higher education. This change can be seen as an opportunity to counteract current problems in computer science teaching in a targeted way. Based on the theory of Constructive Alignment, it is proposed that, in connection with this output orientation, the intended competences, learning activities and assessment be aligned with one another. In addition, student teachers benefit from the experience gained in their own learning process in dealing with competences: how these are formulated, developed and assessed. Requirements for the formulation of competences are examined, supported with examples, and possibilities for their classification are suggested. An exchange within the departments and subject didactics about the individually defined competences is proposed in order to enrich the discussion on teaching and learning in higher education.
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Lin. "Data quality and data cleaning in database applications." Thesis, Edinburgh Napier University, 2012. http://researchrepository.napier.ac.uk/Output/5788.

Full text
Abstract:
Today, data plays an important role in people's daily activities. With the help of database applications such as decision support systems and customer relationship management systems (CRM), useful information or knowledge can be derived from large quantities of data. However, investigations show that many such applications fail to work successfully. There are many reasons for such failures, such as poor system infrastructure design or poor query performance, but nothing is more certain to yield failure than a lack of concern for the issue of data quality. High quality data is a key to today's business success. The quality of any large real world data set depends on a number of factors, among which the source of the data is often the crucial factor. It has now been recognized that an inordinate proportion of data in most data sources is dirty. Obviously, a database application with a high proportion of dirty data is not reliable for the purpose of data mining or deriving business intelligence, and the quality of decisions made on the basis of such business intelligence is also unreliable. In order to ensure high quality of data, enterprises need to have a process, methodologies and resources to monitor and analyze the quality of data, and methodologies for preventing and/or detecting and repairing dirty data. This thesis focuses on the improvement of data quality in database applications with the help of current data cleaning methods. It provides a systematic and comparative description of the research issues related to the improvement of the quality of data, and has addressed a number of research issues related to data cleaning. In the first part of the thesis, related literature on data cleaning and data quality is reviewed and discussed. Building on this research, a rule-based taxonomy of dirty data is proposed in the second part of the thesis. The proposed taxonomy not only summarizes the most common dirty data types but is the basis on which the proposed method for solving the Dirty Data Selection (DDS) problem during the data cleaning process was developed. This helps us to design the DDS process in the proposed data cleaning framework described in the third part of the thesis. This framework retains the most appealing characteristics of existing data cleaning approaches, and improves the efficiency and effectiveness of data cleaning as well as the degree of automation during the data cleaning process. Finally, a set of approximate string matching algorithms are studied and experimental work has been undertaken. Approximate string matching is an important part of many data cleaning approaches and has been well studied for many years. The experimental work in the thesis confirmed the statement that there is no clear best technique. It shows that the characteristics of data, such as the size of a dataset, the error rate in a dataset, the type of strings in a dataset and even the type of typo in a string, will have a significant effect on the performance of the selected techniques. In addition, the characteristics of data also have an effect on the selection of suitable threshold values for the selected matching algorithms. The achievements based on these experimental results provide a fundamental improvement in the design of the 'algorithm selection mechanism' in the data cleaning framework, which enhances the performance of the data cleaning system in database applications.
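For readers unfamiliar with approximate string matching, the sketch below shows one classic technique of the family the thesis evaluates: Levenshtein edit distance with a similarity threshold. The function names and threshold value are illustrative assumptions, not code or parameters from the thesis.

```python
# Edit-distance-based approximate matching, as commonly used in data cleaning.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert, delete, substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def is_match(a: str, b: str, threshold: float = 0.8) -> bool:
    """Normalise the distance into a similarity in [0, 1] and compare to a threshold."""
    if not a and not b:
        return True
    similarity = 1 - levenshtein(a, b) / max(len(a), len(b))
    return similarity >= threshold

print(is_match("Edinburgh Napier University", "Edinburg Napier Univercity"))  # True
```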
APA, Harvard, Vancouver, ISO, and other styles
6

Igoniderigha, Nseabasi Ekaette. "Data navigation and visualization : navigating coordinated multiple views of data." Thesis, Edinburgh Napier University, 2015. http://researchrepository.napier.ac.uk/Output/8832.

Full text
Abstract:
The field of coordinated and multiple views (CMVs) has been, for over a decade, a promising technique for enhancing data visualization, yet that promise remains unfulfilled. Current CMVs lack a platform for flexible execution of certain kinds of open-ended tasks; consequently, users are unable to achieve novel objectives. Navigation of data, though an important aspect of interactive visualization, has not generated the level of attention it should from the human-computer interaction community. A number of frameworks for, and categorizations of, navigation techniques exist, but further detailed studies are required to highlight the range of benefits improved navigation can achieve in the use of interactive tools such as CMVs. This thesis investigates the extent of support offered by CMVs to people navigating information spaces, in order to discover data, visualize these data and retrieve adequate information to achieve their goals. It also seeks to understand the basic principle of CMVs and how to apply its procedure to achieve successful navigation. Three empirical studies structured around the user's goal as they navigate CMVs are presented here. The objective of the studies is to propose a simple, but strong, design procedure to support future development of CMVs. The approach involved a comparative analysis of qualitative and quantitative experiments comprising categorised navigation tasks carried out, initially on existing CMVs and subsequently on CMVs which had been redesigned applying the proposed design procedure. The findings show that adequate information can be retrieved, with successful navigation and effective visualization achieved more easily and in less time, where metadata is provided alongside the relevant data within the CMVs to facilitate navigation. This dissertation thus proposes and evaluates a novel design procedure to aid development of more navigable CMVs.
APA, Harvard, Vancouver, ISO, and other styles
7

Löfving, Erik. "Organizing physical flow data : from input-output tables to data warehouses /." Linköping : Dept. of Mathematics, Univ, 2005. http://www.bibl.liu.se/liupubl/disp/disp2005/stat5s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Leung, Chiu Hon. "The output frequency spectrum of a thyristor phase-controlled cycloconverter using digital control techniques." Thesis, University of Plymouth, 1985. http://hdl.handle.net/10026.1/2261.

Full text
Abstract:
The principle of operation dictates that the output of a cycloconverter contains some harmonics. For drive applications, the harmonics at best increase losses in the motor and may well cause instability. Various methods of analysing the output waveform have been considered. A Fortran 77 program employing a modified Fourier series, making use of the fact that the input waveforms are sinusoidal, was used to compute the individual harmonic amplitudes. A six pulse three phase to single phase cycloconverter was built and a Z-80 microprocessor was used for the control of firing angles. Phase locked loops were used for timing, and their effects upon the output with changing input frequency and voltage were established. The experimental waveforms are analysed by an FFT spectrum analyser. The flexibility of the control circuit enables the following investigations, which are not easily carried out using traditional analog control circuits. The phase relationship between the cosine timing and reference wave in the cosinusoidal control method was shown to affect the output waveform and hence the harmonic content. There is no clear optimum value of phase and the T.H.D. up to 500Hz remains virtually constant. However, the changes in individual harmonic amplitudes are quite significant. In practice it may not be possible to keep the value of phase constant but it should be considered when comparing control strategies. Another investigation involves the changing of the last firing angle in a half cycle. It shows that the value of firing angles produced by the cosinusoidal control method is desirable. Operation at the theoretical maximum output frequency was also demonstrated.
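The sketch below shows, in Python rather than the Fortran 77 used in the thesis, the general idea of reading harmonic amplitudes from an FFT of a sampled output waveform and computing a total harmonic distortion figure up to 500 Hz; the waveform, sampling rate and amplitudes are invented for illustration.

```python
# Harmonic analysis of a sampled waveform: single-sided FFT amplitudes and THD.
import numpy as np

fs = 10_000                      # sampling rate (Hz), assumed
f0 = 50                          # output fundamental (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)    # one second of signal -> 1 Hz bin resolution
# synthetic cycloconverter-like output: fundamental plus a few odd harmonics
x = np.sin(2*np.pi*f0*t) + 0.10*np.sin(2*np.pi*3*f0*t) + 0.05*np.sin(2*np.pi*5*f0*t)

spectrum = np.abs(np.fft.rfft(x)) * 2 / len(x)         # single-sided amplitude spectrum
fundamental = spectrum[f0]                             # bin index == frequency for 1 Hz bins
harmonics = [spectrum[k * f0] for k in range(2, 11)]   # 2nd..10th harmonics, i.e. up to 500 Hz

thd = np.sqrt(sum(h**2 for h in harmonics)) / fundamental
print(f"THD up to the 10th harmonic: {thd:.1%}")
```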
APA, Harvard, Vancouver, ISO, and other styles
9

Zha, Xi. "Supporting multiple output devices on an ad-hoc basis in visualisation." Lincoln University, 2010. http://hdl.handle.net/10182/1391.

Full text
Abstract:
In recent years, new visualisation techniques and devices, such as remote visualisation and stereoscopic displays, have been developed to help researchers. In a remote visualisation environment the user may want to see the visualisation on a different device, such as a PDA or stereo device, and in different circumstances. Each device needs to be configured correctly, otherwise it may lead to an incorrect rendering of the output. For end users, however, it can be difficult to configure each device without knowledge of the device properties and rendering. Therefore, in a multiple user and multiple display environment, obtaining the correct display for each device can be a challenge. In this project, the focus is on investigating a solution that can support end users in using different display devices easily. The proposed solution is to develop an application that can support the ad-hoc use of any display device without the system being preconfigured in advance. Thus, end users can obtain the correct visualisation output without any complex rendering configuration. We develop a client-server based approach to this problem. The client application can detect the properties of a device and the server application can use these properties to configure the rendering software to generate the correct image for subsequent display on the device. The approach has been evaluated through many tests and the results show that the application is useful in helping end users use different display devices in visualisation.
APA, Harvard, Vancouver, ISO, and other styles
10

Kosek, Anna. "Ontology based knowledge formulation and an interpretation engine for intelligent devices in pervasive environments." Thesis, Edinburgh Napier University, 2011. http://researchrepository.napier.ac.uk/Output/6037.

Full text
Abstract:
Ongoing device miniaturization makes it possible to manufacture very small devices; therefore more of them can be embedded in one space. Pervasive computing concepts, envisioning computers distributed in a space and hidden from users' sight, presented by Weiser in 1991, are becoming more realistic and feasible to implement. A technology supporting pervasive computing and Ambient Intelligence also needs to follow miniaturization. The Ambient Intelligence domain was mainly focused on supercomputers with large computation power and it is now moving towards smaller devices, with limited computation power, and takes inspiration from distributed systems, ad-hoc networks and emergent computing. The ability to process knowledge, understand network protocols, adapt and learn is becoming a required capability of fairly small and energy-frugal devices. This research project consists of two main parts. The first part of the project has created a context aware generic knowledgebase interpretation engine that enables autonomous devices to pervasively manage smart spaces, using Communicating Sequential Processes as the underlying design methodology. In the second part a knowledgebase containing all the information that is needed for a device to cooperate, make decisions and react was designed and constructed. The interpretation engine is designed to be suitable for devices from different vendors, as it enables semantic interoperability based on the use of ontologies. The knowledge that the engine interprets is drawn from an ontology and the model of the chosen ontology is fixed in the engine. This project has investigated, designed and built a prototype of the knowledgebase interpretation engine. Functional testing was performed using a simulation implemented in JCSP. The implementation simulates many autonomous devices running in parallel, communicating using a broadcast-based protocol, self-organizing into sub-networks and reacting to users' requests. The main goal of the project was to design and investigate the knowledge interpretation engine, determine the number of functions that the engine performs, to enable hardware realisation, and investigate the knowledgebase represented with use of RDF triples and the chosen ontology model. This project was undertaken in collaboration with NXP Semiconductor Research Eindhoven, The Netherlands.
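As a rough illustration of a knowledgebase held as RDF-style triples and queried by an interpretation engine, the sketch below implements a tiny triple store with wildcard pattern matching; the devices, predicates and ontology terms are invented and do not come from the thesis or its collaboration with NXP.

```python
# A minimal RDF-style triple store with wildcard queries.
from typing import List, Optional, Tuple

Triple = Tuple[str, str, str]   # (subject, predicate, object)

knowledgebase: List[Triple] = [
    ("lamp1",   "rdf:type",   "Actuator"),
    ("lamp1",   "locatedIn",  "livingRoom"),
    ("sensor7", "rdf:type",   "LightSensor"),
    ("sensor7", "locatedIn",  "livingRoom"),
    ("lamp1",   "respondsTo", "sensor7"),
]

def query(s: Optional[str], p: Optional[str], o: Optional[str]) -> List[Triple]:
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in knowledgebase
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which devices are located in the living room?
print(query(None, "locatedIn", "livingRoom"))
```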
APA, Harvard, Vancouver, ISO, and other styles
11

Li, Huanlu. "Integrated photonic devices for data communications." Thesis, University of Bristol, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.682683.

Full text
Abstract:
With the increasing capacity requirements of telecommunication systems, the ability to increase capacity density is of great importance for optical transmission technologies. This thesis presents several integrated photonic devices (semiconductor laser diodes and silicon devices) aimed at boosting the capacity density of optical transmission systems. The first part of the thesis is about four wave mixing (FWM) effects in semiconductor ring lasers. Mode beating via third order nonlinearity in semiconductor ring lasers has been analysed using a frequency-domain multi-mode rate equation model. Compared with Fabry-Perot lasers, semiconductor ring lasers are found to be 1.33, 2, and 4 times more efficient in self-gain compression, cross-gain compression and four-wave mixing processes, respectively, due to their travelling-wave nature. It is shown that, using dual (pump and signal) external optical injections into the ring laser cavity, multiple modes can be locked in phase via the strong four wave mixing phenomenon. This results in modulation of the light wave at the mode beating frequencies, which could be used for RF optical carrier generation. Secondly, following Bristol's research on compact optical vortex beam emitters based on silicon photonic micro-ring resonators, a different approach is demonstrated to simultaneously generate a pair of orbital angular momentum (OAM) modes with opposite topological charge by integrating a micro-ring OAM resonator with simple waveguide devices. The relative phase between the two vortices can be actively modulated on the chip by thermo-optical controls. Furthermore, based on the ring cavity structure, OAM ring lasers on AlGaInAs/InP wafer are also developed. Detailed designs, fabrication processes and characterization of the device are discussed. In the last part of the thesis, a new approach is proposed and demonstrated to directly generate optical OAM beams, by integrating a micro-scale spiral phase plate (SPP) on top of a vertical-cavity surface-emitting laser (VCSEL). The presence of the multi-level SPP transforms the linearly polarized Gaussian beam to a beam carrying specific OAM modes and their superposition states. The emitted OAM beams are characterized by using a spatial light modulator (SLM), and show good agreement with semi-analytical numerical simulation. The innovative OAM emitter opens a new horizon in the field of OAM-based optical and quantum communications, especially for low-cost short reach interconnects.
APA, Harvard, Vancouver, ISO, and other styles
12

Brozovic, Martin. "ON EFFICIENT AUTOMATED METHODS FOR SIMULATION OUTPUT DATA ANALYSIS." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5857.

Full text
Abstract:
With the increase in computing power and software engineering in the past years, computer based stochastic discrete-event simulation has become a very commonly used tool to evaluate the performance of various, complex stochastic systems (such as telecommunication networks). It is used if analytical methods are too complex to solve, or cannot be used at all. Stochastic simulation has also become a tool which researchers often use instead of experimentation in order to save money and time. In this thesis, we focus on the statistical correctness of the final estimated results in the context of steady-state simulations performed for the mean analysis of performance measures of stable stochastic processes. Due to various approximations, the final experimental coverage can differ greatly from the assumed theoretical level, in that the final confidence intervals cover the theoretical mean at a much lower frequency than expected from the preset theoretical confidence level. We present the results of coverage analysis for the methods of dynamic partially-overlapping batch means, spectral analysis and mean squared error optimal dynamic partially-overlapping batch means. The results show that the variants of dynamic partially-overlapping batch means, which we propose as their modification under Akaroa2, perform acceptably well for the queueing processes, but perform very badly for the auto-regressive process. We compare the results of the modified mean squared error optimal dynamic partially-overlapping batch means method to the spectral analysis and show that the methods perform equally well.
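For context, the sketch below shows the classical non-overlapping batch means method of building a confidence interval for a steady-state mean from correlated simulation output; the dynamic partially-overlapping variants analysed in the thesis refine this baseline, and the batch count, confidence level and data here are illustrative assumptions.

```python
# Classical (non-overlapping) batch means confidence interval for a steady-state mean.
import math
import random
from statistics import mean, stdev, NormalDist

def batch_means_ci(observations, num_batches=30, confidence=0.95):
    batch_size = len(observations) // num_batches
    batches = [mean(observations[i*batch_size:(i+1)*batch_size])
               for i in range(num_batches)]
    grand_mean = mean(batches)
    # half-width from the batch-mean variability (normal approximation for brevity;
    # a Student-t quantile would be more precise for small batch counts)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    half_width = z * stdev(batches) / math.sqrt(num_batches)
    return grand_mean, half_width

# e.g. waiting times from a simulated queueing process (fake exponential data here)
data = [random.expovariate(1.0) for _ in range(30_000)]
m, hw = batch_means_ci(data)
print(f"steady-state mean estimate: {m:.3f} +/- {hw:.3f}")
```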
APA, Harvard, Vancouver, ISO, and other styles
13

McLaughlin, Anne Collins. "Attentional demands on input devices in a complex task." Thesis, Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/30305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Fang, Jun. "Design of an ATM switch and implementation of output scheduler /." Title page, contents and abstract only, 1999. http://web4.library.adelaide.edu.au/theses/09ENS/09ensf211.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Kwecka, Zbigniew. "Cryptographic privacy-preserving enhancement method for investigative data acquisition." Thesis, Edinburgh Napier University, 2011. http://researchrepository.napier.ac.uk/Output/4437.

Full text
Abstract:
The current processes involved in the acquisition of investigative data from third parties, such as banks, Internet Service Providers (ISPs) and employers, by the public authorities can breach the rights of the individuals under investigation. This is mainly caused by the necessity to identify the records of interest, and thus the potential suspects, to the dataholders. Conversely, the public authorities often put pressure on legislators to provide more direct access to third party data, mainly in order to improve on turnaround times for enquiries and to limit the likelihood of compromising the investigations. This thesis presents a novel methodology for improving privacy and the performance of the investigative data acquisition process. The thesis shows that it is possible to adapt Symmetric Private Information Retrieval (SPIR) protocols for use in the acquisition process, and that it is possible to dynamically adjust the balance between privacy and performance based on the notion of k-anonymity. In order to evaluate the findings an Investigative Data Acquisition Platform (IDAP) is formalised, as a cryptographic privacy-preserving enhancement to the current data acquisition process. SPIR protocols are often computationally intensive, and therefore they are generally unsuitable for retrieving records from large datasets, such as ISP databases containing records of network traffic data. This thesis shows that, despite the fact that many potential sources of investigative data exist, in most cases the data acquisition process can be treated as a single-database SPIR. Thanks to this observation, the notion of k-anonymity, developed for privacy-preserving statistical data-mining protocols, can be applied to the investigative scenarios, and used to narrow down the number of records that need to be processed by a SPIR protocol. This novel approach makes the application of SPIR protocols in the retrieval of investigative data feasible. The dilution factor is defined, by this thesis, as a parameter that expresses the range of records used to hide a single identity of a suspect. Interestingly, the value of this parameter does not need to be large in order to protect privacy, if the enquiries to a given dataholder are frequent. Therefore, IDAP is capable of retrieving an interesting record from a dataholder in a matter of seconds, while an ordinary SPIR protocol could take days to complete retrieval of a record from a large dataset. This thesis introduces into the investigative scenario a semi-trusted third party, which is a watchdog organisation that could proxy the requests for investigative data from all public authorities. This party verifies the requests for data and hides the requesting party from the dataholder. This limits the dataholder's ability to judge the nature of the enquiry. Moreover, the semi-trusted party would filter the SPIR responses from the dataholders, by securely discarding the records unrelated to enquiries. This would prevent the requesting party from using a large computational power to decrypt the diluting records in the future, and would allow the watchdog organisation to verify retrieved data in court, if such a need arises. Therefore, this thesis demonstrates a new use for semi-trusted third parties in SPIR protocols. Traditionally used to improve on the complexity of SPIR protocols, such a party can potentially improve the perception of cryptographic trapdoor-based privacy-preserving information retrieval systems, by introducing policy-based controls. The final contribution to knowledge of this thesis is the definition of a process for privacy-preserving matching of records from different datasets based on multiple selection criteria. This allows for the retrieval of records based on parameters other than the identifier of the interesting record. Thus, it is capable of adding a degree of fuzzy matching to SPIR protocols that traditionally require a perfect match of the request to the records being retrieved. This allows for searching datasets based on circumstantial knowledge and suspect profiles, and thus extends the notion of SPIR to more complex scenarios. The constructed IDAP is thus a platform for investigative data acquisition employing the Private Equi-join (PE) protocol – a commutative cryptography SPIR protocol. The thesis shows that the use of commutative cryptography in enquiries where multiple records need to be matched and then retrieved (m-out-of-n enquiries) is beneficial to the computational performance. However, the above customisations can be applied to other SPIR protocols in order to make them suitable for the investigative data acquisition process. These customisations, together with the findings of the literature review and the analysis of the field presented in this thesis, contribute to knowledge and can improve privacy in the investigative enquiries.
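A minimal sketch of the dilution idea follows, under the assumption that identifiers can simply be mixed with randomly chosen decoys before any retrieval takes place; the identifier format and dilution factor are invented, and the cryptographic retrieval step (e.g. the Private Equi-join protocol) is not shown.

```python
# Hide the identity of interest inside a set of k identifiers before retrieval.
import random
from typing import List

def diluted_request(target_id: str, all_ids: List[str], dilution_factor: int) -> List[str]:
    """Return the target identifier mixed with (dilution_factor - 1) random decoys."""
    decoys = random.sample([i for i in all_ids if i != target_id], dilution_factor - 1)
    request = decoys + [target_id]
    random.shuffle(request)          # ordering must not reveal the real target
    return request

population = [f"ACC{n:05d}" for n in range(10_000)]   # hypothetical account identifiers
print(diluted_request("ACC00042", population, dilution_factor=8))
```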
APA, Harvard, Vancouver, ISO, and other styles
16

Weir, Robert. "A configuration approach for selecting a data warehouse architecture." Thesis, Edinburgh Napier University, 2008. http://researchrepository.napier.ac.uk/Output/2445.

Full text
Abstract:
Living in the Information Age, organisations must be able to exploit their data alongside the traditional economic resources of man, machine and money. Accordingly, organisations implement data warehouses to organise and consolidate their data, which creates a decision support system that is “subject oriented”, “time variant”, “integrated” and “non-volatile”. However, the organisation's ability to successfully exploit their data is determined by the degree of strategic alignment. As such, this study poses the question: how can a data warehouse be successfully and demonstrably aligned to an organisation's strategic objectives? This thesis demonstrates that strategic alignment can be achieved by following a new "top down" data warehouse implementation framework, the Configuration Approach, which is based upon determining an organisation's target configuration. This was achieved by employing Miles and Snow's Ideal Types to formulate a questionnaire that reveals an organisation's target configuration in terms of its approach to the Entrepreneurial, Administration and Information Systems challenges. Crucially, this thesis also provides the means to choose a data warehouse architecture that is wholly based on the organisation's target configuration. The Configuration Approach was evaluated using a single case study undergoing a period of strategic transformation where the implementation of a data warehouse was key to its strategic ambitions. The case study illustrated how it is possible to articulate an organisation's strategic configuration, which becomes the key driver for building a warehouse that demonstrably supports the resolution of its Entrepreneurial and Administration challenges. Significantly, the case study also provides a unique opportunity to demonstrate how the target configuration helps organisations to make the right choice of data warehouse architecture to satisfy the Information Systems challenge. In this case, the Configuration Approach provides a basis for challenging the architectural choices made by a consultancy on behalf of the participating organisation. Accordingly, it can be asserted that data warehouses are strategic investments, if implemented using the Configuration Approach.
APA, Harvard, Vancouver, ISO, and other styles
17

Raguenaud, Cedric. "Managing complex taxonomic data in an object-oriented database." Thesis, Edinburgh Napier University, 2002. http://researchrepository.napier.ac.uk/Output/6572.

Full text
Abstract:
This thesis addresses the problem of multiple overlapping classifications in object-oriented databases through the example of plant taxonomy. These multiple overlapping classifications are independent simple classifications that share information (nodes and leaves), therefore overlap. Plant taxonomy was chosen as the motivational application domain because taxonomic classifications are especially complex and have changed over long periods of time, therefore overlap in a significant manner. This work extracts basic requirements for the support of multiple overlapping classifications in general, and in the context of plant taxonomy in particular. These requirements form the basis on which a prototype is defined and built. The prototype, an extended object-oriented database, is extended from an object-oriented model based on ODMG through the provision of a relationship management mechanism. These relationships form the main feature used to build classifications. This emphasis on relationships allows the description of classifications orthogonal to the classified data (for reuse and integration of the mechanism with existing databases and for classification of non co-operating data), and allows an easier and more powerful management of semantic data (both within and without a classification). Additional mechanisms such as integrity constraints are investigated and implemented. Finally, the implementation of the prototype is presented and is evaluated, from the point of view of both usability and expressiveness (using plant taxonomy as an application), and its performance as a database system. This evaluation shows that the prototype meets the needs of taxonomists.
APA, Harvard, Vancouver, ISO, and other styles
18

Lo, Owen. "Heart data analysis, modelling and application in risk assessment." Thesis, Edinburgh Napier University, 2015. http://researchrepository.napier.ac.uk/Output/8833.

Full text
Abstract:
The heart is a fundamental aspect of the human body. Significant work has been undertaken to better understand the characteristics and mechanisms of this organ in past research. Greater understanding of the heart not only provides advances in medicine but also enables practitioners to better assess the health risk of patients. This thesis approaches the study of the heart from a health informatics perspective. The questions posed in this thesis are whether research is capable of describing and modelling heart data from a statistical perspective, along with exploring techniques to improve the accuracy of clinical risk assessment algorithms that rely on this data. The contributions of this thesis may be grouped into two main areas: statistical analysis, modelling and simulation of heart data; and improved risk assessment accuracy of the Early Warning Score (EWS) algorithm using a quartile-based technique. Statistical analysis of heart data, namely RR intervals, contributes to a more informed understanding of the underlying characteristics of the heart and is achieved using null-hypothesis testing through the Anderson-Darling (AD) test statistic. The modelling process of heart data demonstrates methodologies for simulation of this data type, namely individual distribution modelling and normal mixture modelling, and contributes to assessing the techniques that are most capable of modelling this type of data. For improved accuracy of the EWS algorithm, a quartiles technique, inspired by anomaly-based intrusion detection systems, is presented which enables customisation of risk score thresholds for each patient, defined during a training phase. Simulated heart data is used to evaluate the standard EWS algorithm against the quartile-based approach. The defined metric of accuracy ratio provides quantitative evidence on the accuracy of the standard EWS algorithm in comparison with the proposed quartile-based technique. Statistical analysis in this thesis demonstrates that samples of heart data can be described using normal, Weibull, logistic and gamma distributions within the scope of two-minute data samples. When there is strong evidence to suggest that the RR intervals analysed fit a particular distribution, the individual distribution modelling technique is the ideal candidate, whilst normal mixture modelling is better suited for long-term modelling, i.e. greater than two minutes of heart data. In a comparative evaluation of the standard EWS algorithm and the quartile-based technique using modelled heart data, greater accuracy is demonstrated by the quartile-based technique for patients whose heart rate is healthy but outside the normal ranges of the general population.
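The sketch below illustrates the general quartile idea under stated assumptions: per-patient thresholds are learned from a training window and later readings are scored against them, rather than against fixed population-wide EWS bands. The band multiplier and score values are illustrative, not the thesis' calibrated parameters.

```python
# Per-patient, quartile-derived thresholds learned in a training phase.
from statistics import quantiles

def train_thresholds(training_heart_rates):
    q1, _, q3 = quantiles(training_heart_rates, n=4)   # patient-specific quartiles
    iqr = q3 - q1
    return {"low": q1 - 1.5 * iqr, "high": q3 + 1.5 * iqr}

def risk_score(hr, thresholds):
    if hr < thresholds["low"] or hr > thresholds["high"]:
        return 3          # outside the patient's own normal range -> high score
    return 0

baseline = [72, 75, 70, 74, 78, 73, 71, 76, 74, 77]    # training window (bpm)
t = train_thresholds(baseline)
print(t, risk_score(69, t), risk_score(112, t))
```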
APA, Harvard, Vancouver, ISO, and other styles
19

Barreto, John. "Heterogeneous data source access for mobile devices." FIU Digital Commons, 2003. http://digitalcommons.fiu.edu/etd/1407.

Full text
Abstract:
The mediator software architecture design has been developed to provide data integration and retrieval in distributed, heterogeneous environments. Since the initial conceptualization of this architecture, many new technologies have emerged that can facilitate the implementation of this design. The purpose of this thesis was to show that a mediator framework supporting users of mobile devices could be implemented using common software technologies available today. In addition, the prototype was developed with a view to providing a better understanding of what a mediator is and to expose issues that will have to be addressed in full, more robust designs. The prototype developed for this thesis was implemented using various technologies including: Java, XML, and Simple Object Access Protocol (SOAP) among others. SOAP was used to accomplish inter-process communication. In the end, it is expected that more data intensive software applications will be possible in a world with ever-increasing demands for information.
APA, Harvard, Vancouver, ISO, and other styles
20

Olsson, Jakob, and Viktor Yberg. "Log data filtering in embedded sensor devices." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175367.

Full text
Abstract:
Data filtering is the disposal of unnecessary data in a data set, to save resources such as server capacity and bandwidth. The method is used to reduce the amount of stored data and thereby prevent valuable resources from processing insignificant information. The purpose of this thesis is to find algorithms for data filtering and to find out which algorithm gives the best effect in embedded devices with resource limitations. This means that the algorithm needs to be resource efficient in terms of memory usage and performance, while saving enough data points to avoid modification or loss of information. After an algorithm has been found it will also be implemented to fit the Exqbe system. The study has been done by researching previous studies of line simplification algorithms and their applications. A comparison between several well-known and studied algorithms has been done to find which suits the problem of this thesis best. The comparison between the different line simplification algorithms resulted in an implementation of an extended version of the Ramer-Douglas-Peucker algorithm. The algorithm has been optimized and a new filter has been implemented in addition to the algorithm.
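Below is a sketch of the standard Ramer-Douglas-Peucker algorithm that the thesis extends: keep the end points of a polyline, find the intermediate point farthest from the line joining them, and recurse only where that distance exceeds a tolerance. This is the textbook version, not the extended and optimised variant implemented for the Exqbe system, and the sample data is invented.

```python
# Textbook Ramer-Douglas-Peucker line simplification.
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    if (x1, y1) == (x2, y2):
        return math.hypot(x - x1, y - y1)
    num = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1))
    return num / math.hypot(x2 - x1, y2 - y1)

def rdp(points, epsilon):
    if len(points) < 3:
        return points
    dists = [perpendicular_distance(p, points[0], points[-1]) for p in points[1:-1]]
    index, dmax = max(enumerate(dists, start=1), key=lambda t: t[1])
    if dmax > epsilon:
        left = rdp(points[:index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right            # avoid duplicating the split point
    return [points[0], points[-1]]

log = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(rdp(log, epsilon=1.0))
```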
APA, Harvard, Vancouver, ISO, and other styles
21

Baykal, Emre. "Ad-hoc Data Transfer for Android Devices." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-24808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Caspár, Sophia. "Visualization of tabular data on mobile devices." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-68036.

Full text
Abstract:
This thesis evaluates various ways of displaying tabular data on mobile devices using different responsive table solutions. It also presents a tool to help web developers and designers in the process of choosing and implementing a suitable table approach. The proposed solution for this thesis is a web system called The Visualizing Wizard that allows the user to answer some questions about the intended table and then get a recommended responsive table solution generated based on the answers. The system uses a rule-based approach via Prolog to match the answers to a set of rules and provide an appropriate result. In order to determine which table solutions are more appropriate to use for which type of data a statistical analysis and user tests were performed. The statistical analysis contains an investigation to identify the most common table approaches and data types used on various websites. The result indicates that solutions such as "squish", "collapse by rows", "click" and "scroll" are most common. The most common table categories are product comparison, product offerings, sports and stock market/statistics. This information was used to implement and establish user tests to collect feedback and opinions. The data and statistics gathered from the user tests were mapped into sets of rules to answer the question of which responsive table solution is more appropriate to use for which type of data. This serves as the foundation for The Visualizing Wizard.
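The recommendation step can be pictured as simple rule matching, as in the sketch below. The real tool encodes its rules in Prolog, and the questionnaire keys and conditions here are invented, although the solution names ("squish", "collapse by rows", "click", "scroll") are those mentioned in the abstract.

```python
# Rule-based recommendation of a responsive table solution from questionnaire answers.
from typing import Dict, List, Tuple

# Each rule: (conditions that must all hold, recommended responsive-table solution)
RULES: List[Tuple[Dict[str, object], str]] = [
    ({"columns": "many", "comparison_needed": True},  "scroll"),
    ({"columns": "many", "comparison_needed": False}, "collapse by rows"),
    ({"columns": "few",  "numeric_heavy": True},      "squish"),
]

def recommend(answers: Dict[str, object]) -> str:
    for conditions, solution in RULES:
        if all(answers.get(key) == value for key, value in conditions.items()):
            return solution
    return "click"   # fallback when no specific rule fires

print(recommend({"columns": "many", "comparison_needed": False}))   # -> "collapse by rows"
```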
APA, Harvard, Vancouver, ISO, and other styles
23

Cardwell, Gregory S. "Residual network data structures in Android devices." Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5506.

Full text
Abstract:
Approved for public release; distribution is unlimited
The emergence and recent ubiquity of Smartphones present new opportunities and challenges to forensic examiners. Smartphones enable new mobile application and use paradigms by being constantly attached to the Internet via one of several physical communication media, e.g. cellular radio, WiFi, or Bluetooth. The Smartphone's storage medium represents a potential source of current and historical network metadata and records of prior data transfers. By using known ground truth data exchanges in a controlled experimental environment, this thesis identifies network metadata stored by the Android operating system that can be readily retrieved from the device's internal non-volatile storage. The identified network metadata can ascertain the identity of prior network access points to which the device associated. An important by-product of this research is a well-labeled Android Smartphone image corpus, allowing the mobile forensic community to perform repeatable, scientific experiments, and to test mobile forensic tools.
APA, Harvard, Vancouver, ISO, and other styles
24

Mittermaier, Marion Petra. "Investigating synergies between weather radar data and mesoscale model output." Thesis, University of Reading, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Gstach, Dieter. "A statistical framework for estimating output-specific efficiencies." Inst. für Volkswirtschaftstheorie und -politik, WU Vienna University of Economics and Business, 2003. http://epub.wu.ac.at/354/1/document.pdf.

Full text
Abstract:
This paper presents a statistical framework for estimating output-specific efficiencies for the 2-output case based upon a DEA frontier estimate. The key to the approach is the concept of target output-mix. Being usually unobserved, target output-mixes of firms are modelled as missing data. Using this concept the relevant data generating process can be formulated. The resulting likelihood function is analytically intractable, so a data-augmented Bayesian approach is proposed for estimation purposes. This technique is adapted to the present purpose. Some implementation issues are discussed, leading to an empirical Bayes setup with data-informed priors. A proof of scale invariance is provided. (author's abstract)
Series: Department of Economics Working Paper Series
APA, Harvard, Vancouver, ISO, and other styles
26

Kerracher, Natalie. "Tasks and visual techniques for the exploration of temporal graph data." Thesis, Edinburgh Napier University, 2017. http://researchrepository.napier.ac.uk/Output/977758.

Full text
Abstract:
This thesis considers the tasks involved in exploratory analysis of temporal graph data, and the visual techniques which are able to support these tasks. There has been an enormous increase in the amount and availability of graph (network) data, and in particular, graph data that is changing over time. Understanding the mechanisms involved in temporal change in a graph is of interest to a wide range of disciplines. While the application domain may differ, many of the underlying questions regarding the properties of the graph and mechanism of change are the same. The research area of temporal graph visualisation seeks to address the challenges involved in visually representing change in a graph over time. While most graph visualisation tools focus on static networks, recent research has been directed toward the development of temporal visualisation systems. By representing data using computer-generated graphical forms, Information Visualisation techniques harness human perceptual capabilities to recognise patterns, spot anomalies and outliers, and find relationships within the data. Interacting with these graphical representations allows individuals to explore large datasets and gain further insight into the relationships between different aspects of the data. Visual approaches are particularly relevant for Exploratory Data Analysis (EDA), where the person performing the analysis may be unfamiliar with the data set, and their goal is to make new discoveries and gain insight through its exploration. However, designing visual systems for EDA can be difficult, as the tasks which a person may wish to carry out during their analysis are not always known at the outset. Identifying and understanding the tasks involved in such a process has given rise to a number of task taxonomies which seek to elucidate the tasks and structure them in a useful way. While task taxonomies for static graph analysis exist, no suitable temporal graph taxonomy has yet been developed. The first part of this thesis focusses on the development of such a taxonomy. Through the extension and instantiation of an existing formal task framework for general EDA, a task taxonomy and a task design space are developed specifically for exploration of temporal graph data. The resultant task framework is evaluated with respect to extant classifications and is shown to address a number of deficiencies in task coverage in existing works. Its usefulness in both the design and evaluation processes is also demonstrated. Much research currently surrounds the development of systems and techniques for visual exploration of temporal graphs, but little is known about how the different types of techniques relate to one another and which tasks they are able to support. The second part of this thesis focusses on the possibilities in this area: a design space of the possible visual encodings for temporal graph data is developed, and extant techniques are classified into this space, revealing potential combinations of encodings which have not yet been employed. These may prove interesting opportunities for further research and the development of novel techniques. The third part of this work addresses the need to understand the types of analysis the different visual techniques support, and indeed whether new techniques are required. The techniques which are able to support the different task dimensions are considered.
This task-technique mapping reveals that visual exploration of temporal graph data requires techniques not only from temporal graph visualisation, but also from static graph visualisation and comparison, and temporal visualisation. A number of tasks which are unsupported or less well supported, which could prove interesting opportunities for future research, are identified. The taxonomies, design spaces, and mappings in this work bring order to the range of potential tasks of interest when exploring temporal graph data and the assortment of techniques developed to visualise this type of data, and are designed to be of use in both the design and evaluation of temporal graph visualisation systems.
APA, Harvard, Vancouver, ISO, and other styles
27

Corsini, Julien. "Analysis and evaluation of network intrusion detection methods to uncover data theft." Thesis, Edinburgh Napier University, 2009. http://researchrepository.napier.ac.uk/output/4031/.

Full text
Abstract:
Nowadays, the majority of corporations mainly use signature-based intrusion detection. This trend is partly due to the fact that signature detection is a well-known technology, as opposed to anomaly detection, which is one of the hot topics in network security research. A second reason for this fact may be that anomaly detectors are known to generate many alerts, the majority of which are false alarms. Corporations need concrete comparisons between different tools in order to choose which is best suited for their needs. This thesis aims at comparing an anomaly detector with a signature detector in order to establish which is best suited to detect a data theft threat. The second aim of this thesis is to establish the influence of the training period length of an anomaly Intrusion Detection System (IDS) on its detection rate. This thesis presents a Network-based Intrusion Detection System (NIDS) evaluation testbed setup. It shows the setup of two IDSes, the signature detector Snort and the anomaly detector Statistical Packet Anomaly Detection Engine (SPADE). The evaluation testbed also includes the setup of a data theft scenario (reconnaissance, brute force attack on a server and data theft). The results from the experiments carried out in this thesis proved inconclusive, mainly due to the fact that the anomaly detector SPADE requires a configuration adapted to the network monitored. Despite the fact that the experimental results proved inconclusive, this thesis could act as documentation for setting up a NIDS evaluation testbed. It could also be considered documentation for the anomaly detector SPADE. This statement is made from the observation that there is no centralised documentation about SPADE, and not a single research paper documents the setup of an evaluation testbed.
APA, Harvard, Vancouver, ISO, and other styles
28

Alava, Mónica Hernández. "Growth dynamics : an empirical investigation of output growth using international data." Thesis, University of Leicester, 2002. http://hdl.handle.net/2381/30140.

Full text
Abstract:
The rates of growth of output per head vary across countries. Despite the fact that these differences are of a small order of magnitude, they would translate into large differences in the average living standards of the countries if they were to persist over the years. It is therefore very important to understand the process of long run growth and, as a consequence, many recent studies concentrate on the issue of cross country convergence. The aim of this thesis is to investigate the process of growth across countries and the possibility of inter-relationships of these processes across countries. To this end, an empirical analysis of per capita output across countries is carried out first using the exact continuous-time version of two neoclassical growth models, the Solow growth model and the Ramsey-Cass-Koopmans model. Results show that when these models are estimated consistently, countries do not seem to be converging in the sense typically used in the literature. The rest of the thesis aims to investigate in more detail the processes by which growth in different countries might be related. Based on extensions of another neoclassical model, the Overlapping Generations model, and using a nonlinear switching regime model for estimation, two empirical analyses are carried out. The first one examines the role of balance of payments constraints in cross country growth determination. The second studies the extent of technology spillovers across countries and their effect on the process of growth. On the one hand, results reveal little evidence of current account deficits constraining growth in the long run in the G7 countries, although there is ample evidence of an influence on the short run dynamics of growth. On the other hand, spillovers of technology across the G7 countries are found to be of importance in the process of growth.
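For reference, the textbook continuous-time Solow model with Cobb-Douglas technology, a version of which underlies the first empirical analysis, can be written as follows; the thesis' estimated specification may differ in detail.

```latex
\dot{k}(t) \;=\; s\,k(t)^{\alpha} \;-\; (n + g + \delta)\,k(t),
\qquad
k^{*} \;=\; \left(\frac{s}{n + g + \delta}\right)^{\frac{1}{1-\alpha}},
```

where k is capital per effective worker, s the saving rate, n population growth, g technology growth, δ depreciation and α the capital share; output per effective worker is y = k^α, and convergence means k(t) approaches k* regardless of the initial condition.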
APA, Harvard, Vancouver, ISO, and other styles
29

Brown, John. "Spatial Allocation, Imputation, and Sampling Methods for Timber Product Output Data." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29147.

Full text
Abstract:
Data from the 2001 and 2003 timber product output (TPO) studies for Georgia were explored to determine new methods for handling missing data and finding suitable sampling estimators. Mean roundwood volume receipts per mill for the year 2003 were calculated using the methods developed by Rubin (1987). Mean receipts per mill ranged from 4.4 to 14.2 million ft3. The mean value of 9.3 million ft3 did not statistically differ from the NONMISS, SINGLE1, and SINGLE2 reference means (p=.68, .75, and .76 respectively). Fourteen estimators were examined to investigate sampling approaches, with estimators being of several mean types (simple random sample, ratio, stratified sample, and combined ratio) as well as employing two methods for stratification (the Dalenius-Hodges (DH) square root of the frequency method and a cluster analysis method). Relative efficiency (RE) improved when the number of groups increased and when employing a ratio estimator, particularly a combined ratio. Neither the DH method nor the cluster analysis method performed better than the other. Six bound sizes (1, 5, 10, 15, 20, and 25 percent) were considered for deriving sample sizes for the total volume of roundwood. The minimum achievable bound size was found to be 10 percent of the total receipts volume for the DH method using a two group stratification. This was true for both the stratified and combined ratio estimators. In addition, for the stratified and combined ratio estimators, only the DH method stratifications were able to reach a 10 percent bound on the total (6 of the 12 stratified estimators). The remaining six stratified estimators were able to achieve a 20 percent bound of the total. Finally, nonlinear repeated measures models were developed to spatially allocate mill receipts to surrounding counties in the event of obtaining only a mill's total receipt volume. A Gompertz model with a power spatial covariance was found to be the best performing when using road distances from the mills to either county center type (geographic or forest mass). These models utilized the cumulative frequency of mill receipts as the response variable, with cumulative frequencies based on distance from the mill to the county.
Ph. D.
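The Dalenius-Hodges rule mentioned above can be sketched as follows: histogram the receipt volumes, accumulate the square root of the bin frequencies, and cut the cumulative total into equal parts to obtain stratum boundaries. The bin count and the simulated receipt volumes below are illustrative assumptions, not the study's data.

```python
# Dalenius-Hodges "cumulative square root of frequency" stratification boundaries.
import math
import random
from typing import List

def dalenius_hodges_boundaries(values: List[float], num_strata: int = 2, num_bins: int = 50) -> List[float]:
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_bins
    freqs = [0] * num_bins
    for v in values:                                   # frequency histogram
        freqs[min(int((v - lo) / width), num_bins - 1)] += 1
    cum, total = [], 0.0
    for f in freqs:                                    # cumulative sqrt(frequency)
        total += math.sqrt(f)
        cum.append(total)
    step = total / num_strata
    cuts, target, k = [], step, 0
    for i, c in enumerate(cum):                        # cut where the cumulative total crosses each multiple of step
        while k < num_strata - 1 and c >= target:
            cuts.append(lo + (i + 1) * width)          # upper edge of the current bin
            k += 1
            target += step
    return cuts

receipts = [random.lognormvariate(1.5, 1.0) for _ in range(500)]   # fake mill receipt volumes
print(dalenius_hodges_boundaries(receipts, num_strata=2))
```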
APA, Harvard, Vancouver, ISO, and other styles
30

Aguiari, Davide. "Named Data Networking in IoT based sensor devices." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13297/.

Full text
Abstract:
In a world running on a “smart” vision, the progress of the Internet of Things (IoT) is faster than ever. The term “things” refers not just to computers, people and smartphones, but also to sensors, refrigerators, vehicles, clothing, food and so on. The Internet of Things is the possibility of providing an IP address for every item, so that it has an interface on the Internet. Household devices will then not just be commanded and monitored remotely, but will play an active role, establishing a communication network between them. The thesis begins with a general overview, the state of the art, of the IoT world and of sensor networks, examining their potential and any restrictions, if present. Then, every engineering aspect of the realized project is described in detail. This thesis will also prove that nowadays we have the right items and components for the realization of reliable low-cost sensors. The ultimate purpose is to verify the introduction of new network protocols like NDN (Named Data Networking) and to evaluate their performance and efficiency. Finally, the simulation output obtained with NS3 (Network Simulator) is presented: a scenario simulation using NDNSim and the ChronoSync application.
APA, Harvard, Vancouver, ISO, and other styles
31

Holovach, M. "OS for data protection in modern tablet devices." Thesis, Sumy State University, 2014. http://essuir.sumdu.edu.ua/handle/123456789/45439.

Full text
Abstract:
Nowadays, tablet devices are steadily increasing in popularity among modern users. Because of their portability, more and more people are getting used to them. The new possibilities make it easy to create and carry data, which consequently needs some protection to keep personal information secure.
APA, Harvard, Vancouver, ISO, and other styles
32

Mellet, Dieter Sydney-Charles. "An integrated continuous output linear power sensor using Hall effect vector multiplication." Diss., Pretoria : [s.n.], 2002. http://upetd.up.ac.za/thesis/available/etd-09012005-120807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Graves, Jamie Robert. "Forensic verification of operating system activity via novel data, acquisition and analysis techniques." Thesis, Edinburgh Napier University, 2009. http://researchrepository.napier.ac.uk/Output/6699.

Full text
Abstract:
Digital Forensics is a nascent field that faces a number of technical, procedural and cultural difficulties that must be overcome if it is to be recognised as a scientific discipline, and not just an art. Technical problems involve the need to develop standardised tools and techniques for the collection and analysis of digital evidence. This thesis is mainly concerned with the technical difficulties faced by the domain, in particular the exploration of techniques that could form the basis of trusted standards to scientifically verify data. This study presents a set of techniques and methodologies that can be used to describe the fitness of system calls originating from the Windows NT platform as a form of evidence. It does so in a manner that allows open investigation into how the activities described by this form of evidence can be verified. The performance impact on the Device Under Test (DUT) is explored via the division of the Windows NT system calls into service subsets. Of particular interest to this work is the file subset, as its system calls can be directly linked to user interaction. The quality of the data produced by the collection tool is then examined using the Basic Local Alignment Search Tool (BLAST) sequence alignment algorithm. In doing so, this study asserts that system calls provide a recording, or time line, of evidence extracted from the operating system, representing the actions undertaken. In addition, it asserts that these interactions can be compared against known profiles (fingerprints) of activity using BLAST, which can provide a set of statistics relating to the quality of a match and a measure of the similarity of the sequences under scrutiny. These are based on Karlin-Altschul statistics, which provide, amongst other values, a P-Value describing how often a sequence will occur within a search space. The manner in which these statistics are calculated is augmented by the novel generation of the NM1.5_D7326 scoring matrix, based on empirical data gathered from the operating system, which is compared against the de facto, biologically generated, BLOSUM62 scoring matrix. The impact on the Windows 2000 and Windows XP DUTs of monitoring most of the service subsets, including the file subset, is statistically insignificant when simple user interactions are performed on the operating system. For the file subset, p = 0.58 on Windows 2000 Service Pack 4, and p = 0.84 on Windows XP Service Pack 1. This study shows that if the event occurred in a sequence that originated on an operating system not subjected to high process load or system stress, a great deal of confidence can be placed in a gapped match, using either the NM1.5_D7326 or BLOSUM62 scoring matrices, indicating that an event occurred, as all fingerprints of interest (FOI) were identified. The worst-case BLOSUM62 P-Value = 1.10E-125 and the worst-case NM1.5_D7326 P-Value = 1.60E-72, showing that these matrices are comparable in their sensitivity under normal system conditions. This cannot be said for sequences gathered during high process load or system stress conditions: the NM1.5_D7326 scoring matrix failed to identify any FOI, while the BLOSUM62 scoring matrix returned a number of matches that may have been the FOI, as discerned via the supporting statistics, but that were not positively identified within the evaluation criteria. The techniques presented in this thesis are useful, structured and quantifiable.
They provide the basis for a set of methodologies that can be used for providing objective data for additional studies into this form of evidence, which can further explore the details of the calibration and analysis methods, thus supplying the basis for a trusted form of evidence, which may be described as fit-for-purpose.
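For readers unfamiliar with Karlin-Altschul statistics, the sketch below (an illustration, not code or parameters from the thesis) shows how a raw alignment score is converted into an expected number of chance matches and a P-Value; the lambda and K values and the sequence lengths are illustrative assumptions.

```python
import math

def karlin_altschul_pvalue(raw_score, lam, K, m, n):
    """Convert a raw alignment score into an E-value and P-value.

    raw_score : alignment score from a scoring matrix (e.g. a BLOSUM62-style matrix)
    lam, K    : Karlin-Altschul parameters estimated for that matrix
    m, n      : lengths of the query fingerprint and the search space
    """
    e_value = K * m * n * math.exp(-lam * raw_score)   # expected chance hits
    p_value = 1.0 - math.exp(-e_value)                  # P(at least one chance hit)
    return e_value, p_value

# Illustrative numbers only: a fingerprint of 40 system calls searched
# against a trace of 100,000 calls, with made-up lambda and K.
print(karlin_altschul_pvalue(raw_score=120, lam=0.27, K=0.13, m=40, n=100_000))
```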
APA, Harvard, Vancouver, ISO, and other styles
34

Hernández, Correa Evelio. "Control of nonlinear systems using input-output information." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/11176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Malone, Gwendolyn Joy. "Ranking and Selection Procedures for Bernoulli and Multinomial Data." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/7603.

Full text
Abstract:
Ranking and Selection procedures have been designed to select the best system from a number of alternatives, where the best system is defined by the given problem. The primary focus of this thesis is on experiments where the data are from simulated systems. In simulation ranking and selection procedures, four classes of comparison problems are typically encountered. We focus on two of them: Bernoulli and multinomial selection. Therefore, we wish to select the best system from a number of simulated alternatives where the best system is defined as either the one with the largest probability of success (Bernoulli selection) or the one with the greatest probability of being the best performer (multinomial selection). We focus on procedures that are sequential and use an indifference-zone formulation wherein the user specifies the smallest practical difference he wishes to detect between the best system and other contenders. We apply fully sequential procedures due to Kim and Nelson (2004) to Bernoulli data for terminating simulations, employing common random numbers. We find that significant savings in total observations can be realized for two to five systems when we wish to detect small differences between competing systems. We also study the multinomial selection problem. We offer a Monte Carlo simulation of the Bechhofer and Kulkarni (1984) MBK multinomial procedure and provide extended tables of results. In addition, we introduce a multi-factor extension of the MBK procedure. This procedure allows for multiple independent factors of interest to be tested simultaneously from one data source (e.g., one person will answer multiple independent surveys) with significant savings in total observations compared to the factors being tested in independent experiments (each survey is run with separate focus groups and results are combined after the experiment). Another multi-factor multinomial procedure is also introduced, which is an extension to the MBG procedure due to Bechhofer and Goldsman (1985, 1986). This procedure performs better than any other procedure to date for the multi-factor multinomial selection problem and should always be used whenever table values for the truncation point are available.
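As a toy illustration of the Bernoulli selection setting (not one of the sequential procedures studied in the thesis), the sketch below estimates how often a naive fixed-sample rule picks the system with the largest probability of success when it beats the contenders by at least an indifference-zone amount; all probabilities and sample sizes are illustrative assumptions.

```python
import random

def prob_correct_selection(p_best=0.55, p_others=(0.50, 0.50), n_obs=200, reps=2000, seed=1):
    """Toy fixed-sample Bernoulli selection: estimate P(correct selection).

    p_best, p_others : success probabilities; p_best exceeds the others by at
                       least the indifference-zone amount (0.05 in this example)
    n_obs            : observations drawn from each system
    reps             : Monte Carlo replications
    """
    rng = random.Random(seed)
    probs = (p_best,) + tuple(p_others)
    correct = 0
    for _ in range(reps):
        # sample n_obs Bernoulli trials from every system and pick the highest mean
        means = [sum(rng.random() < p for _ in range(n_obs)) / n_obs for p in probs]
        correct += means.index(max(means)) == 0
    return correct / reps

print(prob_correct_selection())
```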
APA, Harvard, Vancouver, ISO, and other styles
36

Marks, Lori J. "Mid-tech Support Strategies for Students with Autism: Pairing Boardmaker with Simple Voice Output Devices." Digital Commons @ East Tennessee State University, 2004. https://dc.etsu.edu/etsu-works/3690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Chinenyeze, Samuel Jaachimma. "Mango : a model-driven approach to engineering green Mobile Cloud Applications." Thesis, Edinburgh Napier University, 2017. http://researchrepository.napier.ac.uk/Output/976572.

Full text
Abstract:
Given the resource-constrained nature of mobile devices and the resource-abundant offerings of the cloud, several promising optimisation techniques have been proposed by the green computing research community. Prominent techniques and unique methods have been developed to offload resource- and computation-intensive tasks from mobile devices to the cloud. Most of the existing offloading techniques can only be applied to legacy mobile applications, as they are motivated by existing systems; consequently, they are realised with custom runtimes which incur overhead on the application. Moreover, existing approaches which can be applied during the software development phase are difficult to implement (being based on manual processes) and also fall short of overall (mobile-to-cloud) efficiency in software quality attributes or awareness of full-tier (mobile-to-cloud) implications. To address these issues, the thesis proposes a model-driven architecture for integrating software quality with green optimisation in Mobile Cloud Applications (MCAs), abbreviated as the Mango architecture. The core aim of the architecture is to present an approach which easily integrates software quality attributes (SQAs) with the green optimisation objective of Mobile Cloud Computing (MCC). Also, as MCA is an application domain which spans the mobile and cloud tiers, the Mango architecture takes into account the specification of SQAs across both tiers, for overall efficiency. Furthermore, as a model-driven architecture, models can be built for computation-intensive tasks and their SQAs, which in turn drive development, for development efficiency. Thus, a modelling framework (called Mosaic) and a full-tier test framework (called Beftigre) were proposed to automate the architecture derivation and demonstrate the efficiency of the Mango approach. Using real-world scenarios/applications, Mango has been demonstrated to enhance the MCA development process while achieving overall efficiency in terms of SQAs (including mobile performance and energy usage) compared to existing counterparts.
APA, Harvard, Vancouver, ISO, and other styles
38

Seevinck, Jennifer. "Emergence in interactive art." Thesis, University of Technology, Sydney, 2011.

Find full text
Abstract:
This thesis is concerned with creating and evaluating interactive art systems that facilitate emergent participant experiences. For the purposes of this research, interactive art refers to computer-based art involving physical participation from the audience, while emergence occurs when a new form or concept appears that was not directly implied by the context from which it arose. This emergent ‘whole’ is more than a simple sum of its parts. The research aims to develop understanding of the nature of emergent experiences that might arise during participant interaction with interactive art systems. It also aims to understand the design issues surrounding the creation of these systems. The approach used is Practice-based, integrating practice, evaluation and theoretical research. Practice used methods from Reflection-in-action and Iterative design to create two interactive art systems: Glass Pond and +-now. Creation of +-now resulted in a novel method for instantiating emergent shapes. Both art works were also evaluated in exploratory studies. In addition, a main study with 30 participants was conducted on participant interaction with +-now. These sessions were video recorded and participants were interviewed about their experience. Recordings were transcribed and analysed using Grounded theory methods. Emergent participant experiences were identified and classified using a taxonomy of emergence in interactive art. This taxonomy draws on theoretical research. The outcomes of this Practice-based research are summarised as follows. Two interactive art systems, where the second work clearly facilitates emergent interaction, were created. Their creation involved the development of a novel method for instantiating emergent shapes and it informed aesthetic and design issues surrounding interactive art systems for emergence. A taxonomy of emergence in interactive art was also created. Other outcomes are the evaluation findings about participant experiences, including different types of emergence experienced and the coding schemes produced during data analysis.
APA, Harvard, Vancouver, ISO, and other styles
39

Tandon, Ashish. "Analysis and optimization of data storage using enhanced object models in the .NET framework." Thesis, Edinburgh Napier University, 2007. http://researchrepository.napier.ac.uk/Output/4047.

Full text
Abstract:
The purpose of this thesis is to benchmark the database to examine and analyze performance using Microsoft COM+, the most commonly used component framework for developing component-based applications. A prototype application written in Microsoft Visual C#.NET was used to benchmark database performance on the Microsoft .NET Framework 2.0 and 3.0 environments, using data volumes ranging from low (100 rows) to high (10,000 rows) with five or ten user connections. Different types of applications (COM+, non-COM+ and .NET based) were used to compare their performance on the different data volumes with the specified numbers of users on .NET Framework 2.0 and 3.0. Results were collected and analyzed using operating system performance counter variables and Microsoft .NET class libraries, which also help in collecting system-level performance information. This can be beneficial to developers, stakeholders and management when deciding on the right technology to be used in conjunction with a database. The experiments conducted in this project resulted in substantial gains in the performance, scalability and availability of component-based applications using Microsoft COM+ features such as object pooling, application pooling, role-based security, transaction isolation and constructor enablement. The outcome of this project is that COM+ component-based applications provide optimized database performance results using SQL Server. There is a performance gain of at least 10% in the COM+ based application as compared to the non-COM+ based application. COM+ services features come at a performance penalty: the performance differences between the plain COM+ based application and applications using role-based security, constructor enablement and transaction isolation were around 15%, 20% and 35% respectively. The COM+ based application provides performance gains of around 15% and 45% on low and medium data volumes on .NET Framework 2.0 in comparison to 3.0. There is a significant gain of around 10% in the COM+ server-based application on .NET Framework 3.0 using a high volume of data, which indicates that high-volume applications work better with Framework 3.0 than with 2.0 on SQL Server. The application-type results show that the COM+ component-based application provides better performance than the non-COM+ and .NET based applications; the differences for the COM+ application on low and medium data volumes were around 20% and 30%, while the .NET based application performs better on the high volume of data, with a performance gain of around 10%. Similar results were obtained in tests conducted on MS Access, where the COM+ based application running under .NET Framework 2.0 performed better than the non-COM+ and .NET based applications on low and medium data volumes, and the COM+ application under .NET Framework 3.0 performed better on the high volume of data.
APA, Harvard, Vancouver, ISO, and other styles
40

Meterelliyoz, Kuyzu Melike. "Variance parameter estimation methods with re-use of data." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26490.

Full text
Abstract:
Thesis (Ph.D)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Co-Chair: Alexopoulos, Christos; Committee Co-Chair: Goldsman, David; Committee Member: Kim, Seong-Hee; Committee Member: Shapiro, Alexander; Committee Member: Spruill, Carl. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
41

Zorian, Yervant. "Optimized error coverage in built-in self-test by output data modification." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75778.

Full text
Abstract:
The concept of Built-In Self-Test (BIST) has recently become an increasingly attractive solution to the complex problem of testing VLSI chips. However, the realization of BIST faces some challenging problems of its own. One of these problems is to increase the quality of fault coverage of a BIST implementation, without incurring a large overhead. In particular, the loss of information in the output data compressor, which is typically a multi-input linear feedback shift register (MISR), is a major cause of concern.
In the recent past, several researchers have proposed different schemes to reduce this loss of information while keeping the area overhead small.
In this dissertation, a new BIST scheme, based on modifying the output data before compression, is developed. This scheme, called output data modification (ODM), exploits the knowledge of the functionality of the circuit under test to provide a circuit-specific BIST structure. This structure is developed so that it can conveniently be implemented for any general circuit under consideration. But more importantly, a proof of effectiveness is provided to show that ODM will, on the average, be orders of magnitude better than all existing schemes in its capability to reduce the information loss, for a given amount of area overhead.
Moreover, the constructive nature of the proof allows one to make a simple trade-off between the reduction tolerated in information loss and the area overhead needed to effect this reduction.
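To make the information-loss problem concrete, here is a toy sketch (not from the dissertation) of signature compaction in a multi-input signature register; the register width and feedback taps are illustrative assumptions, and the point is that distinct response streams can alias to the same signature, which is the loss that output data modification targets.

```python
def misr_compact(output_streams, width=8, taps=(7, 5, 4, 3)):
    """Toy multi-input signature register (MISR): compact test responses.

    output_streams : per-cycle output words from the circuit under test
    width          : register width in bits
    taps           : feedback tap positions (illustrative, not a specific
                     industrial characteristic polynomial)
    """
    state = 0
    mask = (1 << width) - 1
    for word in output_streams:
        feedback = 0
        for t in taps:                      # XOR of tapped bits forms the feedback
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask
        state ^= word & mask                # fold the new output word into the state
    return state                            # final signature

# A fault-free and a faulty response sequence can, in unlucky cases, produce
# the same final signature (aliasing), masking the fault.
good_responses = [0x3A, 0x11, 0xF0, 0x25]
print(hex(misr_compact(good_responses)))
```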
APA, Harvard, Vancouver, ISO, and other styles
42

Luiro, V. (Vesa). "Acquisition and analysis of performance data for mobile devices." Doctoral thesis, University of Oulu, 2003. http://urn.fi/urn:isbn:9514271319.

Full text
Abstract:
Abstract Electronic industry is developing advanced and versatile products to satisfy customers' needs. It is also creating new needs, which expand the market further. This highly competitive field forces companies to produce continuously better, and hence more complex, products at an increasingly fast rate. This is particularly true of the mobile phone industry, which pursues higher volumes and penetration rates throughout the world. Very high volumes and extreme complexity require intensive research and a commitment to high product quality. Mobile phone manufacturers must commit themselves to strict quality standards and programs, which ultimately enable high customer satisfaction. Both quality assessment and product management generally need a method of feedback to be able to react to the manufactured output. This thesis concentrates on this aspect of feedback. A preliminary customer survey revealed that the information received directly from customers might not be accurate enough to be used as primary feedback data. The quality of the information varies notably and depends entirely on the customers' ability to perceive the relevant parameters. This also affects greatly their ability to communicate the information to the customer interface and then all the way back to the manufacturer. Based on the findings, end customers' average level of knowledge of mobile phone technology is fair [C]. Therefore, it is recommended that more accurate means should be developed for acquiring feedback data. Also, based on other research findings, it would be important to minimize human intervention and to make the flow of information as direct as possible. Based on previous research and the present findings, a concept was designed which satisfies the specific need for accurate feedback from the performance of mobile phones in the field. The interfaces providing data throughout the whole product life cycle were also analyzed in detail. And finally, the concept was implemented and piloted with a mobile phone manufacturer. The pilot studies showed that an improved feedback capability would benefit not only product quality, but also various functions of the company producing mobile devices. The increased knowledge of device performance obtained from the system can be utilized in, for example, testing, design, marketing, and management and also at all customer interfaces in the field.
APA, Harvard, Vancouver, ISO, and other styles
43

Mastrippolito, Luigi. "NETWORKED DATA ACQUISITION DEVICES AS APPLIED TO AUTOMOTIVE TESTING." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/606740.

Full text
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The US Army Aberdeen Test Center (ATC) is acquiring, transferring, and databasing data during all phases of automotive testing using networked data acquisition devices. The devices are small ruggedized computer-based systems programmed with specific data acquisition tasks and then networked together with other devices in order to share information within a test item or vehicle. One of the devices is also networked to a ground-station for monitoring, control and data transfer of any of the devices on the net. Application of these devices has varied from single vehicle tests in a single geographical location up to a 100-vehicle nationwide test. Each device has a primary task such as acquiring data from vehicular data busses (MIL-STD-1553, SAE J1708 bus, SAE J1939 bus, RS-422 serial bus, etc.), GPS (time and position), analog sensors and video with audio. Each device has programmable options, maintained in a configuration file, that define the specific recording methods, real-time algorithms to be performed, data rates, and triggering parameters. The programmability of the system and bi-directional communications allow the configuration file to be modified remotely after the system is fielded. The primary data storage media of each device is onboard solid-state flash disk; therefore, a continuous communication link is not critical to data gathering. Data are gathered, quality checked and loaded into a database for analysis. The configuration file, as an integral part of the database, ensures configuration identity and management. A web-based graphical user interface provides preprogrammed query options for viewing, summarizing, graphing, and consolidating data. The database can also be queried for more detailed analyses. The architecture for this network approach to field data acquisition was developed under the Aberdeen Test Center program Versatile Information System Integrated On-Line (VISION). This paper will describe how merging data acquisition systems with network communications and information management tools provides a powerful resource for system engineers, analysts, evaluators and acquisition personnel.
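As a rough sketch of the kind of per-device configuration the abstract describes (recording methods, data rates, triggering parameters), the following is a hypothetical example; every field name and value is invented for illustration and is not taken from the VISION program.

```python
# Hypothetical device configuration for a networked data acquisition node.
device_config = {
    "device_id": "veh042-node1",
    "sources": [
        {"bus": "SAE J1939", "rate_hz": 100, "record": "continuous"},
        {"bus": "GPS", "rate_hz": 1, "record": "continuous"},
        {"sensor": "analog_ch3", "rate_hz": 500,
         "trigger": {"type": "threshold", "level": 4.2, "pre_s": 2, "post_s": 10}},
    ],
    "storage": {"media": "flash", "rollover_mb": 512},
    "uplink": {"mode": "on_demand", "allow_remote_reconfig": True},
}

if __name__ == "__main__":
    import json
    # What a ground station might download, modify, and push back to the device.
    print(json.dumps(device_config, indent=2))
```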
APA, Harvard, Vancouver, ISO, and other styles
44

Francis, Rita P. "Physician's acceptance of data from patient self-monitoring devices." Thesis, Capella University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10142170.

Full text
Abstract:

Due to the wide-scale adoption by the market and consumers of healthcare goods that track fitness, sleep, nutrition, and basic metabolic parameters through ubiquitous devices or mobile apps, it is vital to understand physicians’ attitudes towards consumer healthcare devices. No study had previously examined constructs related to technology acceptance and how they impacted behavioral intention for ubiquitous devices that produce self-monitoring data (SMD). A quantitative, non-experimental study was conducted to examine SMD acceptance, intent to use, and other factors important to physicians regarding SMD from ubiquitous devices. The researcher randomized the American Medical Association (AMA) membership list and sent out 5,000 invitations to physicians for participation. The final sample included 259 subjects, which consisted of 75.2% (N=194) male and 24.8% (N=64) female participants. The results from statistical analysis of the data gathered through survey methodology showed that the UTAUT2 constructs of performance expectancy, hedonic motivation, and price value were positively associated with the behavioral intention of SMD by physicians, while effort expectancy and social influence were not. Further, social influence was associated with use, while performance expectancy, effort expectancy, and hedonic motivation were not. Major positive implications of the findings include a contribution to the body of literature in the IT-healthcare arena regarding factors that influence technology acceptance, and a potential increase in the adoption of SMD among patients. Limitations of the study and recommendations for future research are discussed.

APA, Harvard, Vancouver, ISO, and other styles
45

Silva, Mário Jorge Marques da. "Mobile devices for electronic data capture in clinical studies." Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/14698.

Full text
Abstract:
Master's degree in Computer and Telematics Engineering
Mobile devices, including common smartphones and tablets, are being increasingly used for mHealth scenarios, in which the device either captures health values directly or acts as a hub for health sensors. Such applications allow machine-to-machine capture and persistence of data, avoiding problems with manual data entry. The availability of smartphones and tablets, on one hand, and wearable sensors/medical devices, on the other, creates an opportunity to use mobile data capture of health values also in clinical studies applications. In this dissertation, we propose a mobile front-end for clinical studies participants, developed in Android, including electronic data capture in ambulatory contexts. Besides common questionnaire-filling support, the front-end relies on the ISO/IEEE 11073 standard to directly obtain values from compliant medical devices. The work has been designed to integrate with the existing clinical studies platform uEDC (developed by iUZ Technologies). Early usage of the system shows that the mobile front-end can successfully support different devices and study protocols, fully integrated with the uEDC backend.
APA, Harvard, Vancouver, ISO, and other styles
46

Bui, Nhan Xuan 1958. "Seek reliability improvement in optical disk data storage devices." Thesis, The University of Arizona, 1991. http://hdl.handle.net/10150/558160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Nordwall, Jennifer. "Software Encryption in Mobile Devices." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-18656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Mayisela, Simphiwe Hector. "Data-centric security : towards a utopian model for protecting corporate data on mobile devices." Thesis, Rhodes University, 2014. http://hdl.handle.net/10962/d1011094.

Full text
Abstract:
Data-centric security is significant in understanding, assessing and mitigating the various risks and impacts of sharing information outside corporate boundaries. Information generally leaves corporate boundaries through mobile devices. Mobile devices continue to evolve as multi-functional tools for everyday life, surpassing their initial intended use. This added capability and increasingly extensive use of mobile devices does not come without a degree of risk; hence the need to guard and protect information as it exists beyond the corporate boundaries and throughout its lifecycle. Literature on existing models crafted to protect data, rather than the infrastructure in which the data resides, is reviewed. Technologies that organisations have implemented to adopt the data-centric model are studied. A utopian model that takes into account the shortcomings of existing technologies and deficiencies of common theories is proposed. Two sets of qualitative studies are reported: the first is a preliminary online survey to assess the ubiquity of mobile devices and the extent of technology adoption towards implementation of the data-centric model; the second comprises a focus survey and expert interviews pertaining to technologies that organisations have implemented to adopt the data-centric model. The latter study revealed insufficient data at the time of writing for the results to be statistically significant; however, indicative trends supported the assertions documented in the literature review. The question that this research answers is whether or not current technology implementations designed to mitigate risks from mobile devices actually address business requirements. This research question, answered through these two sets of qualitative studies, revealed inconsistencies between the technology implementations and business requirements. The thesis concludes by proposing a realistic model, based on the outcome of the qualitative study, which bridges the gap between the technology implementations and business requirements. Future work which could perhaps be conducted in light of the findings and the comments from this research is also considered.
APA, Harvard, Vancouver, ISO, and other styles
49

Phillips, Rhonda D. "A Probabilistic Classification Algorithm With Soft Classification Output." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/26701.

Full text
Abstract:
This thesis presents a shared memory parallel version of the hybrid classification algorithm IGSCR (iterative guided spectral class rejection), a novel data reduction technique that can be used in conjunction with PIGSCR (parallel IGSCR), a noise removal method based on the maximum noise fraction (MNF), and a continuous version of IGSCR (CIGSCR) that outputs soft classifications. All of the above are either classification algorithms or preprocessing algorithms necessary prior to the classification of high dimensional, noisy images. PIGSCR was developed to produce fast and portable code using Fortran 95, OpenMP, and the Hierarchical Data Format version 5 (HDF5) and accompanying data access library. The feature reduction method introduced in this thesis is based on the singular value decomposition (SVD). This feature reduction technique demonstrated that SVD-based feature reduction can lead to more accurate IGSCR classifications than PCA-based feature reduction. This thesis describes a new algorithm used to adaptively filter a remote sensing dataset based on signal-to-noise ratios (SNRs) once the maximum noise fraction (MNF) has been applied. The adaptive filtering scheme improves image quality as shown by estimated SNRs and classification accuracy improvements greater than 10%. The continuous iterative guided spectral class rejection (CIGSCR) classification method is based on the iterative guided spectral class rejection (IGSCR) classification method for remotely sensed data. Both CIGSCR and IGSCR use semisupervised clustering to locate clusters that are associated with classes in a classification scheme. This type of semisupervised classification method is particularly useful in remote sensing where datasets are large, training data are difficult to acquire, and clustering makes the identification of subclasses adequate for training purposes less difficult. Experimental results indicate that the soft classification output by CIGSCR is reasonably accurate (when compared to IGSCR), and the fundamental algorithmic changes in CIGSCR (from IGSCR) result in CIGSCR being less sensitive to input parameters that influence iterations.
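As a rough illustration of the SVD-based feature reduction step described above (a generic sketch, not the thesis code), the pixels of a multi-band image can be stacked into a matrix and projected onto the leading right singular vectors; the image dimensions and the number of retained features here are illustrative assumptions.

```python
import numpy as np

def svd_feature_reduction(pixels, k):
    """Project spectral features onto the top-k right singular vectors.

    pixels : (n_pixels, n_bands) matrix of spectra
    k      : number of reduced features to keep
    """
    centered = pixels - pixels.mean(axis=0)            # remove the per-band mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T                          # (n_pixels, k) reduced features

# Illustrative data: 1,000 pixels with 64 spectral bands reduced to 8 features.
rng = np.random.default_rng(42)
image = rng.normal(size=(1000, 64))
print(svd_feature_reduction(image, k=8).shape)
```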
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
50

Day, Daniel John, and DDay@groupwise swin edu au. "Three-dimensional bit optical data storage in a photorefractive polymer." Swinburne University of Technology. Centre for Micro-Photonics, 2001. http://adt.lib.swin.edu.au./public/adt-VSWT20050307.112258.

Full text
Abstract:
As the computer industry grows, so will the requirements for data storage. Magnetic memory has been the most stable method in terms of capacity and recording/reading speed. However, we have reached the point where a substantial increase in capacity cannot be produced without increasing the size of the system. When compact discs (CDs) were introduced in the 1980s they revolutionized the concept of data storage. While the initial force behind compact discs could easily be said to be the music industry, once recordable and rewritable discs became available they quickly found more use in the computer industry as backup devices. Since their inception, the capacity requirements have far exceeded what is available on a compact disc, and they are now following the same path as magnetic memories. Following this trend, it could be assumed that digital versatile discs or digital video discs (DVDs) have a limited lifetime as a storage medium. In fact it has been noted (Higuchi et al., 1999) that the maximum capacity of digital video discs will be reached in 3 to 5 years. The question then is, what comes next? The efficiency of conventional optical data storage is extremely poor. For an optically thick recording medium, both CDs and DVDs use less than 0.01% of the total volume to store the information. Three-dimensional bit optical data storage endeavors to increase the efficiency by recording information in a volume that is greater than 90% of the total volume. The concept of three-dimensional bit optical data storage was first proposed by Parthenopoulos and Rentzepis in 1989, where they demonstrated that capacities far exceeding that of compact discs could be achieved. Three-dimensional bit optical data storage relies on creating a highly localised chemical or physical change within a recording medium, such that further layers can be recorded without causing interference. Ideally the chemical/physical change in the material should be reversible to enable erasable/rewritable data storage. In order to create a highly localised effect, nonlinear excitation can be used, whereby the excitation is limited to a small region around the focal spot. Depending on the material and recording method there are several techniques for reading the information, such as transmission imaging or reflection confocal microscopy. However, all the recording and reading methods require focusing to a deep position within a recording medium; such focusing encounters spherical aberration as a result of the difference in the refractive indices between the immersion and recording media. This thesis has concentrated on several areas to understand and develop the concept of three-dimensional bit optical data storage. The photorefractive effect in crystals has been studied for many years and is now widely used in optoelectronic devices. The use of photorefractive polymers is a relatively new and exciting development in optical data storage. Until now they have been used solely in the area of holographic data storage. The research in this thesis was conducted using photorefractive materials that were fabricated in two polymer matrices, poly(N-vinylcarbazole) (PVK) and poly(Methyl Methacrylate) (PMMA). The recording samples also consisted of the following compounds in various proportions: 2,5-dimethyl-4-(p-nitrophenylazo)anisole (DMNPAA), 2,4,7-trinitro-9-fluorenone (TNF) and N-ethylcarbazole (ECZ).
In this project two-photon excitation was used as the recording mechanism to achieve erasable/rewritable data storage in a photorefractive polymer. As a result of two-photon excitation, the quadratic dependence of excitation on the incident intensity produces an excitation volume that is confined to the focal region in both the transverse and axial directions. Therefore, focusing the laser beam above or below its previous position provides a method by which layers of information can be recorded in the depth direction of a material, without causing interference from neighbouring layers. The feasibility of two-photon excitation in photorefractive polymers is demonstrated in this thesis. The quadratic relationship between excitation and incident light in two-photon excitation requires high photon density to ensure efficient excitation. The use of ultra-short pulsed lasers, while effective, is not a practical solution for an optical data storage system. This thesis demonstrates the ability to produce three-dimensional erasable/rewritable data storage in a photorefractive polymer using continuous wave illumination. Using this technology it has been possible to achieve a density of 88 Gbits/cm3, which corresponds to a capacity of 670 Gbytes on a compact disc sized recording medium. This is an increase of 1000 times the capacity of a CD and 130 times the capacity of current DVDs. While erasable optical data storage is an exciting prospect there are problems associated with the deterioration of the information. For long term information storage a permanent recording process would be more practical. It is demonstrated that there is a point after which further increases in the recording power result in the formation of a micro-cavity. While two-photon excitation is the recording method for erasable data storage, the increase in power results in an increase in ultra-violet absorption such that multi-photon excitation may occur. This thesis demonstrates the ability to record multi-layered arrays of micro-cavities. The change in refractive index associated with an erasable bit is less than 1%. As a result only phase sensitive reading methods (transmission imaging or differential interference contrast (DIC) microscopy) can be used to image a recorded bit. Both transmission and DIC imaging systems have poor axial resolution and therefore limit the density of the recording system, as well as being large optical systems. The introduction of a split or quadrant detector reduces the size of the optical reading system and is demonstrated to be sensitive enough to detect the phase changes of a recorded bit. However, the change in refractive index across a micro-cavity is large enough that reflection confocal microscopy can be used to detect a bit. It is demonstrated in this thesis that multi-layered micro-cavity arrays can be read using reflection confocal microscopy. Focusing of light to deep positions within an optical thick recording medium has the effect of increasing spherical aberration resulting from the refractive index mismatching between the immersion and recording media. The work in this thesis illustrates the effect of spherical aberration on the performance of both the recording and reading systems. The work conducted in this thesis shows the ability to record multi-layered erasable/rewritable information in a photorefractive polymer using pulsed and continuous wave two-photon excitation. 
It has also been demonstrated that through multi-photon excitation multi-layered micro-cavity arrays can be fabricated. It has also been illustrated that while spherical aberration deteriorates the performance of the recording and reading systems it is possible to achieve a density of greater than 88 Gbits/cm3.
APA, Harvard, Vancouver, ISO, and other styles