Dissertations / Theses on the topic 'Computer systems'

Listed below are the top 50 dissertations and theses on the topic 'Computer systems', with abstracts and full-text links where available.


1

Merritt, John W. "Distributed file systems in an authentication system." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9938.

Full text
2

Shone, N. "Detecting misbehaviour in a complex system-of-systems environment." Thesis, Liverpool John Moores University, 2014. http://researchonline.ljmu.ac.uk/4537/.

Full text
Abstract:
Modern systems are becoming increasingly complex, integrated and distributed in order to meet escalating demands for functionality. This has given rise to concepts such as the system-of-systems (SoS), which organises a myriad of independent component systems into a collaborative super-system capable of achieving otherwise unattainable levels of functionality. Despite its advantages, the SoS is still a nascent concept with many outstanding security concerns, including the lack of effective behavioural monitoring. This can be largely attributed to its distributed, decentralised and heterogeneous nature, which poses many significant challenges. The uncertainty and dynamics of both the SoS's structure and function pose further challenges. Owing to the unconventional nature of a SoS, existing behavioural monitoring solutions are often inadequate, as they are unable to overcome these challenges. This monitoring deficiency can result in misbehaviour, one of the most serious yet underestimated security threats facing SoSs and their components. This thesis presents a novel misbehaviour detection framework developed specifically for operation in a SoS environment. By combining uniquely calculated behavioural threshold profiles with periodic threshold adaptation, the framework is able to monitor dynamic behaviour and cope with suddenly occurring changes that affect threshold reliability. The framework improves SoS contribution and monitoring efficiency by controlling monitoring observations using statecharts, which react to the level of behavioural threat perceived by the system. The accuracy of behavioural analysis is improved by a novel algorithm that quantifies detected behavioural abnormalities in terms of their level of irregularity. The framework utilises collaborative behavioural monitoring to increase the accuracy of the behavioural analysis and to combat the threat posed by training-based attacks on the threshold adaptation process. The validity of collaborative behavioural monitoring is assured by a novel behavioural similarity assessment algorithm, which selects the most behaviourally appropriate SoS components to collaborate with. The proposed framework and its constituent techniques are evaluated via numerous experiments, which examine both their limitations and their relative merits compared to monitoring solutions and techniques from similar research areas. The results show that the framework is able to offer misbehaviour monitoring in a SoS environment with increased efficiency and reduced false positive rates, false negative rates, resource usage and run-time requirements.
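The adaptive-threshold mechanism summarised in this abstract can be illustrated with a minimal sketch. The class, window size and deviation factor below are illustrative assumptions for exposition, not the thesis's actual algorithm:

```python
from collections import deque

class AdaptiveThresholdMonitor:
    """Flags observations that deviate from a periodically adapted
    behavioural threshold (rolling mean +/- k standard deviations)."""

    def __init__(self, window=50, k=3.0):
        self.history = deque(maxlen=window)  # bounded window: old behaviour ages out
        self.k = k

    def observe(self, value):
        if len(self.history) >= 10:  # need a baseline before judging behaviour
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = var ** 0.5
            anomalous = abs(value - mean) > self.k * max(std, 1e-9)
        else:
            anomalous = False
        self.history.append(value)  # threshold adapts as behaviour drifts
        return anomalous

monitor = AdaptiveThresholdMonitor()
normal = [10 + (i % 3) for i in range(40)]   # stable behaviour: 10, 11, 12, ...
flags = [monitor.observe(v) for v in normal]
spike_flag = monitor.observe(100)            # sudden abnormal reading
```

Because the window slides, the profile tracks gradual behavioural drift while still flagging sharp deviations, which is the trade-off the thesis's periodic threshold adaptation addresses.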
3

Paterson, Colin Alexander. "Computer controlled suspension systems." Thesis, Coventry University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357047.

Full text
4

Harbison, William Samuel. "Trusting in computer systems." Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.627286.

Full text
5

Thulnoon, A. A. T. "Efficient runtime security system for decentralised distributed systems." Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/9043/.

Full text
Abstract:
Distributed systems can be defined as systems that are scattered over geographical distances and provide different activities through communication, processing, data transfer and so on, thereby increasing cooperation, efficiency and reliability in dealing with users and data resources jointly. For this reason, distributed systems have been shown to be a promising infrastructure for most applications in the digital world. Despite their advantages, keeping these systems secure is a complex task, because the unconventional nature of distributed systems can produce many security problems such as phishing, denial of service or eavesdropping. Adopting security and privacy policies in distributed systems therefore increases the trustworthiness between users and these systems. However, adding or updating security is considered one of the most challenging concerns, owing to the various security vulnerabilities that exist in distributed systems. The most significant is inserting, modifying or even removing a security concern according to the security status that may arise at runtime. These problems are exacerbated when the system adopts the multi-hop concept for transmitting and processing information, which poses many significant security challenges, especially when dealing with decentralised distributed systems where security must be furnished end-to-end. Unfortunately, existing solutions are insufficient to deal with these problems: CORBA, for example, considers only one-to-one relationships, while DSAW deals with end-to-end security but without taking into account the possibility of information sensitivity changing at runtime. This thesis proposes a mechanism for enforcing security policies and dealing with distributed systems' security weaknesses from a software perspective. The proposed solution utilises Aspect-Oriented Programming (AOP) to address security concerns at compile time and at runtime. It is based on a decentralised distributed system that adopts the multi-hop concept to deal with different requested tasks, and focuses on achieving high accuracy, data integrity and high efficiency in real time. This is done by modularising the most effective security solutions, access control and cryptography, using an aspect-oriented programming language. The experimental results show that the proposed solution overcomes the shortcomings of existing solutions by integrating fully with the decentralised distributed system to achieve dynamic, highly cooperative, high-performance and end-to-end holistic security.
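The aspect-oriented idea of weaving cross-cutting security concerns (access control, integrity) around business logic can be sketched in Python, with a decorator standing in for AOP "around" advice. All names, the role check and the digest scheme are illustrative assumptions, not the thesis's implementation:

```python
import functools
import hashlib

def secured(required_role):
    """Cross-cutting security concern woven around business logic,
    in the spirit of AOP 'around' advice: the business function
    stays free of security code."""
    def aspect(func):
        @functools.wraps(func)
        def wrapper(user, payload):
            if user.get("role") != required_role:           # access-control advice
                raise PermissionError("access denied")
            digest = hashlib.sha256(payload.encode()).hexdigest()
            result = func(user, payload)                    # the advised join point
            return {"result": result, "integrity": digest}  # integrity metadata
        return wrapper
    return aspect

@secured("operator")
def process_task(user, payload):
    # Pure business logic; security is applied externally by the aspect.
    return payload.upper()

out = process_task({"role": "operator"}, "sensor-data")
```

Because the advice is attached declaratively, a security concern can be added, changed or removed without touching the business function, which mirrors the modularity argument made in the abstract.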
6

Perucic, Michele. "Performance analysis of computer systems." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33020.

Full text
Abstract:
With an ever-growing and more productive computer industry, the performance of computer systems has become a major concern. Problems related to computer performance usually occur either because the system is not correctly sized or because its resources are not adequately allocated.
Performance analysis of computer systems is the process of evaluating the current performance of a system by monitoring and studying its behavior under different loads. It involves a deep understanding of the functioning of the basic components of a system. Performance analysis is typically followed by performance tuning, in which required changes are applied to the system in order to achieve optimum performance.
In this thesis, we discuss the basics of performance analysis. The different resources of a system are described and an overview of performance-monitoring tools for these resources is presented. An application of performance analysis is also included: two new major systems at McGill University are analyzed (the library management system ALEPH and the finance system BANNER).
7

Styne, Bruce Alan. "Management systems for computer graphics." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303247.

Full text
8

Xiao, Cheng. "Computer simulation of fluid systems." Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386636.

Full text
9

Anderson, Thomas R. "Computer modelling of agroforestry systems." Thesis, University of Edinburgh, 1991. http://hdl.handle.net/1842/13429.

Full text
Abstract:
The potential of agroforestry in the British uplands depends largely on the ability of system components to efficiently use resources for which they compete. A typical system would comprise conifers planted at wide spacing, with sheep grazing pasture beneath. Computer models were developed to investigate the growth of trees and pasture in a British upland agroforest system, assuming that growth is primarily a function of light intercepted. Some of the implications of growing trees at wide spacing compared to conventional spacings, and the impact of trees on the spatial and annual production of pasture, were examined. Competition for environmental resources between trees and pasture was assumed to be exclusively for light: below-ground interactions were ignored. Empirical methods were used to try and predict timber production in agroforest stands based on data for conventional forest stands, and data for widely-spaced radiata pine grown in South Africa. These methods attempted to relate stem volume increment to stand density, age, and derived competition measures. Inadequacy of the data base prevented successful extrapolation of growth trends of British stands, although direct extrapolation of the South African data did permit predictions to be made. A mechanistic individual-tree growth model was developed, both to investigate the mechanisms of tree growth at wide spacings, and to provide an interface for a pasture model to examine pasture growth under the shading conditions imposed by a tree canopy. The process of light interception as influenced by radiation geometry and stand architecture was treated in detail. Other features given detailed consideration include carbon partitioning, respiration, the dynamics of foliage and crown dimensions, and wood density within tree stems. The predictive ability of the model was considered poor, resulting from inadequate knowledge and data on various aspects of tree growth. 
The model highlighted the need for further research into the dynamics of crown dimensions, foliage dynamics, carbon partitioning patterns and wood density within stems, and how these are affected by wide spacing. A pasture model was developed to investigate growth beneath the heterogeneous light environment created by an agroforest tree canopy. Pasture growth was closely related to light impinging on the crop, with temperature having only a minor effect. The model highlighted the fact that significant physiological adaptation (increased specific leaf area, decreased carbon partitioned below-ground and changes in the nitrogen cycle) is likely to occur in pasture shaded by a tree canopy.
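The abstract's core assumption, that growth is primarily a function of intercepted light, is commonly formalised with the Beer-Lambert law for canopies. The sketch below uses that standard formulation with illustrative parameter values; it is not the thesis's detailed mechanistic model:

```python
import math

def intercepted_fraction(lai, k=0.5):
    """Beer-Lambert law for canopies: fraction of incident light
    intercepted by a canopy with leaf area index `lai` and
    extinction coefficient `k` (k ~ 0.5 is a typical value)."""
    return 1.0 - math.exp(-k * lai)

def daily_growth(incident_mj, lai, rue=1.2, k=0.5):
    """Biomass gain modelled as radiation-use efficiency (RUE, g/MJ)
    times intercepted radiation; parameter values are illustrative."""
    return rue * incident_mj * intercepted_fraction(lai, k)

# 20 MJ m^-2 of incident radiation on a canopy of LAI 3
gain = daily_growth(incident_mj=20.0, lai=3.0)  # g m^-2 day^-1
```

In an agroforest setting, the pasture beneath the trees would receive only the radiation the tree canopy transmits, i.e. `incident_mj * (1 - intercepted_fraction(tree_lai))`, which is how shading couples the two components.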
10

Battisti, Anna. "Computer Simulation of Biological Systems." Doctoral thesis, Università degli studi di Trento, 2012. https://hdl.handle.net/11572/368453.

Full text
Abstract:
This thesis investigates two biological systems using atomistic modelling and molecular dynamics simulation. The work is focused on: (a) the study of the interaction between a segment of a DNA molecule and a functionalized surface; (b) the dynamical modelling of the tau protein, an intrinsically disordered protein. We briefly describe the two problems here; for their detailed introduction we refer respectively to chapter DNA and chapter TAU. The interest in the adsorption of DNA on functionalized surfaces is related to the considerable effort that has been devoted in recent years to developing technologies for faster and cheaper genome sequencing. In order to sequence a DNA molecule, it has to be extracted from the cell where it is stored (e.g. blood cells). Consequently, any genomic analysis requires a purification process to remove proteins, lipids and any other contaminants from the DNA molecule. The extraction and purification of DNA from biological samples is hence the first step towards efficient and cheap genome sequencing. Using the chemical and physical properties of DNA, it is possible to generate an attractive interaction between this macromolecule and a suitably treated surface; once positioned on the surface, the DNA can be more easily purified. In this work we set up a detailed molecular model of DNA interacting with a surface functionalized with amino silanes, with the intent of investigating the free energy of adsorption of small DNA oligomers as a function of the pH and ionic strength of the solution. The tau protein belongs to the category of Intrinsically Disordered Proteins (IDPs), which in their native state do not have a stable average structure but fluctuate between many conformations. In its physiological state, the tau protein helps nucleate and stabilize the microtubules in the axons of neurons. The same tau, in a pathological aggregation, is on the other hand involved in the development of Alzheimer's disease. IDPs do not have a definite 3D structure, so their dynamical simulation cannot start from a known list of atomistic positions such as a Protein Data Bank file. We first introduce a procedure to find an initial dynamical state for a generic IDP and apply it to the tau protein. We then analyze the dynamical properties of tau, such as the propensity of residues to form temporary secondary structures like beta-sheets or alpha-helices.
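Molecular dynamics simulations of the kind described in this abstract rest on numerical integrators such as velocity Verlet. The sketch below applies it to a single harmonic "bond" as a stand-in for a real force field; it is an illustration of the integration scheme, not the thesis's simulation setup:

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Velocity Verlet integrator, the workhorse of molecular dynamics:
    time-reversible, with bounded energy error of order dt^2."""
    a = force(x) / mass
    traj = [x]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass              # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity from averaged accelerations
        a = a_new
        traj.append(x)
    return traj, v

# Harmonic 'bond' with spring constant k = 1 in reduced units.
k = 1.0
traj, v_end = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x,
                              mass=1.0, dt=0.05, steps=500)
```

The practical appeal of the scheme is that total energy stays bounded over long trajectories rather than drifting, which is why production MD codes use it (or close relatives) rather than naive Euler integration.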
11

Battisti, Anna. "Computer Simulation of Biological Systems." Doctoral thesis, University of Trento, 2012. http://eprints-phd.biblio.unitn.it/689/1/Tesi-PhD.pdf.

Full text
12

Gregory, Frank Hutson. "A logical analysis of soft systems modelling : implications for information system design and knowledge based system design." Thesis, University of Warwick, 1993. http://wrap.warwick.ac.uk/2888/.

Full text
Abstract:
The thesis undertakes an analysis of the modelling methods used in the Soft Systems Methodology (SSM) developed by Peter Checkland and Brian Wilson. The analysis is undertaken using formal logic and work drawn from modern Anglo-American analytical philosophy, especially work in the areas of philosophical logic, the theory of meaning, epistemology and the philosophy of science. The ability of SSM models to represent causation is found to be deficient, and improved modelling techniques suitable for cause and effect analysis are developed. The notional status of SSM models is explained in terms of Wittgenstein's language game theory. Modal predicate logic is used to solve the problem of mapping notional models onto the real world. The thesis presents a method for extending SSM modelling into a system for the design of a knowledge-based system. This six-stage method comprises: systems analysis, using SSM models; language creation, using logico-linguistic models; knowledge elicitation, using empirical models; knowledge representation, using modal predicate logic; codification, using Prolog; and verification, using a type of non-monotonic logic. The resulting system is constructed in such a way that built-in inductive hypotheses can be falsified, as in Karl Popper's philosophy of science, by particular facts. As the system can learn what is false, it has some artificial intelligence capability. A variant of the method can be used for the design of other types of information system, such as a relational database.
13

Abdlhamed, M. "Intrusion prediction system for cloud computing and network based systems." Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/8897/.

Full text
Abstract:
Cloud computing offers cost-effective computational and storage services with on-demand, scalable capacity according to customers' needs. These properties encourage organisations and individuals across many disciplines to migrate from classical computing to cloud computing. Although cloud computing is a trendy technology that opens horizons for many businesses, it is a new paradigm that exploits already existing computing technologies in a new framework rather than being a novel technology in itself. This means that cloud computing has inherited classical computing problems that remain challenging. Cloud computing security is considered one of the major problems, requiring strong security systems to protect the system and the valuable data stored and processed in it. Intrusion detection systems are an important security component and defence layer that detect cyber-attacks and malicious activities in cloud and non-cloud environments. They have limitations, however: attacks are often detected only once the damage has already been done. In recent years, cyber-attacks have increased rapidly in volume and diversity. In 2013, for example, over 552 million customers' identities and crucial information were revealed through data breaches worldwide [3]. These growing threats are further demonstrated in the 50,000 daily attacks on the London Stock Exchange [4]. It has been predicted that the economic impact of cyber-attacks will cost the global economy $3 trillion on aggregate by 2020 [5]. This thesis proposes an Intrusion Prediction System capable of sensing an attack before it happens in cloud or non-cloud environments. The proposed solution is based on assessing the host system's vulnerabilities and monitoring the network traffic for attack preparations, and has three main modules. The first is a monitoring module that observes the network for any intrusion preparations. The thesis proposes a new dynamic-selective statistical algorithm for detecting scan activities, part of the reconnaissance that represents an essential step in network attack preparation. The proposed method performs a selective statistical analysis of network traffic, searching for attack or intrusion indications, by exploring and applying different statistical and probabilistic methods for scan detection. The second module is vulnerability assessment, which evaluates the weaknesses and faults of the system and measures the probability of the system falling victim to a cyber-attack. The third module, the prediction module, combines the output of the other two and performs risk assessments of the system's security against predicted intrusions. The results of the conducted experiments show that the suggested system outperforms analogous methods in the performance of network scan detection, which translates into a significant improvement in the security of the targeted system. The scanning detection algorithm achieved high detection accuracy with a 0% false negative rate and a 50% false positive rate. In terms of performance, the detection algorithm consumed only 23% of the data needed for analysis compared to the best-performing rival detection method.
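A simple statistical indicator of the scan activity this abstract targets is the number of distinct ports probed per source within an observation window. The sketch below is a toy illustration of that idea (all names and the threshold are illustrative), not the thesis's dynamic-selective algorithm:

```python
from collections import defaultdict

def detect_scanners(events, port_threshold=10):
    """Flag sources that probe an unusually large number of distinct
    destination ports, a classic indicator of reconnaissance scanning.
    `events` is an iterable of (source_ip, destination_port) pairs."""
    ports_seen = defaultdict(set)
    for src, dst_port in events:
        ports_seen[src].add(dst_port)
    return {src for src, ports in ports_seen.items()
            if len(ports) >= port_threshold}

traffic = [("10.0.0.5", p) for p in range(1, 25)]   # sweeps 24 distinct ports
traffic += [("10.0.0.7", 443)] * 50                 # busy but benign: one port
scanners = detect_scanners(traffic)
```

Counting distinct ports rather than raw packets is what separates the scanner from the merely busy host here; a production detector would additionally window by time and adapt the threshold to baseline traffic.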
14

Redfern, Ian Douglas. "Automatic coset systems." Thesis, University of Warwick, 1993. http://wrap.warwick.ac.uk/56939/.

Full text
Abstract:
This thesis describes the theory of automatic coset systems. These provide a simple and economical way of describing a system of cosets in a group with respect to a subgroup, such as the cosets of the stabiliser of an object under a group of transformations. An automatic coset system possesses a finite state automaton that provides a name for each coset, and a set of finite state automata that allow these cosets to be multiplied by group generators. An algorithm is given that will produce a certain type of automatic coset system, should one exist, from a description of the group and subgroup. The type of system produced has the advantage that it names each coset uniquely using as short a name as possible. This makes it particularly useful for coset enumeration, and several examples of its use are given in an appendix. Two theorems are also proved: the property of being an automatic coset system is independent of the generating set chosen, and quasiconvex subgroups of hyperbolic groups have automatic coset systems.
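The idea of naming each coset uniquely by a shortest word can be illustrated concretely for permutation groups, where the cosets of a point stabiliser correspond to the points of an orbit and breadth-first search over the generators assigns each coset a shortest name. This is a toy analogue of the concept, not the thesis's automata-based construction:

```python
from collections import deque

def coset_names(generators, point):
    """Name each coset of the stabiliser of `point` by a shortest
    generator word mapping `point` to the corresponding orbit element.
    Generators are permutations given as tuples (images of 0..n-1)."""
    names = {point: ""}              # the identity coset gets the empty word
    queue = deque([point])
    while queue:                     # breadth-first search => shortest words
        p = queue.popleft()
        for label, g in generators.items():
            q = g[p]
            if q not in names:       # first word found is a shortest one
                names[q] = names[p] + label
                queue.append(q)
    return names

# Symmetric group S4 acting on {0,1,2,3}, generated by a transposition
# and a 4-cycle; the stabiliser of 0 has four cosets, one per orbit point.
gens = {"a": (1, 0, 2, 3),   # swap 0 and 1
        "b": (1, 2, 3, 0)}   # 4-cycle 0 -> 1 -> 2 -> 3 -> 0
names = coset_names(gens, 0)
```

By the orbit-stabiliser correspondence, each orbit point stands for one coset, so the returned dictionary is exactly a system of unique shortest coset names, which is the property the abstract highlights as useful for coset enumeration.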
15

Zhao, G. F. "A real-time messaging system for distributed computer control systems." Thesis, Swansea University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636733.

Full text
Abstract:
The OSI-based data communication standards, and particularly the MAP-selected profile, have provided a genuine, vendor-independent communication environment, crucial to multiple computer-based control applications. One of the vital components in the Application Layer of these standards, aimed directly at supporting communications between programmable devices in manufacturing environments, is the 'Manufacturing Message Specification' (MMS). Although MMS has been acknowledged as vitally important in standardizing communications between shop-floor devices, and despite having been developed specifically for industrial environments, it does not have the capacity to support directly the time-critical services which are vital to real-time applications. Effort, therefore, needs to be expended in extending the current MMS concepts to fulfil real-time requirements. This thesis is dedicated to tackling this problem. It first analyzes the real-time requirements and the related OSI-based standards, especially MMS. Then it proposes services and functions which should be included within both the inherent OSI supporting structures and MMS itself, in order to fulfil these real-time requirements. The thesis also provides background comments on the support required from the associated computer architectures. Finally, it reviews a prototype implementation of the proposals and analyzes the results obtained. The original contribution of this work lies in the proposed extensions to the core MMS proposal - these being based on a fundamentally radical architecture, which it is suggested, is necessary to support a genuine real-time distributed computer control system.
16

Ahmad, Farooq. "An expert system for computer-aided design of control systems." Thesis, University of Strathclyde, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357165.

Full text
17

Ferguson, John Urquhart. "Mutually reinforcing systems." Thesis, University of Glasgow, 2011. http://theses.gla.ac.uk/2760/.

Full text
Abstract:
Human computation can be described as outsourcing part of a computational process to humans. This technique might be used when a problem can be solved better by humans than computers or it may require a level of adaptation that computers are not yet capable of handling. This can be particularly important in changeable settings which require a greater level of adaptation to the surrounding environment. In most cases, human computation has been used to gather data that computers struggle to create. Games with by-products can provide an incentive for people to carry out such tasks by rewarding them with entertainment. These are games which are designed to create a by-product during the course of regular play. However, such games have traditionally been unable to deal with requests for specific data, relying instead on a broad capture of data in the hope that it will cover specific needs. A new method is needed to focus the efforts of human computation and produce specifically requested results. This would make human computation a more valuable and versatile technique. Mutually reinforcing systems are a new approach to human computation that tries to attain this focus. Ordinary human computation systems tend to work in isolation and do not work directly with each other. Mutually reinforcing systems are an attempt to allow multiple human computation systems to work together so that each can benefit from the other's strengths. For example, a non-game system can request specific data from a game. The game can then tailor its game-play to deliver the required by-products from the players. This is also beneficial to the game because the requests become game content, creating variety in the game-play which helps to prevent players getting bored of the game. Mobile systems provide a particularly good test of human computation because they allow users to react to their environment. Real world environments are changeable and require higher levels of adaptation from the users. 
This means that, in addition to the human computation required by other systems, mobile systems can also take advantage of a user's ability to apply environmental context to the computational task. This research explores the effects of mutually reinforcing systems on mobile games with by-products. These effects are explored by building and testing mutually reinforcing systems, including mobile games. A review of existing literature, human computation systems and games with by-products sets out the problems that exist in outsourcing parts of a computational process to humans. Mutually reinforcing systems are presented as one approach to addressing some of these problems. Example systems have been created to demonstrate the successes and failures of this approach, and their evolving designs have been documented. The evaluation of these systems is presented along with a discussion of the outcomes and possible future work, and a conclusion summarizes the findings of the work carried out. This dissertation shows that human computation techniques can be extended to allow the collection and classification of useful contextual information in mobile environments, and that the resulting by-products can be made to match the specific needs of another system.
18

Tosun, Suleyman. "Reliability-centric system design for embedded systems." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2005. http://wwwlib.umi.com/cr/syr/main.

Full text
19

Smith, Barry S. "Integrated inspection system in manufacturing : vision systems /." Master's thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-04272010-020147/.

Full text
20

Vestlund, Christian. "Threat Analysis on Vehicle Computer Systems." Thesis, Linköping University, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-53661.

Full text
Abstract:

Vehicles have been present in our society for over a century, but until recently they have been standalone systems. With increasing initiatives to inter-network vehicles in order to avoid accidents and reduce environmental impact, the view of a vehicle as a standalone system needs to be reconsidered. Networking and cooperation between vehicles require that all systems and the information therein are trustworthy. Faulty or malicious vehicle systems can thus affect not only a single vehicle but the entire network. The detection of anomalous behaviour in a vehicle computer system is therefore important. To improve vehicle systems, we strive to achieve security awareness within the vehicle computer system. As a first step, we identify threats to the vehicle computer system and examine what has been done to address them.

We perform a threat analysis consisting of fault trees and misuse cases to identify the threats. The fault trees provide a way to connect the threats found with vehicle stakeholders' goals, and the connection between stakeholder goals and threats highlights the need for threat mitigation.

Several research initiatives are discussed to find out what has been done to address the identified threats and to establish the state of research on security in vehicle computer systems.

Lastly, an error model for the Controller Area Network (CAN) is proposed to model the consequences of threats applied to the CAN bus.

21

Bouyer, Maouen Abdelkarim. "Computer simulations of alkane-zeolite systems." Doctoral thesis, Universite Libre de Bruxelles, 1998. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Blakes, Jonathan. "Infobiotics : computer-aided synthetic systems biology." Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/13434/.

Full text
Abstract:
Until very recently, Systems Biology has, despite its stated goals, been too reductive in terms of the models being constructed, and the methods used have been, on the one hand, unsuited for large-scale adoption or integration of knowledge across scales, and on the other, too fragmented. The thesis of this dissertation is that better computational languages and seamlessly integrated tools are required by systems and synthetic biologists to enable them to meet the significant challenges involved in understanding life as it is, and, by designing, modelling and manufacturing novel organisms, in understanding life as it could be. We call this goal, where everything necessary to conduct model-driven investigations of cellular circuitry and emergent effects in populations of cells is available without significant context-switching, “one-pot” in silico synthetic systems biology, in analogy to “one-pot” chemistry and “one-pot” biology. Our strategy is to increase the understandability and reusability of models and experiments, thereby avoiding unnecessary duplication of effort, with practical gains in the efficiency of delivering usable prototype models and systems. Key to this endeavour are graphical interfaces that assist novice users by hiding the complexity of the underlying tools and limiting choices to only what is appropriate and useful, thus ensuring that the results of in silico experiments are consistent, comparable and reproducible. This dissertation describes the conception, software engineering and use of two novel software platforms for systems and synthetic biology: the Infobiotics Workbench for modelling, in silico experimentation and analysis of multi-cellular biological systems; and DNA Library Designer, with the DNALD language, for the compact programmatic specification of combinatorial DNA libraries, as the first stage of a DNA synthesis pipeline, enabling methodical exploration of biological problem spaces.
Infobiotics models are formalised as Lattice Population P systems, a novel framework for the specification of spatially-discrete and multi-compartmental rule-based models, imbued with a stochastic execution semantics. This framework was developed to meet the needs of real systems biology problems: hormone transport and signalling in the root of Arabidopsis thaliana, and quorum sensing in the pathogenic bacterium Pseudomonas aeruginosa. Our tools have also been used to prototype a novel synthetic biological system for pattern formation that has been successfully implemented in vitro. Taken together, these novel software platforms provide a complete toolchain, from design to wet-lab implementation, of synthetic biological circuits, enabling a step change in the scale of biological investigations that is orders of magnitude greater than could previously be performed in one in silico “pot”.
APA, Harvard, Vancouver, ISO, and other styles
23

Soukup, Michael. "Brain-Computer Interface In Control Systems." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-25749.

Full text
Abstract:
A Brain-Computer Interface (BCI) is a system that allows for direct communication between the brain and an external device. Originally, the motivation for developing BCIs has been to provide severely disabled individuals with a basic communication system. In recent years, BCIs directed at regular consumers in practical control applications have gained popularity as well, for which the ultimate goal is to provide a more natural way of communicating with machines. However, BCIs intended for use in control systems face several challenges and are still inferior to conventional controllers in terms of usability, reliability and practical value. In this thesis, we explore two novel concepts that can strengthen BCIs. The first concept relies on detection of so-called Error-Related Potentials (ErrPs), which are the response in brainwaves to an erroneous event. We argue that these potentials can serve as reward-based signals that give feedback to the system, which enables the BCI to adapt to the user. The second concept is to use sequence labeling frameworks based on Conditional Random Fields (CRFs) to translate brainwaves into control signals with greater accuracy. We also suggest how these two concepts can be combined. Our experiments to detect ErrPs in BCI control applications, using a consumer-grade headset to obtain EEG measurements, indicate no presence of ErrPs; however, the reliability of the EEG recordings is questionable. Furthermore, we have developed a new implementation of the so-called Sparse Hidden-Dynamic CRF (SHDCRF) and measure its performance on a common BCI classification task. In our experiment, the model outperforms similar classifiers that represent the state of the art, and the results suggest that the proposed model is superior in terms of accuracy and modeling capacity.
APA, Harvard, Vancouver, ISO, and other styles
24

Afzal, Tahir Mahmood. "Load sharing in distributed computer systems." Thesis, University of Newcastle Upon Tyne, 1987. http://hdl.handle.net/10443/2066.

Full text
Abstract:
In this thesis the problem of load sharing in distributed computer systems is investigated. Fundamental issues that need to be resolved in order to implement a load sharing scheme in a distributed system are identified and possible solutions suggested. A load sharing scheme has been designed and implemented on an existing Unix United system. The performance of this load sharing scheme is then measured for different types of programs. It is demonstrated that a load sharing scheme can be implemented on the Unix United systems using the existing mechanisms provided by the Newcastle Connection, and without making any significant changes to the existing software. It is concluded that under some circumstances a substantial improvement in the system performance can be obtained by the load sharing scheme.
APA, Harvard, Vancouver, ISO, and other styles
25

Colley, B. A. "Computer simulation of marine traffic systems." Thesis, University of Plymouth, 1985. http://hdl.handle.net/10026.1/2223.

Full text
Abstract:
A computer model was constructed that allowed two vessels involved in a possible collision situation to take collision avoidance action following the "International Regulations for Preventing Collisions at Sea". The mariners' actions were modelled by the concepts of the domain and the RDRR (Range to Domain/Range-rate). The domain was used to determine if a vessel was threatening and the RDRR to determine the time at which a vessel should give way to a threatening target. Each vessel in the simulation had four domains corresponding to the type of encounter in which the vessel was involved. Values for the time at which a vessel manoeuvres and the domain radii were determined from an analysis of high-quality cine films of the radar at H.M. Coastguard at St. Margaret's Bay, Dover. Information was also taken from simulator exercises set up on the Polytechnic radar simulator. The two-ship encounter was then developed into the multi-ship encounter, eventually able to model over 400 vessels over a two-day period through a computer representation of the Dover Strait. A further development included a computer graphical representation of a radar simulator running in real-time, which allowed a mariner to navigate one of the vessels using computer control. A validation of the computer model was undertaken by comparing the simulated results with those observed from the cine films. Following the validation, several examples of the computer model being used as a decision support system were included.
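The RDRR give-way criterion described above can be illustrated with a small sketch. This is a hypothetical rendering of the concept only; the variable names, units (nautical miles and knots), and the threshold test are assumptions, not the thesis's actual model:

```python
def should_give_way(range_to_target, range_rate, domain_radius, threshold):
    """Decide whether a vessel should give way under an RDRR-style test.

    RDRR (Range to Domain / Range-rate) estimates the time remaining
    before a closing target penetrates the ship's domain; the vessel
    gives way when that time falls below a chosen threshold (hours).
    range_rate < 0 means the range to the target is decreasing.
    """
    closing_rate = -range_rate
    if closing_rate <= 0:
        return False  # target is opening or steady: no give-way action
    range_to_domain = max(range_to_target - domain_radius, 0.0)
    time_to_domain = range_to_domain / closing_rate
    return time_to_domain < threshold
```

For example, a target 10 nm away closing at 5 kn against a 2 nm domain reaches the domain edge in 1.6 hours, so a 2-hour threshold triggers a give-way manoeuvre while a 1-hour threshold does not.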
APA, Harvard, Vancouver, ISO, and other styles
26

Goldthorp, Mark. "Computer simulation of biomass energy systems." Thesis, University of Bristol, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Wolff, Josephine Charlotte Paulina. "Classes of defense for computer systems." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99535.

Full text
Abstract:
Thesis: Ph. D. in Technology, Management and Policy, Massachusetts Institute of Technology, Engineering Systems Division, Technology, Management, and Policy Program, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 175-181).
Computer security incidents often involve attackers acquiring a complex sequence of escalating capabilities and executing those capabilities across a range of different intermediary actors in order to achieve their ultimate malicious goals. However, popular media accounts of these incidents, as well as the ensuing litigation and policy proposals, tend to focus on a very narrow defensive landscape, primarily individual centralized defenders who control some of the capabilities exploited in the earliest stages of these incidents. This thesis proposes two complementary frameworks for defenses against computer security breaches -- one oriented around restricting the computer-based access capabilities that adversaries use to perpetrate those breaches and another focused on limiting the harm that those adversaries ultimately inflict on their victims. Drawing on case studies of actual security incidents, as well as the past decade of security incident data at MIT, it analyzes security roles and defense design patterns related to these broad classes of defense for application designers, administrators, and policy-makers. Application designers are well poised to undertake access defense by defining and distinguishing malicious and legitimate forms of activity in the context of their respective applications. Policy-makers can implement some harm limitation defenses by monitoring and regulating money flows, and also play an important role in collecting the data needed to expand understanding of the sequence of events that lead up to successful security incidents and inform which actors can and should effectively intervene as defenders. Organizations and administrators, meanwhile, occupy an in-between defensive role that spans both access and harm in addressing digital harms, or harms that are directly inflicted via computer capabilities, through restrictions on crucial intermediate harms and outbound information flows. 
The comparative case analysis ultimately points to a need to broaden defensive roles and responsibilities beyond centralized access defense and defenders, as well as the visibility challenges compounding externalities for defenders who may lack not only the incentives to intervene in such incidents but also the necessary knowledge to figure out how best to intervene.
by Josephine Wolff.
Ph. D. in Technology, Management and Policy
APA, Harvard, Vancouver, ISO, and other styles
28

Jazaa, Abid Thyab. "Computer-aided management of ADA systems." Thesis, Keele University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Wu, Zhenyu. "Discovering New Vulnerabilities in Computer Systems." W&M ScholarWorks, 2012. https://scholarworks.wm.edu/etd/1539623356.

Full text
Abstract:
Vulnerability research plays a key role in preventing and defending against malicious computer system exploitations. Driven by a multi-billion dollar underground economy, cyber criminals today tirelessly launch malicious exploitations, threatening every aspect of daily computing. To effectively protect computer systems from devastation, it is imperative to discover and mitigate vulnerabilities before they fall into the offensive parties' hands. This dissertation is dedicated to the research and discovery of new design and deployment vulnerabilities in three very different types of computer systems. The first vulnerability is found in automatic malicious binary (malware) detection systems. Binary analysis, a central piece of technology for malware detection, is divided into two classes, static analysis and dynamic analysis. State-of-the-art detection systems employ both classes of analyses to complement each other's strengths and weaknesses for improved detection results. However, we found that the commonly seen design patterns may suffer from evasion attacks. We demonstrate attacks on the vulnerabilities by designing and implementing a novel binary obfuscation technique. The second vulnerability is located in the design of server system power management. Technological advancements have improved server system power efficiency and facilitated energy-proportional computing. However, the change of power profile makes power consumption subject to unaudited influences of remote parties, leaving server systems vulnerable to energy-targeted malicious exploits. We demonstrate an energy-abusing attack on a standalone open Web server, measure the extent of the damage, and present a preliminary defense strategy. The third vulnerability is discovered in the application of server virtualization technologies. Server virtualization greatly benefits today's data centers and brings pervasive cloud computing a step closer to the general public.
However, the practice of physical co-hosting virtual machines with different security privileges risks introducing covert channels that seriously threaten the information security in the cloud. We study the construction of high-bandwidth covert channels via the memory sub-system, and show a practical exploit of cross-virtual-machine covert channels on virtualized x86 platforms.
APA, Harvard, Vancouver, ISO, and other styles
30

Grossman, Michael D. "A computer simulation of processor scheduling in UNIX 4.2BSD /." Online version of thesis, 1987. http://hdl.handle.net/1850/10295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Sobel, Ann E. Kelley. "Modular verification of concurrent systems /." The Ohio State University, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487267546983528.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Philippou, Anna. "Reasoning about systems with evolving structure." Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/93676/.

Full text
Abstract:
This thesis is concerned with the specification and verification of mobile systems, i.e. systems with dynamically-evolving communication topologies. The expressiveness and applicability of the πυ-calculus, an extension of the π-calculus with first-order data, is investigated for describing and reasoning about mobile systems. The theory of confluence and determinacy in the πυ-calculus is studied, with emphasis on results and techniques which facilitate process verification. The utility of the calculus for giving descriptions which are precise, natural and amenable to rigorous analysis is illustrated in three applications. First, the behaviour of a distributed protocol is analysed. The use of a mobile calculus makes it possible to capture important intuitions concerning the behaviour of the algorithm; the theory of confluence plays a central role in its correctness proof. Secondly, an analysis of concurrent operations on a dynamic search structure, the B-tree, is carried out. This exploits results obtained concerning a notion of partial confluence by whose use classes of systems in which interaction between components is of a certain disciplined kind may be analysed. Finally, the πυ-calculus is used to give a semantic definition for a concurrent-object programming language and it is shown how this definition can be used as a basis for reasoning about systems prescribed by programs. Syntactic conditions on programs are isolated and shown to guarantee determinacy. Transformation rules which increase the scope for concurrent activity within programs without changing their observable behaviour are given and their soundness proved.
APA, Harvard, Vancouver, ISO, and other styles
33

Finney, James. "Autocoding methods for networked embedded systems." Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/36892/.

Full text
Abstract:
The volume and complexity of software is increasing, presenting developers with an ever-increasing challenge to deliver a system within the agreed timescale and budget [1]. With the use of Computer-Aided Software Engineering (CASE) tools for requirements management, component design, and software validation, the risks to the project can be reduced. This project focuses on autocoding CASE tools, the methods used by such tools to generate the code, and the features these tools provide the user. The Extensible Stylesheet Language Transformation (XSLT) based autocoding method used by Rapicore in their NetGen embedded network design tool was known to have a number of issues and limitations. The aim of the research was to identify these issues and develop an innovative solution that would support current and future autocoding requirements. Using the literature review and a number of practical projects, the issues with the XSLT-based method were identified. These issues were used to define the requirements against which a more appropriate autocoding method was researched and developed. A more powerful language was researched and selected, and with this language a prototype autocoding platform was designed, developed, validated, and evaluated. The work concludes that the innovative use and integration of programmer-level Extensible Markup Language (XML) code descriptions and PHP scripting has provided Rapicore with a powerful and flexible autocoding platform to support current and future autocoding application requirements of any size and complexity.
APA, Harvard, Vancouver, ISO, and other styles
34

Johansson, Oscar, and Max Forsman. "Shared computer systems and groupware development : Escaping the personal computer paradigm." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-75953.

Full text
Abstract:
For the majority of the computer's existence, we humans have interacted with computers in a similar way, usually with a strict one-to-one relationship between user and machine. This is reflected in the design of most computers, operating systems and user applications on the market today, which are typically intended to be operated by a single user. When computers are used for teamwork and cooperation, this design philosophy can be restricting and problematic. This paper investigates the development of shared software intended for multiple users and the impact of the single-user bias in this context. A prototype software system was developed in order to evaluate different development methods for shared applications and discover potential challenges and limitations with this kind of software. It was found that the development of applications for multiple users can be severely limited by the target operating system and hardware platform. The authors conclude that new platforms are required to develop shared software more efficiently. These platforms should be tailored to provide robust support for multiple concurrent users. This work was carried out together with SAAB Air Traffic Management in Växjö, Sweden and is a bachelor's thesis in computer engineering at Linnaeus University.
APA, Harvard, Vancouver, ISO, and other styles
35

Purdin, Titus Douglas Mahlon. "ENHANCING FILE AVAILABILITY IN DISTRIBUTED SYSTEMS (THE SAGUARO FILE SYSTEM)." Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184161.

Full text
Abstract:
This dissertation describes the design and implementation of the file system component of the Saguaro operating system for computers connected by a local-area network. Systems constructed on such an architecture have the potential advantage of increased file availability due to their inherent redundancy. In Saguaro, this advantage is made available through two mechanisms that support semi-automatic file replication and access: reproduction sets and metafiles. A reproduction set is a collection of files that the system attempts to keep identical on a "best effort" basis, relying on the user to handle unusual situations that may arise. A metafile is a special file that contains symbolic path names of other files; when a metafile is opened, the system selects an available constituent file and opens it instead. These mechanisms are especially appropriate for situations that do not require guaranteed consistency or a large number of copies. Other interesting aspects of the Saguaro file system design are also described. The logical file system forms a single tree, yet any file can be placed in any of the physical file systems. This organization allows the creation of a logical association among files that is quite different from their physical association. In addition, the broken path algorithm is described. This algorithm makes it possible to bypass elements in a path name that are on inaccessible physical file systems. Thus, any accessible file can be made available, regardless of the availability of directories in its path. Details are provided on the implementation of the Saguaro file system. The servers of which the system is composed are described individually and a comprehensive operational example is supplied to illustrate their interaction. The underlying data structures of the file system are presented. The virtual roots, which contain information used by the broken path algorithm, are the most novel of these.
Finally, an implementation of reproduction sets and metafiles for interconnected networks running Berkeley UNIX is described. This implementation demonstrates the broad applicability of these mechanisms. It also provides insight into the way in which mechanisms to facilitate user controlled replication of files can be inexpensively added to existing file systems. Performance measurements for this implementation are also presented.
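The metafile mechanism summarised in this abstract lends itself to a simple sketch: open whichever constituent file is currently reachable. This is a hypothetical illustration only; the one-path-per-line metafile format and the function name are assumptions, not Saguaro's actual implementation:

```python
def open_metafile(metafile_path):
    """Open the first available constituent file listed in a metafile.

    A metafile (per the abstract) holds symbolic path names of replica
    files; opening it resolves to whichever constituent is accessible.
    Here we assume one path per line.
    """
    with open(metafile_path) as mf:
        candidates = [line.strip() for line in mf if line.strip()]
    for path in candidates:
        try:
            return open(path)  # handle to the first reachable replica
        except OSError:
            continue  # replica unavailable (e.g. host down); try the next
    raise FileNotFoundError("no constituent of %s is available" % metafile_path)
```

The try/continue loop is what gives the "semi-automatic" availability: a failed replica is silently skipped rather than surfaced to the user.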
APA, Harvard, Vancouver, ISO, and other styles
36

Lurain, Sher. "Networking security : risk assessment of information systems /." Online version of thesis, 1990. http://hdl.handle.net/1850/10587.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Pinnix, Justin Everett. "Operating System Kernel for All Real Time Systems." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010310-181302.

Full text
Abstract:

PINNIX, JUSTIN EVERETT. Operating System Kernel for All Real Time Systems. (Under the direction of Robert J. Fornaro and Vicki E. Jones.)

This document describes the requirements, design, and implementation of OSKAR, a hard real time operating system for Intel Pentium compatible personal computers. OSKAR provides rate monotonic scheduling, fixed and dynamic priority scheduling, semaphores, message passing, priority ceiling protocols, TCP/IP networking, and global time synchronization using the Global Positioning System (GPS). It is intended to provide researchers a test bed for real time projects that is inexpensive, simple to understand, and easy to extend.

The design of the system is described with special emphasis on design tradeoffs made to improve real time requirements compliance. The implementation is covered in detail at the source code level. Experiments to qualify functionality and obtain performance profiles are included and the results explained.
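Of the services listed in this abstract, rate monotonic scheduling is the most classical: priorities are assigned inversely to task periods, and the Liu and Layland utilization bound gives a sufficient schedulability test. The following is an illustrative sketch of those two textbook rules, not OSKAR source code; the task representation is an assumption:

```python
def rm_priorities(tasks):
    """Rate-monotonic priority assignment: shorter period => higher priority.

    `tasks` is a list of (name, period, wcet) tuples; returns task names
    ordered from highest to lowest priority.
    """
    return [name for name, period, _ in sorted(tasks, key=lambda t: t[1])]

def rm_schedulable(tasks):
    """Liu & Layland sufficient test: sum(C_i/T_i) <= n * (2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(wcet / period for _, period, wcet in tasks)
    return utilization <= n * (2 ** (1.0 / n) - 1)
```

Note the bound is sufficient but not necessary: a task set exceeding it may still be schedulable, which an exact response-time analysis would detect.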

APA, Harvard, Vancouver, ISO, and other styles
38

Lever, K. E. "Identifying and mitigating security risks in multi-level systems-of-systems environments." Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/8707/.

Full text
Abstract:
In recent years, organisations, governments, and cities have taken advantage of the many benefits and automated processes Information and Communication Technology (ICT) offers, evolving their existing systems and infrastructures into highly connected and complex Systems-of-Systems (SoS). These infrastructures endeavour to increase robustness and offer some resilience against single points of failure. The Internet, Wireless Sensor Networks, the Internet of Things, critical infrastructures, the human body, etc., can all be broadly categorised as SoS, as they encompass a wide range of differing systems that collaborate to fulfil objectives that the distinct systems could not fulfil on their own. ICT constructed SoS face the same dangers, limitations, and challenges as those of traditional cyber based networks, and while monitoring the security of small networks can be difficult, the dynamic nature, size, and complexity of SoS makes securing these infrastructures more taxing. Solutions that attempt to identify risks, vulnerabilities, and model the topologies of SoS have failed to evolve at the same pace as SoS adoption. This has resulted in attacks against these infrastructures gaining prevalence, as unidentified vulnerabilities and exploits provide unguarded opportunities for attackers to exploit. In addition, the new collaborative relations introduce new cyber interdependencies, unforeseen cascading failures, and increase complexity. This thesis presents an innovative approach to identifying, mitigating risks, and securing SoS environments. Our security framework incorporates a number of novel techniques, which allows us to calculate the security level of the entire SoS infrastructure using vulnerability analysis, node property aspects, topology data, and other factors, and to improve and mitigate risks without adding additional resources into the SoS infrastructure. 
Other risk factors we examine include risks associated with different properties, and the likelihood of violating access control requirements. Extending the principles of the framework, we also apply the approach to multi-level SoS, in order to improve both SoS security and the overall robustness of the network. In addition, the identified risks, vulnerabilities, and interdependent links are modelled by extending network modelling and attack graph generation methods. The proposed SeCurity Risk Analysis and Mitigation Framework and principal techniques have been researched, developed, implemented, and then evaluated via numerous experiments and case studies. The subsequent results ascertain that the framework can successfully observe SoS and produce an accurate security level for the entire SoS in all instances, visualising identified vulnerabilities, interdependencies, high-risk nodes, data access violations, and security grades in a series of reports and undirected graphs. The framework's evolutionary approach to mitigating risks, and its robustness function, which can determine the appropriateness of the SoS, revealed promising results, with the framework and principal techniques identifying SoS topologies and quantifying their associated security levels, distinguishing SoS that are optimally structured (in terms of communication security) from those that cannot be evolved because the applied processes would negatively impede the security and robustness of the SoS. Likewise, the framework is capable, via evolvement methods, of identifying SoS communication configurations that improve communication security and assure data as it traverses across an unsecure and unencrypted SoS, reporting enhanced SoS configurations that mitigate risks in a series of undirected graphs and reports that visualise and detail the SoS topology and its vulnerabilities.
These reported candidates and optimal solutions improve the security and SoS robustness, and will support the maintenance of acceptable and tolerable low centrality factors, should these recommended configurations be applied to the evaluated SoS infrastructure.
APA, Harvard, Vancouver, ISO, and other styles
39

Thomas, Sam Lloyd. "Backdoor detection systems for embedded devices." Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8365/.

Full text
Abstract:
A system is said to contain a backdoor when it intentionally includes a means to trigger the execution of functionality that serves to subvert its expected security. Unfortunately, such constructs are pervasive in software and systems today, particularly in the firmware of commodity embedded systems and “Internet of Things” devices. The work presented in this thesis concerns itself with the problem of detecting backdoor-like constructs, specifically those present in embedded device firmware, which, as we show, presents additional challenges in devising detection methodologies. The term “backdoor”, while used throughout the academic literature, by industry, and in the media, lacks a rigorous definition, which exacerbates the challenges in their detection. To this end, we present such a definition, as well as a framework, which serves as a basis for their discovery, devising new detection techniques and evaluating the current state-of-the-art. Further, we present two backdoor detection methodologies, as well as corresponding tools which implement those approaches. Both of these methods serve to automate many of the currently manual aspects of backdoor identification and discovery. And, in both cases, we demonstrate that our approaches are capable of analysing device firmware at scale and can be used to discover previously undocumented real-world backdoors.
APA, Harvard, Vancouver, ISO, and other styles
40

Forbes, Harold C. "Operating system principles and constructs for dynamic multi-processor real-time control systems." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/8165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Ardeshir-Larijani, Ebrahim. "Automated equivalence checking of quantum information systems." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/63940/.

Full text
Abstract:
Quantum technologies have progressed beyond the laboratory setting and are beginning to make an impact on industrial development. The construction of practical, general purpose quantum computers has been challenging, to say the least. But quantum cryptographic and communication devices have been available in the commercial marketplace for a few years. Quantum networks have been built in various cities around the world, and plans are afoot to launch a dedicated satellite for quantum communication. Such new technologies demand rigorous analysis and verification before they can be trusted in safety and security-critical applications. In this thesis we investigate the theory and practice of equivalence checking of quantum information systems. We present a tool, Quantum Equivalence Checker (QEC), which uses a concurrent language for describing quantum systems, and performs verification by checking equivalence between specification and implementation. For our process algebraic language CCSq, we define an operational semantics and a superoperator semantics. While in general, simulation of quantum systems using current computing technology is infeasible, we restrict ourselves to the stabilizer formalism, in which there are efficient simulation algorithms and representation of quantum states. By using the stabilizer representation of quantum states we introduce various algorithms for testing equality of stabilizer states. In this thesis, we consider concurrent quantum protocols that behave functionally in the sense of computing a deterministic input-output relation for all interleavings of a concurrent system. Crucially, these input-output relations can be abstracted by superoperators, enabling us to take advantage of linearity. This allows us to analyse the behaviour of protocols with arbitrary input, by simulating their operation on a finite basis set consisting of stabilizer states. 
We present algorithms for checking the functionality and equivalence of quantum protocols. Despite the limitations of the stabilizer formalism, and of the range of protocols that can be analysed using equivalence checking, QEC is applied to specify and verify a variety of interesting and practical quantum protocols, from quantum communication and quantum cryptography to quantum error correction and quantum fault-tolerant computation; for each protocol, different sequential and concurrent models are defined in CCSq. We also explain the implementation details of the QEC tool and report on the experimental results produced by using it on the verification of a number of case studies.
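The equivalence-checking principle described above, comparing the action of two implementations on a spanning set of stabilizer states, can be illustrated with a toy single-qubit sketch. This is only an illustration of the idea, not the QEC tool or CCSq: two single-qubit circuits are equal as superoperators exactly when they act identically on the density matrices of the six stabilizer states, which span the space of Hermitian 2x2 matrices.

```python
import math

# Single-qubit gates as 2x2 complex matrices.
H = [[1/math.sqrt(2), 1/math.sqrt(2)], [1/math.sqrt(2), -1/math.sqrt(2)]]
Z = [[1, 0], [0, -1]]
X = [[0, 1], [1, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def apply(U, rho):          # U rho U†
    return matmul(matmul(U, rho), dagger(U))

def density(psi):           # |psi><psi|
    return [[psi[i] * psi[j].conjugate() for j in range(2)] for i in range(2)]

s = 1/math.sqrt(2)
STABILIZER_STATES = [       # |0>, |1>, |+>, |->, |i+>, |i->
    [1, 0], [0, 1], [s, s], [s, -s], [s, 1j*s], [s, -1j*s]]

def equivalent(U, V, tol=1e-9):
    """Compare the two superoperators on every stabilizer basis state."""
    for psi in STABILIZER_STATES:
        rho = density(psi)
        a, b = apply(U, rho), apply(V, rho)
        if any(abs(a[i][j] - b[i][j]) > tol for i in range(2) for j in range(2)):
            return False
    return True

HZH = matmul(H, matmul(Z, H))
print(equivalent(HZH, X))   # prints True: HZH = X is a textbook identity
```

Linearity is what makes this sound: since the stabilizer density matrices span the input space, agreement on them implies agreement on all inputs.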
APA, Harvard, Vancouver, ISO, and other styles
42

Umeh, Njideka Adaku. "Security architecture methodology for large net-centric systems." Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.mst.edu/thesis/Umeh_09007dcc8049b3f0.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Missouri--Rolla, 2007.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed December 6, 2007) Includes bibliographical references (p. 60-63).
APA, Harvard, Vancouver, ISO, and other styles
43

Jiang, Junyi. "Optical wireless communication systems." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/387239/.

Full text
Abstract:
In recent years, Optical Wireless (OW) communication techniques have attracted substantial attention as a benefit of their abundant spectral resources in the optical domain, a potential solution for satisfying the ever-increasing demand for wireless capacity in the conventional Radio Frequency (RF) band. Motivated by the emerging techniques and applications of OW communication, the Institute of Electrical and Electronics Engineers (IEEE) released the IEEE 802.15.7 standard for short-range optical wireless communications, which categorises the Physical layer (PHY) of OW communication into three candidate solutions according to their advantages in different applications and environments: 1) Physical-layer I (PHY I): Free Space Optical (FSO) communication employs high-intensity Light Emitting Diodes (LEDs) or Laser Diodes (LDs) as its transmitter. 2) Physical-layer II (PHY II) uses cost-effective, low-power directional white LEDs for the dual function of illumination and communication. 3) Physical-layer III (PHY III) relies on the so-called Colour-Shift Keying (CSK) modulation scheme for supporting high-rate communication. Our investigations can be classified into three major categories, namely Optical Orthogonal Frequency Division Multiplexing (OFDM) based Multiple-Input Multiple-Output (MIMO) techniques for FSO communications in the context of PHY I, video streaming in PHY II, and the analysis and design of CSK for PHY III. To be more explicit, in Chapter 2 we first construct a novel ACO-OFDM based MIMO system and investigate its performance under various FSO turbulence channel conditions. However, MIMO systems require multiple optical chains, so their power consumption and hardware costs become substantial. We therefore introduce the concept of Aperture Selection (ApS) to mitigate these problems, with the aid of a simple yet efficient ApS algorithm for assisting our ACO-OFDM based MIMO system.
Since the channel conditions of indoor Visible Light Communication (VLC) environments are more benign than the FSO channels of Chapter 2, directional white LEDs are used to create an “attocell” in Chapter 3. More specifically, we investigate video streaming in a multi-Mobile-Terminal (MT) indoor VLC system relying on Unity Frequency Reuse (UFR), as well as on Higher Frequency Reuse Factor based Transmission (HFRFT) and on Vectored Transmission (VT) schemes. We minimise the distortion of video streaming while satisfying the rate constraints as well as the optical constraints of all the MTs. In Chapter 4 we analyse the performance of CSK relying both on joint Maximum Likelihood (ML) Hard-Detection (HD) and on Maximum A Posteriori (MAP) criterion-based Soft-Detection (SD) of CSK. Finally, we conceive both two-stage and three-stage concatenated iterative receivers capable of achieving a substantial iteration gain, leading to a vanishingly low BER.
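The ML hard detection mentioned above can be sketched as a nearest-neighbour decision in colour space: under an additive-white-Gaussian-noise assumption, the ML detector picks the constellation point closest in Euclidean distance to the received intensity vector. The 4-point constellation below is purely illustrative, not the IEEE 802.15.7 colour mapping.

```python
import math

CONSTELLATION = {           # symbol -> (red, green, blue) intensities
    0b00: (1.0, 0.0, 0.0),
    0b01: (0.0, 1.0, 0.0),
    0b10: (0.0, 0.0, 1.0),
    0b11: (1/3, 1/3, 1/3),
}

def ml_detect(received):
    """Return the symbol whose colour point is nearest to `received`."""
    return min(CONSTELLATION,
               key=lambda sym: math.dist(CONSTELLATION[sym], received))

print(ml_detect((0.9, 0.1, 0.05)))   # noisy red -> symbol 0
```

Soft detection (the MAP receivers of Chapter 4) would instead output a probability, or log-likelihood ratio, per bit rather than a hard symbol decision.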
APA, Harvard, Vancouver, ISO, and other styles
44

Mayo, Maldonado Jonathan. "Switched linear differential systems." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/383678/.

Full text
Abstract:
In this thesis we study systems with switching dynamics and propose new mathematical tools to analyse them. We show that the postulation of a global state space structure in current frameworks is restrictive and leads to potential difficulties that limit its use in the analysis of new emerging applications. In order to overcome such shortcomings, we reformulate the foundations of the study of switched systems by developing a trajectory-based approach, in which we allow the use of the models that are most suitable for the analysis of each system. These models can involve sets of higher-order differential equations whose state spaces do not necessarily coincide. Based on this new approach, we first study closed switched systems and provide sufficient conditions for stability, based on LMIs, using the concept of multiple higher-order Lyapunov functions. We also study the role of positive-realness in the stability of bimodal systems and introduce the concept of positive-real completion. Furthermore, we study open switched systems by developing a dissipativity theory. We give necessary and sufficient conditions for dissipativity in terms of LMIs constructed from the coefficient matrices of the differential equations describing the modes. The relationship between dissipativity and stability is also discussed. Finally, we study the dynamics of energy distribution networks. We develop parsimonious models that deal effectively with the varying complexity of the network and the inherent switching phenomena induced by power converters. We also present a solution to instability problems caused by devices with negative impedance characteristics, such as constant power loads, using tools developed in our framework.
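For readers unfamiliar with LMI-based stability tests for switched systems, the classical first-order, common-quadratic version of the kind of condition referred to above reads (the thesis generalises this to higher-order differential descriptions and multiple Lyapunov functions):

```latex
V(x) = x^{\top} P x, \qquad P = P^{\top} \succ 0, \qquad
A_i^{\top} P + P A_i \prec 0 \quad \text{for every mode } i,
```

which guarantees exponential stability of $\dot{x} = A_{\sigma(t)} x$ under arbitrary switching, since the single quadratic function $V$ decreases along the trajectories of every mode.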
APA, Harvard, Vancouver, ISO, and other styles
45

Nofal, Samer. "Algorithms for argument systems." Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/12173/.

Full text
Abstract:
Argument systems are computational models that enable an artificially intelligent agent to reason via argumentation. Basically, the computations in argument systems can be viewed as search problems. In general, for a wide range of such problems, existing algorithms lack five important features. Firstly, there is no comprehensive study that shows which algorithm among the existing ones is the most efficient at solving a particular problem. Secondly, there is no work that establishes the use of cost-effective heuristics leading to more efficient algorithms. Thirdly, mechanisms for pruning the search space are understudied, and hence further pruning techniques might be neglected. Fourthly, diverse decision problems, for extended models of argument systems, are left without dedicated algorithms fine-tuned to the specific requirements of the respective extended model. Fifthly, some existing algorithms are presented at a high level that leaves some aspects of the computations unspecified, and therefore implementations are rendered open to different interpretations. The work presented in this thesis tries to address all these concerns. Concisely, the presented work is centred around a widely studied view of what computationally defines an argument system. According to this view, an argument system is a pair: a set of abstract arguments and a binary relation that captures the conflicts between arguments. To resolve an instance of an argument system, the acceptable arguments must be decided according to a set of criteria that collectively define the argumentation semantics. For different motivations there are various argumentation semantics. Equally, several proposals in the literature present extended models that stretch the two basic components of an argument system, usually by incorporating more elements and/or broadening the nature of the existing components.
This work designs algorithms that solve decision problems in the basic form of argument systems as well as in some extended models. Likewise, new algorithms are developed that deal with different argumentation semantics. We evaluate our algorithms experimentally against existing algorithms; the results strongly indicate that the new algorithms are superior with respect to running time.
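As a concrete instance of the semantics-driven computations discussed above, one widely used semantics, the grounded extension of an abstract argument system (a set of arguments plus an attack relation), can be computed as the least fixed point of the characteristic function. This is a minimal sketch of the standard construction, not one of the thesis's optimised algorithms:

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function.

    arguments: set of argument names; attacks: set of (attacker, target) pairs.
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    while True:
        # An argument is acceptable w.r.t. `accepted` if every one of its
        # attackers is itself attacked by an accepted argument (defended).
        new = {a for a in arguments
               if all(any((d, b) in attacks for d in accepted)
                      for b in attackers[a])}
        if new == accepted:
            return accepted
        accepted = new

# a attacks b, b attacks c: a is unattacked, and c is defended by a.
print(sorted(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})))
# prints ['a', 'c']
```

The first iteration collects the unattacked arguments; each later iteration adds the arguments they defend, until nothing changes. On a finite system this Kleene iteration always terminates.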
APA, Harvard, Vancouver, ISO, and other styles
46

Rogers, David T. "A framework for dynamic subversion." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FRogers.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 2003.
Thesis advisor(s): Cynthia E. Irvine, Roger R. Schell. Includes bibliographical references (p. 105-107). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
47

Van, Riet F. A. "LF : a language for reliable embedded systems." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52322.

Full text
Abstract:
Thesis (MSc)--University of Stellenbosch, 2001.
ENGLISH ABSTRACT: Computer-aided verification techniques, such as model checking, are often considered essential to produce highly reliable software systems. Modern model checkers generally require models to be written in CSP-like notations. Unfortunately, such systems are usually implemented using conventional imperative programming languages. Translating the one paradigm into the other is a difficult and error-prone process. If one were to program in a process-oriented language from the outset, the chasm between implementation and model could be bridged more readily. This would lead to more accurate models and ultimately more reliable software. This thesis covers the definition of a process-oriented language targeted specifically towards embedded systems, and the implementation of a suitable compiler and run-time system. The language, LF, is for the most part an extension of the language Joyce, which was defined by Brinch Hansen. Both LF and Joyce have features which I believe make them easier to use than other CSP-based languages such as occam. An example of this is a selective communication primitive which allows both input and output guards, something not supported in occam. The efficiency of the implementation is important. The language was therefore designed to be expressive, but constructs which are expensive to implement were avoided. Security, however, was the overriding consideration in the design of the language and run-time system. The compiler produces native code. Most other CSP-derived languages are either interpreted or execute as tasks on host operating systems; arguably this is because most implementations of CSP and derivations thereof are for academic purposes only. LF is intended to be an implementation language. The performance of the implementation is evaluated in terms of practical metrics such as the time needed to complete communication operations and the average time needed to service an interrupt.
AFRIKAANSE OPSOMMING (translated): Computer-aided verification techniques such as model checking are indispensable in the development of highly reliable software. In general, model-checking tools accept CSP-like notation as input. Most programs, however, are developed in more conventional imperative programming languages. Translating from the one paradigm to the other is a difficult process that leaves much room for error. If one programmed in a process-based language from the outset, the gap between model and program could be bridged more easily. This leads to more accurate models and ultimately to more reliable software. This thesis investigates the definition of a process-based language aimed at embedded software, and also discusses the implementation of a suitable compiler and run-time environment. The language, LF, is largely based on Joyce, which was developed by Brinch Hansen. Joyce, and in turn LF, improves on other CSP-related languages such as occam. An example of this is a selective communication primitive that supports the use of both input and output guards. Because an efficient implementation was pursued, the language was designed to be as expressive as possible without including constructs that are inefficient to implement. Security, however, was the overriding consideration in the design of the language and run-time environment. The compiler produces machine code, whereas most other implementations of CSP-like languages are interpreted or supported as processes on a suitable operating system; most CSP-like languages are used for academic purposes only. LF is designed above all as an implementation language. The evaluation of the system's performance was done in terms of practical measures such as the time needed for communication, as well as the average time needed for the handling of interrupts.
APA, Harvard, Vancouver, ISO, and other styles
48

Miles, R. K. "The Soft Systems Methodology : A practicable framework for computer systems analysis." Thesis, Lancaster University, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Frame, Charles E. "Personal computer and workstation operating systems tutorial." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA280132.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, March 1994.
Thesis advisor(s): Norman F. Schneidewind. "March 1994." Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
50

Goh, Han Chong. "Intrusion deception in defense of computer systems." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion.exe/07Mar%5FGoh.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles