Dissertations / Theses on the topic 'Critical applications'

Consult the top 50 dissertations and theses for your research on the topic 'Critical applications.'

1

Wu, Weihang. "Architectural Reasoning for Safety Critical Software Applications." Thesis, University of York, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.485150.

Full text
Abstract:
In recent years, there has been a substantial move towards architecture-based development for safety-critical software applications. Nevertheless, techniques for architectural design have not been developed to the extent necessary to guarantee the safety of these systems. In particular, current practices often focus upon architectural modelling without making the underlying design deliberation explicit. Although a number of protection mechanisms have been codified in both research and practice, there is little practical guidance on how to exploit them to address application-specific safety concerns. This thesis defines and demonstrates a coherent and effective approach to eliciting and formulating safety concerns, evaluating and mitigating safety concerns, and arguing about safety within the evolutionary architecting process. The elicitation and formulation of safety concerns are based upon the unification of goals and scenarios from both positive and negative perspectives. The evaluation of safety concerns is based upon the application of Object-Oriented Bayesian Belief Networks, and the mitigation space is informed by the evaluation results and the formulation of safety concerns. Safety argumentation of architectures is based upon the definition of a set of patterns of argument. Guidance on safety argument review is also provided. Through addressing safety concerns in the early system development lifecycle, there can be an increased level of design confidence in the architectures developed for safety-critical software applications. Evaluation of the approach is conducted through a number of academic and industrial case studies.
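The approach is described here only at abstract level; as a rough, hedged illustration of how a Bayesian-belief-network evaluation step of a safety concern can work, the Python sketch below scores one concern with a two-parent network and updates it with evidence. The variable names and probabilities are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch (not from the thesis): evaluating a safety concern with a
# tiny Bayesian belief network -- Failure -> Hazard <- MitigationFails.
# All probabilities below are illustrative assumptions.

from itertools import product

p_failure = 0.01     # prior: component failure
p_mit_fails = 0.05   # prior: protection mechanism fails on demand

# Conditional probability table P(hazard | failure, mitigation_fails)
p_hazard = {
    (True, True): 0.9,
    (True, False): 0.1,
    (False, True): 0.01,
    (False, False): 0.001,
}

def prior(var, value):
    p = {"failure": p_failure, "mit_fails": p_mit_fails}[var]
    return p if value else 1.0 - p

# Marginal probability of the hazard, by enumeration over the parents
marginal = sum(
    prior("failure", f) * prior("mit_fails", m) * p_hazard[(f, m)]
    for f, m in product([True, False], repeat=2)
)

# Posterior P(failure | hazard observed), via Bayes' theorem
joint_f = sum(
    prior("failure", True) * prior("mit_fails", m) * p_hazard[(True, m)]
    for m in [True, False]
)
posterior_failure = joint_f / marginal

print(f"P(hazard) = {marginal:.5f}")
print(f"P(failure | hazard) = {posterior_failure:.3f}")
```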
APA, Harvard, Vancouver, ISO, and other styles
2

Haylock, James Alexander. "Fault tolerant drives for safety critical applications." Thesis, University of Newcastle Upon Tyne, 1998. http://hdl.handle.net/10443/352.

Full text
Abstract:
The correct operation of adjustable speed drives, which form part of a larger system, is often essential to the operation of the system as a whole. In certain applications the failure of such a drive could result in a threat to human safety and these applications are termed 'safety critical'. The chance of a component failure resulting in non-operation of the drive can be dramatically reduced by adopting a fault tolerant design. A fault tolerant drive must continue to operate throughout the occurrence of any single point failure without undue disturbance to the power output. Thereafter the drive must be capable of producing rated output indefinitely in the presence of the fault. The work presented in this thesis shows that fault tolerance can be achieved without severe penalties in terms of cost or power-to-mass ratio. The design of a novel permanent magnet drive is presented and a 'proof of concept' demonstrator has been built, based on a 20 kW, 13000 RPM aircraft fuel pump specification. A novel current controller with near optimal transient performance is developed to enable precise shaping of the phase currents at high shaft speeds. The best operating regime for the machine is investigated to optimise the power-to-mass ratio of the drive. A list of the most likely electrical faults is considered. Some faults result in large fault currents and require rapid detection to prevent fault propagation. Several novel fault sensors are discussed. Fault detection and identification schemes are developed, including new schemes for rapid detection of turn-to-turn faults and power device short-circuit faults. Post-fault control schemes are described which enable the drive to continue to operate indefinitely in the presence of each fault. Finally, results show the initially healthy drive operating up to, through and beyond the introduction of each of the most serious faults.
APA, Harvard, Vancouver, ISO, and other styles
3

Kurd, Zeshan. "Artificial neural networks in safety-critical applications." Thesis, University of York, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Venugopalan, Vigneshwaran. "Supervisory wireless control for critical industrial applications." Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/8501/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rohrmair, Gordon Thomas. "Use of CSP to verify security-critical applications." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Schwager, Mac. "Towards verifiable adaptive control for safety critical applications." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32344.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005.
Includes bibliographical references (p. 97-101).
To be implementable in safety-critical applications, adaptive controllers must be shown to behave strictly according to predetermined specifications. This thesis presents two tools for verifying specifications relevant to practical direct-adaptive control systems. The first tool is derived from an asymptotic analysis of the error dynamics of a direct adaptive controller and uncertain linear plant. The analysis yields a so-called Reduced Linear Asymptotic System, which can be used for designing adaptive systems to meet transient specifications. The tool is demonstrated in two design examples from flight mechanics, and verified in numerical simulation. The second tool developed is an algorithm for direct-adaptive control of plants with magnitude saturation constraints on multiple inputs. The algorithm is a non-trivial extension of an existing technique for single-input systems with saturation. Boundedness of all signals is proved for initial conditions in a compact region. In addition, the notion of a class of multi-dimensional saturation functions is introduced. The saturation compensation technique is demonstrated in numerical simulation. Finally, these tools are applied to design a direct-adaptive controller for a realistic multi-input aircraft model to accomplish control reconfiguration in the case of unforeseen failure, damage, or disturbances. A novel control design for incorporating control allocation and reconfiguration is introduced. The adaptive system is shown in numerical simulation to have favorable transient qualities and to give a stable response with input saturation constraints.
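For readers unfamiliar with direct adaptive control, the Python sketch below simulates a textbook first-order model-reference adaptive controller; it is not the thesis's Reduced Linear Asymptotic System or its saturation compensation scheme, and all plant numbers and gains are illustrative assumptions.

```python
# Minimal MRAC sketch (illustrative, not the thesis's method), simulated with
# Euler integration.  Plant parameters a, b are "unknown" to the controller;
# only sign(b) is assumed known.  Saturation compensation is omitted.

a, b = 1.0, 3.0          # true (unknown) plant: x_dot = a*x + b*u
a_m = 4.0                # stable reference model: xm_dot = -a_m*xm + a_m*r
gamma = 2.0              # adaptation gain
dt, T = 1e-3, 10.0

x = xm = 0.0
k_x = k_r = 0.0          # adaptive feedback / feedforward gains
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0   # square-wave reference
    u = k_x * x + k_r * r                  # direct-adaptive control law
    e = x - xm                             # tracking error
    # Gradient-type adaptation laws (Lyapunov-based, sign(b) = +1)
    k_x -= gamma * x * e * dt
    k_r -= gamma * r * e * dt
    # Integrate plant and reference model
    x += (a * x + b * u) * dt
    xm += (-a_m * xm + a_m * r) * dt

print(f"final tracking error: {x - xm:.4f}")
print(f"adapted gains: k_x={k_x:.3f}, k_r={k_r:.3f}")
```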
by Mac Schwager.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
7

Furht, Borko, David Joseph, David Gluch, and John Parker. "OPEN SYSTEMS FOR TIME-CRITICAL APPLICATIONS IN TELEMETRY." International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613465.

Full text
Abstract:
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
In this paper we discuss the next generation of open real-time systems for time critical applications in telemetry. Traditionally, real-time computing has been a realm of proprietary systems, with real-time applications written in assembly language. With the escalating cost of software development and the need for porting real-time applications to state-of-the-art hardware without massive conversion efforts, there is a need for real-time applications to be portable so that they can be moved to newer hardware platforms easily. Therefore, the next generation of real-time systems will be based on open systems incorporating industry standards, which will reduce system cost and time to market, increase availability of software packages, increase ease-of-use, and facilitate system integration. The open real-time system strategy, presented in this paper, is based on hardware architectures using off-the-shelf microprocessors, Motorola 680X0 and 88X00 families, and the REAL/IX operating system, a fully preemptive real-time UNIX operating system, developed by MODCOMP.
APA, Harvard, Vancouver, ISO, and other styles
8

Bion, Julian Fleetwood. "Severity scoring and its applications in critical illness." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/47775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jiang, Xiaolin. "Wireless Communication Networks for Time-critical Industrial Applications." Licentiate thesis, KTH, Nätverk och systemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239019.

Full text
Abstract:
Wireless communication is of paramount importance to enable the vision of Industry 4.0. Compared to mobile communications, industrial communications pose demanding requirements in terms of ultra-low latency and high reliability. Currently, for the most time-critical industrial applications, there are no available wireless solutions satisfying these latency requirements. This thesis studies effective techniques to reduce the latency of time-critical industrial applications, especially from the Physical Layer (PHY) point of view. The thesis is organized in two main parts. In the first part, the available methods for low latency are surveyed and analyzed in terms of end-to-end latency. It is argued that the enabling techniques should be optimized together to reduce the end-to-end latency while satisfying other requirements such as reliability and throughput. Moreover, the realistic timing constraints of different PHY algorithms, hardware, and mechanisms are derived based on state-of-the-art wireless implementations. In the second part, a revised, optimized PHY structure is proposed to reduce the latency. It is shown, by both theoretical analysis and on-site experiments, that a PHY with just a short one-symbol preamble and dedicated packet detection and synchronization algorithms for short packets is robust to carrier frequency offsets and false alarms. The investigations of this thesis show that revising the PHY structure and parameters is effective in reducing the packet transmission time, and further improves the latency performance of wireless communication networks for time-critical industrial applications. In future work, the PHY results of this thesis will be included in the investigation of the Medium Access Control (MAC) layer for industrial wireless communications with very low latencies.
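To make the preamble-shortening argument concrete, the small sketch below computes a rough per-packet airtime from a preamble length and payload size; the OFDM-style numbers are illustrative assumptions and do not correspond to the thesis's PHY design.

```python
# Minimal sketch (illustrative numbers, not the thesis's PHY):
# packet transmission time = preamble duration + payload symbols * symbol time.

def airtime_us(preamble_symbols, payload_bits, bits_per_symbol=48,
               symbol_time_us=4.0):
    """Rough OFDM-style airtime for one packet, in microseconds."""
    payload_symbols = -(-payload_bits // bits_per_symbol)   # ceiling division
    return (preamble_symbols + payload_symbols) * symbol_time_us

short_packet_bits = 200      # e.g. a small sensor/actuator payload
print("four-symbol preamble:", airtime_us(4, short_packet_bits), "us")
print("one-symbol preamble: ", airtime_us(1, short_packet_bits), "us")
```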


APA, Harvard, Vancouver, ISO, and other styles
10

Lu, Yu. "Probabilistic verification of satellite systems for mission critical applications." Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/7586/.

Full text
Abstract:
In this thesis, we present a quantitative approach using probabilistic verification techniques for the analysis of reliability, availability, maintainability, and safety (RAMS) properties of satellite systems. The subject of our research is satellites used in mission critical industrial applications. A strong case for using probabilistic model checking to support RAMS analysis of satellite systems is made by our verification results. This study is intended to build a foundation to help reliability engineers with a basic background in model checking to apply probabilistic model checking to small satellite systems. We make two major contributions. One of these is the approach of RAMS analysis to satellite systems. In the past, RAMS analysis has been extensively applied to the field of electrical and electronics engineering. It allows system designers and reliability engineers to predict the likelihood of failures from the indication of historical or current operational data. There is a high potential for the application of RAMS analysis in the field of space science and engineering. However, there is a lack of standardisation and suitable procedures for the correct study of RAMS characteristics for satellite systems. This thesis considers the promising application of RAMS analysis to the case of satellite design, use, and maintenance, focusing on its system segments. Data collection and verification procedures are discussed, and a number of considerations are also presented on how to predict the probability of failure. Our second contribution is leveraging the power of probabilistic model checking to analyse satellite systems. We present techniques for analysing satellite systems that differ from the more common quantitative approaches based on traditional simulation and testing. These techniques have not been applied in this context before. We present the use of probabilistic techniques via a suite of detailed examples, together with their analysis. Our presentation is done in an incremental manner: in terms of complexity of application domains and system models, and a detailed PRISM model of each scenario. We also provide results from practical work together with a discussion about future improvements.
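The abstract does not reproduce any of the PRISM models; as a loose, hedged illustration of the kind of RAMS-style property a probabilistic model checker evaluates, the sketch below computes the probability of reaching a failed state within k steps on a small discrete-time Markov chain. The states and transition probabilities are illustrative assumptions, not data from the thesis.

```python
# Minimal sketch (not the thesis's PRISM models): transient analysis of a DTMC
# for a satellite subsystem -- P(reach "failed" within k steps).

import numpy as np

# States: 0 = operational, 1 = degraded, 2 = failed (absorbing)
P = np.array([
    [0.990, 0.008, 0.002],
    [0.050, 0.900, 0.050],
    [0.000, 0.000, 1.000],
])

initial = np.array([1.0, 0.0, 0.0])   # start fully operational

def prob_failed_within(k_steps):
    """'failed' is absorbing, so this is the mass in state 2 after k steps."""
    return (initial @ np.linalg.matrix_power(P, k_steps))[2]

for k in (10, 100, 1000):
    print(f"P(failure within {k} steps) = {prob_failed_within(k):.4f}")
```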
APA, Harvard, Vancouver, ISO, and other styles
11

Nicholson, Richard. "Radiation sensor interface ASSP designed for safety critical applications." Thesis, Lancaster University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.497766.

Full text
Abstract:
The aim of this research is to advance the state-of-the-art in instrumentation for safety-critical radiation monitoring through the fusion of a novel high-reliability smart sensor system methodology with the design of a novel ASSP integrated circuit. The sensor element for the demonstrator system has been designed by the National Radiological Protection Board (NRPB) and the National Physical Laboratory (NPL), in collaboration with the project sponsor, BNFL plc.
APA, Harvard, Vancouver, ISO, and other styles
12

Ben, Tahayekt Ben Tahaikt Chaimaa. "A secure user authentication scheme for critical mobile applications." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-34845.

Full text
Abstract:
Smartphones have facilitated tasks in the private and work life of their users. In business, employees often must manage sensitive data that unauthorised people cannot access, so some form of user authentication needs to be performed. Besides normal user authentication, some employers grant the right to access the sensitive data only if the employees stay in specific locations. That makes sense for businesses that have various construction sites and offices that are not necessarily located in the same geographical region. In those companies, the employees must be able to perform their tasks from different locations regardless of the available network infrastructure. To protect the data from intruders, this research presents a secure location-based user authentication scheme for mobile applications that works offline. This research considers enabling access to the sensitive data using off-the-shelf mobile devices without adding any extra hardware and with no additional information from a fixed infrastructure. The thesis first describes the architecture and attributes of the proposed solution. Then, the techniques used for the design and functionality of the solution are presented. The results of this study reveal that the proposed solution is more suitable for applications that are used in outdoor locations. Finally, to alleviate the shortcoming of the presented technique for indoor locations, a new method has been discussed and tested. This report is a final thesis in collaboration with SAAB. The purpose of this research is to examine the best way to protect sensitive data managed by employees using their smartphones in different workplaces.
APA, Harvard, Vancouver, ISO, and other styles
13

Luckcuck, Matthew. "Safety-Critical Java Level 2 : applications, modelling, and verification." Thesis, University of York, 2016. http://etheses.whiterose.ac.uk/17743/.

Full text
Abstract:
Safety-Critical Java (SCJ) introduces a new programming paradigm for applications that must be certified. To aid certification, SCJ is organised into three compliance levels, which increase in complexity from Level 0 to Level 2. The SCJ language specification (JSR 302) is an Open Group Standard, but it does not include verification techniques. Previous work has addressed verification for Level 0 and Level 1 programs. This thesis supports the much more complex SCJ Level 2 programs, which allow for the programming of highly concurrent multi-processor applications with Java threads, and wait and notify mechanisms. The SCJ language specification is clear on what constitutes a Level 2 program but not why it should be used. The utility of Levels 0 and 1 are clear from their features. The scheduling behaviour required by a program is a primary indicator of whether or not Level 0 should be used. However, both Levels 1 and 2 use concurrency and fixed-priority scheduling, so this cannot be used as an indicator to choose between them. This thesis presents the first examination of utility of the unique features of Level 2 and presents use cases that justify the availability of these features. This thesis presents a technique for modelling SCJ Level 2 programs using the state-rich process algebra Circus. The model abstracts away from resources (for example, memory) and scheduling. An SCJ Level 2 program is represented by a combination of a generic model of the SCJ API (the framework model) and an application-specific model (the application model) of that program. The framework model is reused for each modelled program, whereas the application model is generated afresh. This is the first formal semantics of the SCJ Level 2 paradigm and it provides both top-down and bottom-up benefits. Top-down, it is an essential ingredient in the development of refinement-based reasoning techniques for SCJ Level 2 programs. These can be used to develop Level 2 programs that are correct-by-construction. Bottom-up, the technique can be used as a verification tool for Level 2 programs. This is achieved with the Failures Divergences Refinement checker version 3 (FDR3), after translating the model from Circus to the machine readable version of CSP (CSPM). FDR3 allows animation and model checking, which can reveal sources of deadlock, livelock, and divergence. The CSPM version of the model fits the same pattern, with a generic model of the API being combined with an application-specific model of the program. Because the model ignores scheduling, these checks are a worst-case analysis and can give false-negatives.
APA, Harvard, Vancouver, ISO, and other styles
14

Maad, Sara. "Critical point theory with applications to semilinear problems without compactness." Doctoral thesis, Uppsala : Matematiska institutionen, Univ. [distributör], 2002. http://publications.uu.se/theses/91-506-1557-2/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Bauer, Thomas. "Thermophotovoltaic applications in the UK : critical aspects of system design." Thesis, Northumbria University, 2006. http://nrl.northumbria.ac.uk/82/.

Full text
Abstract:
Almost 50 years of thermophotovoltaic (TPV) research from various sectors has resulted in a variety of potential applications and TPV technology options. In this work the potential of commercial TPV applications is assessed with specific reference to the UK. The assessment considers competing technologies for electricity generation, namely solar photovoltaics, external and internal heat engine generators, electrochemical cells and direct heat-to-electricity conversion devices. Electricity generation by TPV conversion from waste heat of industrial high-temperature processes is identified as one of the most suitable TPV applications. This market is examined in more detail using three specific high-temperature processes from the iron and steel and the glass sectors. Results are extrapolated to the entire UK high-temperature industry and include potential energy and CO2 savings. This work gathers knowledge from TPV and other literature sources and evaluates the technological options for the heat source, the radiator and the PV cell for a TPV system. The optical control in terms of the angular, spatial and in particular spectral radiation distributions in cavities is identified as a specific factor for TPV conversion and critical for a system design. The impact of simultaneous radiation suppression above and below the PV cell bandgap on an ultimate efficiency level is examined. This research focuses on fused silica (SiO2) in TPV cavities and examines the aspects of radiation guidance by total internal reflection and spectral control using coupled radiative and conductive heat transfer. Finite volume modelling and experimental work have examined the radiator-glass-air-PV cell arrangement up to a SiO2 thickness of 20 cm. Both show that the efficiency improves for an increased SiO2 thickness. Finally, the novel concept of a TPV cavity consisting of a solid dielectric medium is assessed.
APA, Harvard, Vancouver, ISO, and other styles
16

Ye, Fan. "Justifying the use of COTS components within safety critical applications." Thesis, University of York, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.424582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Martin, Paul J. (Paul Jonathan). "The industrial-pollution-projection system : critical analysis and potential applications." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/68993.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Tavakoli, Dehkordi Anousheh. "Optimisation and algorithms in wireless networks for mission critical applications." Thesis, University of Greenwich, 2015. http://gala.gre.ac.uk/13827/.

Full text
Abstract:
The focus of this dissertation is to present novel algorithms and techniques for performance optimisation in wireless network systems. The thesis contributes to knowledge on the following topics: (a) sum-rate maximisation of two interfering users served by Orthogonal Frequency Division Multiple Access (OFDMA)-based cooperative base stations and (b) event-region detection in Wireless Sensor Networks (WSNs). The first area of work addresses the problem of maximising the sum rate of two interfering users while limiting the received interference at each user. An OFDMA-based system operating in the downlink is considered. A comparison between the average spectral efficiency achieved by the proposed interference-power-constrained resource allocation scheme and that achieved by a non-cooperative Time Division Multiple Access (TDMA) method shows that the proposed cooperative Base Station (BS) scheme outperforms non-cooperative TDMA. The second area of work addresses the problem of event-region and event-boundary detection in WSNs. A new method for classifying randomly deployed sensor nodes over an area of interest into distinctive categories is provided. In this work, a network of spatially distributed and wirelessly connected sensor nodes commissioned to detect two different phenomena, occurring in distant parts of an area of interest, is considered. An analysis of the correlation between statistical attributes of the received signal distribution at each node and the node's regional position with respect to the two events is provided. Simulation results prove that each node can determine its regional position based only on the statistical attributes of its own environmental readings. This is a promising approach because if only the nodes placed in the nearby region of each phenomenon report back their readings to the Base Station (BS), as opposed to transmitting readings from all nodes, the required bandwidth is reduced to be proportional to the size of that event region only.
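As a rough, hedged illustration of the first problem (not the thesis's algorithm), the sketch below maximises the two-user sum rate by brute-force search over transmit powers under a per-user interference-power cap; all channel gains and limits are illustrative assumptions.

```python
# Minimal sketch: two interfering downlink users, sum-rate maximisation subject
# to an interference-power constraint at each user (illustrative numbers only).

import numpy as np

g11, g22 = 1.0, 0.8      # direct channel gains (BS i -> user i)
g12, g21 = 0.3, 0.2      # cross gains (interference paths)
noise = 0.1
p_max = 1.0              # per-BS transmit power budget
i_max = 0.15             # interference power limit at each user

best = (0.0, None)
for p1 in np.linspace(0, p_max, 101):
    for p2 in np.linspace(0, p_max, 101):
        # interference-power constraints at user 1 and user 2
        if g21 * p2 > i_max or g12 * p1 > i_max:
            continue
        sinr1 = g11 * p1 / (noise + g21 * p2)
        sinr2 = g22 * p2 / (noise + g12 * p1)
        sum_rate = np.log2(1 + sinr1) + np.log2(1 + sinr2)
        if sum_rate > best[0]:
            best = (sum_rate, (p1, p2))

print(f"best sum rate {best[0]:.3f} bit/s/Hz at powers {best[1]}")
```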
APA, Harvard, Vancouver, ISO, and other styles
19

Nuthakki, Chandu. "Modeling and analysis of IP storage protocols for time critical applications." Thesis, Wichita State University, 2007. http://hdl.handle.net/10057/1527.

Full text
Abstract:
In the current storage era, the Fibre Channel (FC) protocol is used for high performance and reliability, providing different levels of service with 1 Gbps to 4 Gbps physical interfaces. FC transports Small Computer System Interface (SCSI) data between storage devices. FC spans from 500 meters to 10 kilometers [27, 28] using multimode and single-mode fibers respectively, limiting the storage to a single site or between two sites. In order to overcome this distance limitation and the implementation costs, Fibre Channel over Internet Protocol (FCIP) and Internet Fibre Channel Protocol (iFCP) were introduced [30, 31]. FCIP tunnels fibre channel frames over the Internet between two fibre switches within the same network. iFCP is an IP-based gateway-to-gateway protocol which interconnects different FC-SANs. Again, all these protocols involve fibre channels, which increases the cost. iSCSI is a new IETF-standardized protocol [26] which transports SCSI data over the Internet. Advancements in Ethernet technologies from 1 Gbps to 10 Gbps enable iSCSI deployments to yield better results in terms of performance and can be very cost-effective compared to FCIP and iFCP. Some of the widely used storage applications in the industry are archiving and mirroring; these applications are used for the backup/recovery process in the IT industry. Archiving is a process in which data is written to portable media such as optical disks or magnetic tapes. Mirroring is a process in which data is replicated on a remote disk. Mirroring can be done in two ways, synchronous and asynchronous. In synchronous mirroring, whenever there is a data update, it is written to both the local and remote disks at the same time. In asynchronous mirroring, data is updated periodically irrespective of the actual update. If iSCSI is used for remote mirroring, end users need to ensure that the performance of iSCSI meets the requirements of the application. Most previous studies proved deficient in considering one aspect or another. In this research work, the author presents the modeling and analysis of iSCSI between two SAN islands considering iSCSI-level errors, which will enable the IT industry to use this model for analysis before actual deployment. Throughout the analysis, the author employs asynchronous mirroring between the SAN islands. iSCSI-level errors need to be considered when SCSI data is on the Internet, as they will seriously affect the performance of the application in real time. The prototype was analyzed using TCP/IP and UDP traffic with both dedicated links and Internet links. When iSCSI is used to interconnect different SAN islands, one should ensure its performance meets the application requirements. Bandwidth management for time-critical applications like synchronous and asynchronous mirroring is essential while analyzing performance on IP networks; for this, the key operations of iSCSI, such as iSCSI read and iSCSI write, were modeled. Also, the throughput under realistic traffic conditions was modeled, varying parameters such as network type, round-trip time, bandwidth, and distance. Security is another important attribute which should be taken into account when IP networks are involved. It is essential to consider the security of data while block-level data is being transferred between two storage islands using iSCSI. The errors processed by this protocol on the IP network were considered and suitable iSCSI error recovery procedures were analyzed.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering
"July 2007."
APA, Harvard, Vancouver, ISO, and other styles
20

Phillips, Shannon L. "Fundamental studies and applications of liquid chromatography at the critical condition /." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu148640254458979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Stapleton, Jacob D. "SYNTHESIS OF UPPER CRITICAL SOLUTION TEMPERATURE POLYMER FOR APPLICATIONS IN BIOTECHNOLOGY." Miami University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=miami1501260269518501.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Nuthakki, Chandu Pendse Ravindra. "Modeling and analysis of IP storage protocols for time critical applications /." Thesis, A link to full text of this thesis in SOAR, 2007. http://hdl.handle.net/10057/1527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Farag, Hossam. "Enabling Time- and Mission-Critical Applications in Industrial Wireless Sensor Networks." Licentiate thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-35572.

Full text
Abstract:
Nowadays, Wireless Sensor Networks (WSNs) have gained importance as a flexible, easier to deploy and maintain, and cost-effective alternative to wired networks, e.g., Fieldbus and Wired-HART, in a wide range of applications. Initially, WSNs were mostly designed for military and environmental monitoring applications where energy efficiency is the main design goal. The nodes in the network were expected to have a long lifetime with minimum maintenance while providing best-effort data delivery, which is acceptable in such scenarios. With recent advances in the industrial domain, WSNs have subsequently been extended to support industrial automation applications such as process automation and control scenarios. However, these emerging applications are characterized by stringent requirements regarding reliability and real-time communications that impose challenges in the design of Industrial Wireless Sensor Networks (IWSNs) to effectively support time- and mission-critical applications. Typically, time- and mission-critical applications support different traffic categories ranging from relaxed requirements, such as monitoring traffic, to firm requirements, such as critical safety and emergency traffic. The critical traffic is mostly acyclic in nature and occasionally occurs at unpredictable time instants. Once it is generated, it must be delivered within strict deadlines. Exceeding the delay bound could lead to system instability, economic loss, or even endanger human life in the working area. The situation becomes even more challenging when an emergency event triggers multiple sensor nodes to transmit critical traffic to the controller simultaneously. The unpredictability of the arrival of such traffic introduces difficulties with regard to constructing a suitable schedule that guarantees data delivery within deadline bounds. Existing industrial standards and related research work have thus far not presented a satisfactory solution to the issue. Therefore, providing deterministic and timely delivery for critical traffic and its prioritization over regular traffic is a vital research topic. Motivated by the aforementioned challenges, this work aims to enable real-time communication for time- and mission-critical applications in IWSNs. In this context, improved Medium Access Control (MAC) protocols are proposed to enable a priority-based channel access that provides timely delivery for acyclic critical traffic. The proposed framework starts with a stochastic modelling of the network delay performance under a priority-oriented transmission scheme, followed by two MAC approaches. The first approach proposes a random Clear Channel Assessment (CCA) mechanism to improve the transmission efficiency of acyclic control traffic that is generated occasionally as a result of observations of an established tendency, such as closed-loop supervisory traffic. A Discrete-Time Markov Chain (DTMC) model is provided to evaluate the performance of the proposed protocol analytically in terms of the expected delay and throughput. Numerical results show that the proposed random CCA mechanism improves the shared slots approach in WirelessHART in terms of delay and throughput along with better transmission reliability. The second approach introduces a slot-stealing MAC protocol based on a dynamic deadline-aware scheduling to provide deterministic channel access in emergency and event-based situations, where multiple sensor nodes are triggered simultaneously to transmit time-critical data to the controller. The proposed protocol is evaluated mathematically to provide the worst-case delay bound for the time-critical traffic, and the numerical results show that the proposed approach outperforms TDMA-based WSNs in terms of delay and channel utilization.
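As a loose, hedged illustration of deadline-aware slot assignment for simultaneously generated critical packets (not the slot-stealing protocol from the thesis), the sketch below orders packets earliest-deadline-first and checks which ones meet their deadlines; the packet data are illustrative assumptions.

```python
# Minimal sketch: earliest-deadline-first TDMA slot assignment for critical
# packets generated at the same instant (illustrative data only).

# (node, deadline) -- deadlines expressed as the number of slots available
critical_packets = [("n3", 4), ("n1", 2), ("n7", 6), ("n5", 3)]

schedule = []
for slot, (node, deadline) in enumerate(sorted(critical_packets,
                                               key=lambda p: p[1])):
    met = slot < deadline          # packet is transmitted in slot index `slot`
    schedule.append((slot, node, deadline, met))

for slot, node, deadline, met in schedule:
    status = "ok" if met else "MISS"
    print(f"slot {slot}: {node} (deadline {deadline}) -> {status}")
```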
SMART (Smarta system och tjänster för ett effektivt och innovativt samhälle)
APA, Harvard, Vancouver, ISO, and other styles
24

Evers, Barbara. "The contribution of gender analysis to economic theory and its policy applications." Thesis, University of Manchester, 2010. http://www.manchester.ac.uk/escholar/uk-ac-man-scw:110580.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Ao, Gang. "Software hot-swapping techniques for upgrading mission-critical applications on the fly." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0023/MQ52384.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Zhong, Le. "A critical literature review of case-based reasoning and its educational applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ64004.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Stewart, David G. "Thermophysical properties of gases and gas mixtures for critical flow nozzle applications." Thesis, University of Strathclyde, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Woźniak, Mariusz. "High engineering critical current density MgB2 wires and joints for MRI applications." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Eleye-Datubo, Adokiye Godwill. "Integrative risk-based assessment modelling of safety-critical marine and offshore applications." Thesis, Liverpool John Moores University, 2005. http://researchonline.ljmu.ac.uk/5806/.

Full text
Abstract:
This research has first reviewed the current status and future aspects of marine and offshore safety assessment. The major problems identified in marine and offshore safety assessment in this research are associated with inappropriate treatment of uncertainty in data and human error issues during the modelling process. Following the identification of the research needs, this thesis has developed several analytical models for the safety assessment of marine and offshore systems/units. Such models can be effectively integrated into a risk-based framework using the marine formal safety assessment and offshore safety case concepts. Bayesian network (BN) and fuzzy logic (FL) approaches applicable to marine and offshore safety assessment have been proposed for systematically and effectively addressing uncertainty due to randomness and vagueness in data respectively. BN test cases for both a ship evacuation process and a collision scenario between a shuttle tanker and a Floating Production, Storage and Offloading unit (FPSO) have been produced within a cause-effect domain in which Bayes' theorem is the focal mechanism of inference processing. The proposed FL model incorporating fuzzy set theory and an evidential reasoning synthesis has been demonstrated on the FPSO-shuttle tanker collision scenario. The FL and BN models have been combined via mass assignment theory into a fuzzy-Bayesian network (FBN) in which the advantages of both are incorporated. This FBN model has then been demonstrated by addressing human error issues in a ship evacuation study using performance-shaping factors. It is concluded that the developed FL, BN and FBN models provide a flexible and transparent way of improving safety knowledge, assessments and practices in marine and offshore applications. The outcomes have the potential to facilitate the decision-making process in a risk-based framework. Finally, the results of the research are summarised and areas where further research is required to improve the developed methodologies are outlined.
APA, Harvard, Vancouver, ISO, and other styles
30

Reedman, Adam Victor Creyke. "The design and control of a manipulator for safety-critical deployment applications." Thesis, Loughborough University, 2002. https://dspace.lboro.ac.uk/2134/33736.

Full text
Abstract:
Development of manipulators that interact closely with humans has been a focus of research in fields such as robot-assisted surgery. The recent introduction of powered surgical-assistant devices into the operating theatre has meant that modified industrial robot manipulators have been required to interact with both patient and surgeon. Some of these robots require the surgeon to grasp the end-effector and apply a force while the joint actuators provide resistance to motion. In the operating theatre, the use of high-powered mechanisms to perform these tasks could compromise the safety of the patient, surgeon and operating room staff. In this investigation, a two degrees-of-freedom (2-DoF) manipulator is designed for the purpose of following a pre-defined path under the direct control of the surgeon.
APA, Harvard, Vancouver, ISO, and other styles
31

Schmitt, Felix, Robert Dietrich, and Guido Juckeland. "Scalable critical-path analysis and optimization guidance for hybrid MPI-CUDA applications." Sage, 2017. https://tud.qucosa.de/id/qucosa%3A35554.

Full text
Abstract:
The use of accelerators in heterogeneous systems is an established approach in designing petascale applications. Today, Compute Unified Device Architecture (CUDA) offers a rich programming interface for GPU accelerators but requires developers to incorporate several layers of parallelism on both the CPU and the GPU. From this increasing program complexity emerges the need for sophisticated performance tools. This work contributes by analyzing hybrid MPI-CUDA programs for properties based on wait states, such as the critical path, a metric proven to identify application bottlenecks effectively. We developed a tool to construct a dependency graph based on an execution trace and the inherent dependencies of the programming models CUDA and Message Passing Interface (MPI). Thereafter, it detects wait states and attributes blame to responsible activities. Together with the property of being on the critical path, we can identify activities that are most viable for optimization. To evaluate the global impact of optimizations to critical activities, we predict the program execution using a graph-based performance projection. The developed approach has been demonstrated with suitable examples to be both scalable and correct. Furthermore, we establish a new categorization of CUDA inefficiency patterns ensuing from the dependencies between CUDA activities.
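The critical-path idea itself can be illustrated with a toy dependency graph; the sketch below (not the authors' tool) finds the longest path through a handful of activities. The activity names and durations are illustrative assumptions.

```python
# Minimal sketch: critical (longest) path through a small activity DAG.

import functools

durations = {"mpi_recv": 2.0, "kernel_a": 5.0, "memcpy": 1.0,
             "kernel_b": 3.0, "mpi_send": 2.0}
# edges: activity -> activities that depend on it
edges = {"mpi_recv": ["kernel_a"], "kernel_a": ["memcpy", "kernel_b"],
         "memcpy": ["mpi_send"], "kernel_b": ["mpi_send"], "mpi_send": []}

@functools.lru_cache(maxsize=None)
def longest_from(node):
    """(length, path) of the longest path starting at `node`."""
    best_len, best_path = 0.0, []
    for succ in edges[node]:
        length, path = longest_from(succ)
        if length > best_len:
            best_len, best_path = length, path
    return durations[node] + best_len, [node] + best_path

length, path = max(longest_from(n) for n in durations)
print("critical path:", " -> ".join(path), f"({length:.1f} time units)")
```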
APA, Harvard, Vancouver, ISO, and other styles
32

Ganesh, Sai Vinayak. "Critical analysis of aging models for lithium-ion second-life battery applications." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587643968721108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Abed, Wathiq. "Robust fault analysis for permanent magnet DC motor in safety critical applications." Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3550.

Full text
Abstract:
Robust fault analysis (FA), including the diagnosis of faults and the prediction of their level of severity, is necessary to optimise maintenance and improve the reliability of aircraft. Early diagnosis of faults that might occur in the supervised process makes it possible to perform important preventative actions. The proposed diagnostic models were validated in two experimental tests. The first test concerned single localised and generalised rolling-element bearing faults in a permanent magnet brushless DC (PMBLDC) motor. Rolling-element bearing defects are one of the main reasons for breakdown in electrical machines. Vibration and current are analysed under stationary and non-stationary load and speed conditions, for a variety of bearing fault severities, and for both local and global bearing faults. The second test examined the case of an unbalanced rotor due to blade faults in a thruster based on a permanent magnet brushed DC (PMBDC) motor. A variety of blade fault conditions were investigated over a wide range of rotation speeds. The tests used the discrete wavelet transform (DWT) to extract useful features, followed by feature reduction techniques to avoid redundant features; this reduces the computation requirements and the time taken for classification through the application of an orthogonal fuzzy neighbourhood discriminant analysis (OFNDA) approach. Real-time monitoring of motor operating conditions is an advanced technique that presents the real performance of the motor, so that the proposed dynamic recurrent neural network (DRNN) predicts the condition of components and classifies the different faults under different operating conditions. The results obtained from real-time simulation demonstrate the effectiveness and reliability of the proposed methodology in accurately classifying faults and predicting levels of fault severity.
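The DWT feature-extraction step can be sketched in a few lines; the example below (not the thesis's pipeline) computes relative sub-band energies of a synthetic vibration signal with PyWavelets. The signal, wavelet choice and decomposition level are illustrative assumptions.

```python
# Minimal sketch: DWT-based energy features from a vibration-like signal.

import numpy as np
import pywt

fs = 10_000                                   # sampling rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)
# low-frequency tone plus a weak high-frequency "fault" component and noise
signal = (np.sin(2 * np.pi * 50 * t)
          + 0.2 * np.sin(2 * np.pi * 2_000 * t)
          + 0.05 * np.random.randn(t.size))

# 4-level DWT: one approximation band + four detail bands
coeffs = pywt.wavedec(signal, "db4", level=4)

# Feature vector: relative energy per sub-band (a common, simple choice)
energies = np.array([np.sum(c ** 2) for c in coeffs])
features = energies / energies.sum()
for name, f in zip(["A4", "D4", "D3", "D2", "D1"], features):
    print(f"{name}: {f:.3f}")
```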
APA, Harvard, Vancouver, ISO, and other styles
34

Ao, Gang. "Software hot-swapping techniques for upgrading mission critical applications on the fly." Carleton University dissertation, Systems and Computer Engineering. Ottawa, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lyamin, Nikita. "Performance evaluation of safety critical ITS-G5 V2V communications for cooperative driving Applications." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/665484.

Full text
Abstract:
Intelligent Transport Systems (ITS) aim to provide innovative services related to different modes of transport and traffic management, and to enable various users to be better informed and make safer, more coordinated and smarter use of transport networks. Cooperative ITS (C-ITS) supports connectivity between vehicles, between vehicles and roadside infrastructure and traffic signals, as well as with other road users. In order to enable vehicular communications, the European Telecommunications Standards Institute (ETSI) delivered ITS-G5 – a set of C-ITS standards. Considering the goals of C-ITS, inter-vehicle communications should be reliable and efficient. The subject of this thesis is the evaluation of the performance, efficiency, and dependability of ITS-G5 communications in support of cooperative driving applications. This thesis includes eight scientific papers and extends the research area in three directions: evaluating the performance of ITS-G5 beaconing protocols; studying the performance of ITS-G5 congestion control mechanisms; and studying radio jamming Denial-of-Service (DoS) attacks and their detection methods. First, an overview of currently available and ongoing standardization targeting communications in the C-ACC/platooning cooperative driving application is provided. Then, as part of the first research direction, we demonstrate via a number of studies that the adaptive beaconing approach, where message generation is coupled to the speed variation of the originating ITS-S, may lead to a similar message synchronization effect in the time domain when vehicles follow mobility scenarios that involve cooperative speed variation. We explain in detail the cause of this phenomenon and test it for a wide range of parameters. In relation to the second problem, we first study the influence of different available legitimate ITS-G5 setups on C-ACC/platooning fuel efficiency and demonstrate that a proper communication setup may enhance fuel savings. Then we thoroughly study the standardization of the congestion control mechanism for ITS-G5, which will affect the operation of all cooperative driving C-ITS applications as a mandatory component. We study the influence of congestion control on application performance and give recommendations for improvement, so that congestion control targets the applications' performance metrics. In the scope of the last research direction, we propose two real-time jamming DoS detection methods. The main advantage of our detection techniques is their short learning phase, which does not exceed a few seconds, and their low detection delay of a few hundred milliseconds. Under some assumptions, the proposed algorithms demonstrate the ability to detect certain types of attacks with high detection probability.
APA, Harvard, Vancouver, ISO, and other styles
36

Heursch, Arnd Christian. "Using standard operating systems for time critical applications with special emphasis on LINUX /." kostenfrei, 2006. http://nbn-resolving.de/urn:nbn:de:bvb:706-1856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Rossi, Giorgia. "Optimised protocols for time-critical applications and internetworking in vehicular ad-hoc networks." Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/56212.

Full text
Abstract:
Vehicular ad-hoc networks (VANETs) that enable communication among vehicles and between vehicles and unmanned aerial vehicles (UAVs) and cellular base stations have recently attracted significant interest from the research community, due to the wide range of practical applications they can facilitate (e.g., road safety, traffic management and rescue missions). Despite this increased research activity, the high vehicle mobility in a VANET raises concerns regarding the robustness and adaptiveness of such networks to support time-critical applications and internetworking. In this thesis, as a first step toward the design of an efficient MAC protocol to support time-critical applications and internetworking, we show that it is indeed possible to follow the dynamics of a network and consequently adapt the transmission probability of the Aloha protocol to reduce the interference and maximise the single-hop throughput between adjacent nodes. Extensive simulation validates the proposed analytical model, which can thus serve as a promising tool to improve VANET performance. By exploiting the parallel between the CSMA/CA and Aloha performance models, the optimal transmission probability for the Aloha protocol as a function of estimated vehicular density is derived. This probability is then used to obtain the optimal maximum CW that can be integrated into an amended CSMA/CA protocol to maximise the single-hop throughput among adjacent vehicles. We show by means of simulation that the beneficial impact of the proposed protocol is increased channel throughput and reduced transmission delay when compared with the standardised CSMA/CA protocol in IEEE 802.11p. These results reveal the applicability of the new, optimised protocol to safety applications and clustering techniques with stringent performance requirements. Lastly, we propose a Stable Clustering Algorithm for vehicular ad-hoc networks (SCalE) internetworking. The exchange of the necessary status information to support efficient cluster formation can firmly rely on the support of our optimised CSMA/CA protocol. The SCalE algorithm makes use of knowledge of the vehicles' behaviour (explained in Chapter 5) for efficient selection of CHs, and selects a backup CH on top of the CH to maintain the stability of cluster structures. The increased stability and improved performance of the SCalE algorithm are studied and compared with existing clustering algorithms.
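The density-adaptive transmission probability builds on the textbook slotted-Aloha result, which the sketch below reproduces (this is standard background, not the thesis's full analytical model); the vehicle counts are illustrative assumptions.

```python
# Minimal sketch: slotted-Aloha single-hop throughput S(p) = N*p*(1-p)^(N-1),
# maximised at p = 1/N, i.e. a density-adaptive transmission probability.

def aloha_throughput(n_vehicles, p):
    """Expected successful transmissions per slot on a shared channel."""
    return n_vehicles * p * (1.0 - p) ** (n_vehicles - 1)

for n in (5, 20, 50):
    p_opt = 1.0 / n
    print(f"N={n:3d}: p*={p_opt:.3f}, "
          f"S(p*)={aloha_throughput(n, p_opt):.3f} pkt/slot")
```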
APA, Harvard, Vancouver, ISO, and other styles
38

Pease, Sarogini G. "A cross-layer middleware architecture for time and safety critical applications in MANETs." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/13606.

Full text
Abstract:
Mobile Ad hoc Networks (MANETs) can be deployed instantaneously and adaptively, making them highly suitable to military, medical and disaster-response scenarios. Using real-time applications for provision of instantaneous and dependable communications, media streaming, and device control in these scenarios is a growing research field. Realising timing requirements in packet delivery is essential to safety-critical real-time applications that are both delay- and loss-sensitive. Safety of these applications is compromised by packet loss, both on the network and by the applications themselves that will drop packets exceeding delay bounds. However, the provision of this required Quality of Service (QoS) must overcome issues relating to the lack of reliable existing infrastructure and the conservation of safety-certified functionality. It must also overcome issues relating to the layer-2 dynamics with causal factors including hidden transmitters and fading channels. This thesis proposes that bounded maximum delay and safety-critical application support can be achieved by using cross-layer middleware. Such an approach benefits from the use of established protocols without requiring modifications to safety-certified ones. This research proposes ROAM: a novel, adaptive and scalable cross-layer Real-time Optimising Ad hoc Middleware framework for the provision and maintenance of performance guarantees in self-configuring MANETs. The ROAM framework is designed to be scalable to new optimisers and MANET protocols and requires no modifications of protocol functionality. Four original contributions are proposed: (1) ROAM, a middleware entity that abstracts information from the protocol stack using application programming interfaces (APIs) and implements optimisers to monitor and autonomously tune conditions at protocol layers in response to dynamic network conditions. The cross-layer approach is MANET protocol generic, using minimal imposition on the protocol stack, without protocol modification requirements. (2) A horizontal handoff optimiser that responds to time-varying link quality to ensure optimal and most robust channel usage. (3) A distributed contention reduction optimiser that reduces channel contention and related delay, in response to detection of the presence of a hidden transmitter. (4) A feasibility evaluation of the ROAM architecture to bound maximum delay and jitter in a comprehensive range of ns2-MIRACLE simulation scenarios that demonstrate independence from the key causes of network dynamics: application setting and MANET configuration, including mobility and topology. Experimental results show that ROAM can constrain end-to-end delay, jitter and packet loss, to support real-time applications with critical timing requirements.
APA, Harvard, Vancouver, ISO, and other styles
39

NEFFELLI, MARCO. "COVARIANCE MATRIX CONSTRUCTION AND ESTIMATION: CRITICAL ANALYSES AND EMPIRICAL CASES FOR PORTFOLIO APPLICATIONS." Doctoral thesis, Università degli studi di Genova, 2019. http://hdl.handle.net/11567/945803.

Full text
Abstract:
The thesis contributes to the financial econometrics literature by improving the estimation of the covariance matrix among financial time series. To this aim, existing econometric tools have been investigated and improved, while new ones have been introduced into the field. The main goal is to improve portfolio construction for financial hedging, asset allocation and interest rate risk management. The empirical applicability of the proposed innovations has been tested through several case studies involving real and simulated datasets. The thesis is organised in three main chapters, each dealing with a specific financial challenge where the covariance matrix plays a central role. Chapter 2 tackles the problem of hedging portfolios composed of energy commodities. Here, the underlying multivariate volatility among spot and futures securities is modelled with multivariate GARCH models. Under this specific framework, we propose two novel approaches to construct the covariance matrix among commodities, and hence the resulting long-short hedging portfolios. On the one hand, we propose to calculate the hedge ratio of each portfolio constituent and to combine them later into a unique hedged position. On the other hand, we propose to directly hedge the spot portfolio, incorporating in this way the investor's risk and return preferences. Through a comprehensive numerical case study, we assess the sensitivity of both approaches to volatility and correlation misspecification. Moreover, we empirically show how the two approaches should be implemented to hedge a crude oil portfolio. Chapter 3 focuses on covariance matrix estimation when the underlying data show non-normality and high dimensionality. To this extent, we introduce a novel estimator for the covariance matrix and its inverse – the Minimum Regularised Covariance Determinant estimator (MRCD) – from chemistry and criminology into our field. The aim is twofold: first, we improve the estimation of the Global Minimum Variance Portfolio by exploiting the MRCD closed-form solution for the covariance matrix inverse. Through an extensive Monte Carlo simulation study we check the effectiveness of the proposed approach in comparison to the sample estimator. Furthermore, we take on an empirical case study featuring five real investment universes characterised by different stylised facts and dimensions. Both the simulation and the empirical analysis clearly demonstrate the out-of-sample performance improvement from using the MRCD. Second, we turn our attention to modelling the relationships among interest rates, comparing five covariance matrix estimators. Here, we extract the principal components driving the yield curve volatility to give important insights for fixed income portfolio construction and risk management. An empirical application involving the US term structure illustrates the inadequacy of the sample covariance matrix for dealing with interest rates. In Chapter 4, we improve the shrinkage estimator for four risk-based portfolios. In particular, we focus on the target matrix, investigating six different estimators. By means of an extensive numerical example, we check the sensitivity of each risk-based portfolio to volatility and correlation misspecification in the target matrix. Furthermore, through a comprehensive Monte Carlo experiment, we offer a comparative study of the target estimators, testing their ability to reproduce the true portfolio weights. Controlling for the dataset dimensionality and the shrinkage intensity, we find that the Identity and Variance Identity target estimators are the best targets towards which to shrink, always holding good statistical properties.
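As a worked illustration of the estimators this abstract discusses, the sketch below combines linear shrinkage towards an identity-type target with the closed-form Global Minimum Variance weights w = Σ^{-1}1 / (1'Σ^{-1}1). It is a generic textbook construction on simulated inputs, not the MRCD estimator or the thesis's empirical set-up; the shrinkage intensity and data are assumptions.

```python
import numpy as np

def shrink_covariance(returns: np.ndarray, delta: float, target: str = "identity") -> np.ndarray:
    """Linear shrinkage Sigma = (1 - delta) * S + delta * T, with S the sample
    covariance and T one of the simple structured targets named above."""
    s = np.cov(returns, rowvar=False)
    k = s.shape[0]
    if target == "identity":
        t = np.eye(k)
    elif target == "variance_identity":
        t = np.eye(k) * np.trace(s) / k   # identity scaled by the average variance
    else:
        raise ValueError(f"unknown target: {target}")
    return (1.0 - delta) * s + delta * t

def gmv_weights(sigma: np.ndarray) -> np.ndarray:
    """Global Minimum Variance weights w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(sigma.shape[0])
    x = np.linalg.solve(sigma, ones)
    return x / x.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r = rng.normal(size=(60, 10))          # 60 simulated observations of 10 assets
    sigma = shrink_covariance(r, delta=0.3, target="variance_identity")
    print(gmv_weights(sigma).round(3))
```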
APA, Harvard, Vancouver, ISO, and other styles
40

Hubbard, Jerry. "Postsecondary Instructor Attitudes Toward Tablet Use for Collaboration and Critical Thinking Development." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/4143.

Full text
Abstract:
Although research has identified critical thinking (CT) as an objective of higher education, limited quantitative research has focused on how postsecondary instructors view using handheld devices for classroom collaboration to support CT. There are studies examining how the use of tablet technologies influences collaborative learning (CL), showing a link between CL and CT, and connecting CT to academic achievement. However, how instructors perceive the intersection of these factors has not been well studied. Applying Vygotsky's social cognitive theory as a foundation of CL, and using adapted questions from two questionnaires (Technology Acceptance Model and Cooperative Learning Implementation) and two frameworks, this quantitative survey study examined the relationship between tablet application and implementation of CL, and then between CL implementation and the development of CT dispositions (CTD). An email with a link to the survey was sent to a population of 1,932 instructors in a professional education technology organization. From a sample of 59, the key findings indicated that instructors accepted the use and usefulness of tablets in the classroom and used applications for completing collaborative tasks. The Pearson product-moment correlations between tablet acceptance and CL implementation appear to be affected by instructors' professional views and teaching practices. Perceptions about the development of CTD were positive, though with limited statistical significance. Results of this study may provide insights into using tablets in effective ways to enhance learning outcomes as one social benefit. Improving the CT of students may support developing citizens who contribute to communities and society in positive ways as lifelong learners.
APA, Harvard, Vancouver, ISO, and other styles
41

Brown, Mark Anthony. "An investigation of three-dimensional displays for real-time, safety-critical command/control applications : with application to air traffic control." Thesis, Queen Mary, University of London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Cardany, John Paul. "Node to processor allocation for large grain data flow graphs in throughput-critical applications." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA283607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Yao, Xudong. "Minimax methods for finding multiple saddle critical points in Banach spaces and their applications." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/2732.

Full text
Abstract:
This dissertation studied computational theory and methods for finding multiple saddle critical points in Banach spaces. Two local minimax methods were developed for this purpose. One was for unconstrained cases and the other was for constrained cases. First, two local minmax characterizations of saddle critical points in Banach spaces were established. Based on these two local minmax characterizations, two local minimax algorithms were designed. Their flow charts were presented. Then convergence analysis of the algorithms was carried out. Under certain assumptions, a subsequence convergence and a point-to-set convergence were obtained. Furthermore, a relation between the convergence rates of the functional value sequence and the corresponding gradient sequence was derived. Techniques to implement the algorithms were discussed. In numerical experiments, those techniques were successfully implemented to solve for multiple solutions of several quasilinear elliptic boundary value problems and multiple eigenpairs of the well-known nonlinear p-Laplacian operator. Numerical solutions were presented by their profiles for visualization. Several interesting phenomena of the solutions of quasilinear elliptic boundary value problems and the eigenpairs of the p-Laplacian operator have been observed and are open for further investigation. As a generalization of the above results, nonsmooth critical points were considered for locally Lipschitz continuous functionals. A local minmax characterization of nonsmooth saddle critical points was also established. To establish its version in Banach spaces, a new notion, the pseudo-generalized-gradient, had to be introduced. Based on the characterization, a local minimax algorithm for finding multiple nonsmooth saddle critical points was proposed for further study.
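For orientation, the classical (global) minimax characterisation that such methods refine reads as follows; the thesis establishes local versions of this principle in Banach spaces, which differ in detail from this sketch.

\begin{equation*}
c \;=\; \inf_{A \in \Gamma}\ \max_{u \in A} J(u),
\end{equation*}

where J is the energy functional, Gamma is a suitable class of sets (for example, continuous paths joining two low-energy points in the mountain-pass case), and, under a Palais-Smale type compactness condition, c is a critical value of J attained at a saddle point.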
APA, Harvard, Vancouver, ISO, and other styles
44

Powell, Thomas. "Understanding sub-critical water hydrolysis of proteins by mass : applications in proteomics and biorefining." Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8697/.

Full text
Abstract:
Sub-critical water (SCW) hydrolysis has previously been used in the extraction of antioxidant compounds from a variety of food wastes, in particular those which are rich in protein. The brewing industry generates high volumes of waste. Its most abundant component, brewers' spent grain (BSG), is high in protein content. The work presented in this thesis aimed to investigate the SCW extraction of antioxidant compounds from BSG. Whilst SCW hydrolysis has proved effective in the extraction of antioxidants from a range of compounds, its mechanism of action has not been thoroughly investigated. High performance liquid chromatography (HPLC) coupled to tandem mass spectrometry (MS/MS) was used to analyse peptide production from the SCW hydrolysis of proteins. Sites of cleavage were identified and a mechanism of action of SCW on proteins was postulated. The results from this analysis raised the possibility of using SCW as a proteolytic reagent in proteomics experiments. Approaches for SCW-based proteomics were further explored by investigating SCW-induced amino acid side chain modifications to aid peptide identification. To assess the antioxidant capacity of mixtures generated via SCW hydrolysis, oxygen radical absorbance capacity, reducing power and comet assays were used. The decomposition products responsible for antioxidant capacity were characterised using MS/MS.
APA, Harvard, Vancouver, ISO, and other styles
45

Chirindo, Tasimba Denford David. "An open vendor agnostic fog computing framework for mission critical and data dense applications." Master's thesis, Faculty of Engineering and the Built Environment, 2018. http://hdl.handle.net/11427/29984.

Full text
Abstract:
Digital innovation from the Internet of Things (IoT), Artificial Intelligence, Tactile Internet and Industry 4.0 applications is transforming the way we work, commute, shop and play. Current deployment strategies of these applications emphasize mandatory cloud connectivity. However, this is not feasible in many real-world situations particularly where data dense and mission critical applications with stringent requirements are concerned. Cloud computing offers unlimited on-demand computing, storage and networking power for industry to leverage. However, as its scope and scale continues to expand, its limitations like high latency, accessibility, security and compliance shortcomings prevent its greater use and applicability particularly in scenarios where real-time communication and the quality of rapid computing delivered is a necessity. Fog computing hopes to bridge this gap by introducing an intermediary computing layer between end users and the cloud. At present, architectures for fog computing exist in specialized areas with current implementations being proprietary, vendor-locked and requiring dramatic and non-transferable changes to hardware and software to meet vendor requirements. Moreover, fog computing is still quite a recent area which makes the state of the art incipient regarding architecture definitions, middleware and real-world implementations. There is therefore an urgent need for standardization of these technologies. This is of paramount importance as otherwise, there will exist multiple and not necessarily compatible solutions which could lead to a fragmented marketplace that would fail to grow. In an effort to address these limitations in current fog architectures, this dissertation proposes and implements a novel fog computing architecture that aligns the reference architectures from a leading industry consortium, OpenFog, and a leading standards setting organization, the European Telecommunications Standards Institute (ETSI). This cooperation framework from industry, academia and regulatory institute aims to make it easier for both application developers and infrastructure solution providers to develop towards a common, open and interoperable fog computing environment. The proposed framework has the following attributes: modular, plug-in design, generic, open, standards compliant, vendor agnostic and runs on high volume standard hardware whilst preserving the benefits offered by public clouds such as containerization, virtualization, orchestration, manageability and efficiency. Moreover, for the various stakeholders in the fog value chain where it is key to strike a balance between information technology and business operations, this thesis tenders insights and best practices to help achieve these multiple and sometimes competing goals. The proposed framework was implemented in a testbed environment made up entirely of free and open source software, therefore creating a convenient point of departure for further research by others. Two geographically distributed fog node data centres and a cloud management and orchestration tool were setup in the testbed. While this evaluation framework and practical implementation demonstrated proof of concept, further evaluations were conducted to benchmark the performance against existing alternative solutions. These evaluations were based on a prototype industrial IoT application that was deployed on the testbed to evaluate the impact of the Open Vendor Agnostic Fog Framework (OVAFF) solution on application performance. 
The implementation showed that the proposed OVAFF solution is feasible, implementable and supports distributed edge cloud data centres. Results from the prototype application showed that the OVAFF can provide up to tenfold higher throughput, and substantially lower latency, jitter and packet loss rates, than remote clouds. Moreover, the OVAFF exhibits further advantages for non-performance attributes such as data reduction, compliance and geographical locality of control. In addition, the results also pointed towards the viability of open business models, such as federated infrastructure sharing and a fog marketplace, in the fog ecosystem. Finally, this thesis tackled the highlighted open challenges in current fog systems, such as orchestration, distribution, tiering, heterogeneity and resilience, which were outlined in the research motivation and problem definition.
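As a simple, hedged illustration of the kind of latency comparison reported above, the snippet below measures median HTTP round-trip times to a nearby fog node and to a remote cloud deployment of the same service. The two URLs are placeholders, not the endpoints of the thesis testbed, and the measurement is a generic sketch rather than the evaluation methodology used in the dissertation.

```python
import statistics
import time
import urllib.request

def round_trip_ms(url: str, samples: int = 20) -> float:
    """Median HTTP round-trip time to an endpoint, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read()
        times.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(times)

if __name__ == "__main__":
    fog = round_trip_ms("http://fog-node.local:8080/health")      # placeholder URL
    cloud = round_trip_ms("http://cloud.example.com/health")      # placeholder URL
    print(f"fog   : {fog:7.1f} ms")
    print(f"cloud : {cloud:7.1f} ms")
    print(f"ratio : {cloud / fog:4.1f}x slower via the cloud")
```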
APA, Harvard, Vancouver, ISO, and other styles
46

Ojdanic, Milos. "SYSTEMATIC LITERATURE REVIEW OF SAFETY-RELATED CHALLENGES FOR AUTONOMOUS SYSTEMS IN SAFETY-CRITICAL APPLICATIONS." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-43980.

Full text
Abstract:
An increased focus on the development of autonomous safety-critical systems requires more attention to ensuring the safety of humans and the environment. The main objective of this thesis is to explore the state of the art and to identify the safety-related challenges being addressed for using autonomy in safety-critical systems. In particular, the thesis explores the nature of these challenges, the different autonomy levels they address and the types of safety measures proposed as solutions. Above all, we focus on the safety measures by their degree of adaptiveness, the time at which they are active and their ability to make decisions. Collection of this information is performed by conducting a Systematic Literature Review of publications from the past 9 years. The results showed an increase in publications addressing challenges related to the use of autonomy in safety-critical systems. We identified four high-level classes of safety challenges. The results also indicate that the focus of research was on finding solutions for challenges related to fully autonomous systems as well as solutions that are independent of the level of autonomy. Furthermore, considering the number of publications, the results show that non-learning solutions addressing the identified safety challenges prevail over learning ones, active solutions over passive ones and decisive solutions over supportive ones.
APA, Harvard, Vancouver, ISO, and other styles
47

Sanyal, Sourav. "Predicting Critical Warps in Near-Threshold GPGPU Applications Using a Dynamic Choke Point Analysis." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7545.

Full text
Abstract:
General purpose graphics processing units (GP-GPU), owing to their enormous thread-level parallelism, can significantly improve the power consumption at the near-threshold (NTC) operating region, while offering close to a super-threshold performance. However, process variation (PV) can drastically reduce the GPU performance at NTC. In this work, choke points—a unique device-level characteristic of PV at NTC—that can exacerbate the warp criticality problem in GPUs have been explored. It is shown that the modern warp schedulers cannot tackle the choke point induced critical warps in an NTC GPU. Additionally, Choke Point Aware Warp Speculator, a circuit-architectural solution is proposed to dynamically predict the critical warps in GPUs, and accelerate them in their respective execution units. The best scheme achieves an average improvement of ∼39% in performance, and ∼31% in energy-efficiency, over one state-of-the-art warp scheduler, across 15 GPGPU applications, while incurring marginal hardware overheads.
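The snippet below is a deliberately simplified, purely illustrative model of the idea of ranking warps by accumulated choke-point stalls so that a scheduler could prioritise them. The actual mechanism in the thesis is a circuit-architectural speculator; every field, weight and threshold here is a hypothetical stand-in.

```python
from dataclasses import dataclass

@dataclass
class Warp:
    warp_id: int
    choke_delay_cycles: int   # stall cycles attributed to choke-point-affected lanes
    remaining_instrs: int     # rough proxy for remaining work

def predict_critical_warps(warps, top_k: int = 2) -> set:
    """Toy predictor: rank warps by choke-point stalls weighted by remaining
    work and mark the top-k as critical for preferential scheduling."""
    ranked = sorted(
        warps,
        key=lambda w: w.choke_delay_cycles * max(w.remaining_instrs, 1),
        reverse=True,
    )
    return {w.warp_id for w in ranked[:top_k]}

if __name__ == "__main__":
    warps = [Warp(0, 120, 400), Warp(1, 10, 900), Warp(2, 300, 150), Warp(3, 0, 50)]
    print(predict_critical_warps(warps))   # {0, 2} for this example
```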
APA, Harvard, Vancouver, ISO, and other styles
48

Enniss, Harris. "A Refined Saddle Point Theorem and Applications." Scholarship @ Claremont, 2012. https://scholarship.claremont.edu/hmc_theses/33.

Full text
Abstract:
Under adequate conditions on $g$, we show the density in $L^2((0,\pi),(0,2\pi))$ of the set of functions $p$ for which \begin{equation*} u_{tt}(x,t)-u_{xx}(x,t)= g(u(x,t)) + p(x,t) \end{equation*} has a weak solution subject to \begin{equation*} \begin{aligned} u(x,t)&=u(x,t+2\pi)\\ u(0,t)&=u(\pi,t)=0. \end{aligned} \end{equation*} To achieve this, we prove a Saddle Point Principle by means of a refined variant of the deformation lemma of Rabinowitz. Generally, inf-sup techniques allow the characterization of critical values by taking the minimum of the maximae on some particular class of sets. In this version of the Saddle Point Principle, we introduce sufficient conditions for the existence of a saddle-structure which is not restricted to finite-dimensional subspaces.
APA, Harvard, Vancouver, ISO, and other styles
49

Hahn, Klaus. "Critical investigation of the mobile information technology expert's perspective on the impact of the mobile application development within the German financial market and service industry." Thesis, Edinburgh Napier University, 2017. http://researchrepository.napier.ac.uk/Output/979068.

Full text
Abstract:
Just as the Internet did before, mobile information technology (IT) is radically changing the way we interact with the world. Already there have been many innovative applications of this technology based on the unique attributes of mobiles. IT companies and IT service providers rely on meaningful and provable information about the influence of technical possibilities and the views of IT experts on consumer needs. The purpose and objectives of this research are to investigate the influence of mobile IT solution design and architecture on consumer behaviour related to a specific business area – the German financial market and services industry. The main research question was: "What shapes the development of mobile IT applications?" In this research, the focus was on the technical context; that means the key drivers of technological development, as they are named (in alphabetical order): efficiency, engagement, flexibility, security, simplicity, and visibility. The literature review identified the factors influencing technology development and related these to consumer behaviour theory. The research methodology is based on the phenomenological approach, in which the 'lived' experiences were described from the perspectives of the interviewees. Qualitative data were gathered related to the key drivers to understand what kind of influence factors are taking effect. Based on the assumption that IT experts would provide essential and significant inputs regarding the technical aspects, 18 semi-structured interviews were conducted. From the in-depth expert interviews, the key elements (variables) were determined and a conceptual framework was evolved with respect to the literature and with the aim of answering the research question. Based on the findings from field data, the framework forms a foundation for a retrospective analysis to study the influence factors and to reveal remarkably consistent patterns that influence the development of mobile solutions. Furthermore, the proposed framework provides the basis to describe the effect of technological development on an existing information system theory. The thesis closes the gap left by the lack of a technical point of view in recent literature. This research identified consumer opinions and behaviours from the perspective of IT experts with regard to the perceived values and usability of mobile IT. The contribution to practice is that this study brings together the technical viewpoint and the viewpoints of consumers. In addition, this study provides a set of recommendations to clarify interactions between the architecture and design of mobile IT and consumer behaviour.
APA, Harvard, Vancouver, ISO, and other styles
50

Cross, David M. "Usefulness of compile-time restructuring of large grain data flow programs in throughput-critical applications." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA274915.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering and M.S. in Computer Science) Naval Postgraduate School, September 1993.
Thesis advisor(s): Shridhar B. Shukla ; Amr Zaky. "September 1993." Cover title: Usefulness of ... of LGDF programs ... Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
