Dissertations / Theses on the topic 'Input, Output and Data Devices'

Consult the top 50 dissertations / theses for your research on the topic 'Input, Output and Data Devices.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Romeike, Ralf. "Output statt Input." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2013/6431/.

Full text
Abstract:
The output orientation that has been discussed for years in computer science education in connection with educational standards will, in the medium term, also become binding for university teaching. This change can be seen as an opportunity to counteract current problems in computer science teaching in a targeted way. Based on the theory of constructive alignment, it is proposed that, in connection with the output orientation, the intended competencies, learning activities and assessment be aligned with one another. In addition, student teachers benefit from the experience they gain in their own learning process in dealing with competencies: how these are formulated, developed and assessed. Requirements for the formulation of competencies are examined, supported with examples, and possibilities for their classification are suggested. An exchange within the departments and subject didactics about the individually defined competencies is proposed in order to enrich the discussion on university-level teaching.
APA, Harvard, Vancouver, ISO, and other styles
2

Löfving, Erik. "Organizing physical flow data : from input-output tables to data warehouses /." Linköping : Dept. of Mathematics, Univ, 2005. http://www.bibl.liu.se/liupubl/disp/disp2005/stat5s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

McLaughlin, Anne Collins. "Attentional demands on input devices in a complex task." Thesis, Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/30305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hernández, Correa Evelio. "Control of nonlinear systems using input-output information." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/11176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Garriga, Berga Carles. "A New Approach to the Synthesis of Fuzzy Systems from Input-Output Data." Doctoral thesis, Universitat Ramon Llull, 2005. http://hdl.handle.net/10803/9147.

Full text
Abstract:
Fuzzy logic has been applied successfully to systems modeling for decades. One of its main advantages is that it provides an understandable knowledge representation. Nevertheless, most investigations have focused their efforts on achieving accurate models and, in doing so, have neglected the linguistic capabilities of fuzzy logic.

This thesis investigates the issues related to intelligible fuzzy models: although it has long been established that fuzzy logic can produce models that are optimal in terms of error (a fuzzy model is in fact a universal approximator), only a few investigators have focused their efforts on achieving truly intelligible models, even at the cost of some accuracy.

In this work we propose a complete methodology for finding an intelligible fuzzy model in a local manner (rule by rule) from input-output data. The method determines the number and position of the necessary fuzzy sets as well as the linguistic rules related to them. For this purpose we have developed a hierarchical process comprising several steps and techniques, some of which are original contributions.

The resulting method is very simple and intelligible. It produces the final models at a low computational cost and, furthermore, allows its different options to be tuned depending on the nature of the problem and the characteristics of the users.

In this thesis we explain the whole methodology and illustrate its advantages (but also its problems) with several examples, most of which are benchmarks.
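
As a rough, generic illustration of the underlying idea of deriving linguistic rules from input-output data (not the thesis's own hierarchical, local method), the following sketch follows a Wang-Mendel-style rule-generation scheme; the membership functions, partition counts and example data are all assumptions made for the example.

    # Generic sketch of extracting linguistic rules from input-output samples,
    # in the spirit of Wang-Mendel rule generation (illustrative only).
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                     (c - x) / (c - b + 1e-12)), 0.0)

    def partitions(lo, hi, n):
        """n evenly spaced triangular fuzzy sets covering [lo, hi]."""
        centers = np.linspace(lo, hi, n)
        step = centers[1] - centers[0]
        return [(c - step, c, c + step) for c in centers]

    def best_label(value, sets):
        """Index of the fuzzy set with the highest membership for a crisp value."""
        return int(np.argmax([tri(value, *s) for s in sets]))

    def extract_rules(X, y, n_sets=5):
        """One rule per sample: IF x1 is A_i AND x2 is A_j ... THEN y is B_k.
        Conflicting rules (same antecedent) keep the most frequent consequent."""
        in_sets = [partitions(X[:, j].min(), X[:, j].max(), n_sets) for j in range(X.shape[1])]
        out_sets = partitions(y.min(), y.max(), n_sets)
        rules = {}
        for xi, yi in zip(X, y):
            antecedent = tuple(best_label(v, s) for v, s in zip(xi, in_sets))
            rules.setdefault(antecedent, []).append(best_label(yi, out_sets))
        return {a: max(set(cs), key=cs.count) for a, cs in rules.items()}

    # Example: rules approximating y = x1 + x2 on random data
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 2))
    print(extract_rules(X, X[:, 0] + X[:, 1]))
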
APA, Harvard, Vancouver, ISO, and other styles
6

Bailey, Alastair S. "The estimation of input-output coefficients for agriculture from whole farm accounting data." Thesis, University of Reading, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Welmers, Laura Hazel. "The implementation of an input/output consistency checker for a requirements specification document." Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9889.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

YAMAMOTO, Shuichiro. "Reconstructing Data Flow Diagrams from Structure Charts Based on the Input and Output Relationship." Institute of Electronics, Information and Communication Engineers, 1995. http://hdl.handle.net/2237/15017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yang, Shaoshi. "Detection for multiple-input multiple-output systems : probabilistic data association and semidefinite programming relaxation." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/360710/.

Full text
Abstract:
As a highly effective physical-layer interference management technique, the joint detection of a vector of non-orthogonal information-bearing symbols simultaneously transmitted over multiple-input multiple-output (MIMO) channels is of fundamental importance for high-throughput digital communications. This is because the generic mathematical model of MIMO detection underpins a wide range of relevant applications including (but not limited to) the equalization of dispersive band-limited channels imposing intersymbol interference (ISI), the multiuser detection (MUD) in code-division multiple-access (CDMA) systems and the multi-stream detection for multiple-antenna based spatial-division multiplexing (SDM) systems. With the evolution of wireless networks, the “virtual MIMO” concept was conceived, which is also described by the generic mathematical MIMO model. MIMO detection becomes even more important, because the achievable performance of spectrum-efficient wireless networks is typically interference-limited, rather than noise-limited.

In this thesis, a pair of detection methods that are well-suited for large-scale MIMO systems are investigated. The first one is the probabilistic data association (PDA) algorithm, which is essentially an interference-modelling approach based on iterative Gaussian approximation. The second one is the semidefinite programming (SDP) relaxation based approach, which approximates the optimal maximum likelihood (ML) detection problem by a convex optimization problem. The main advantage of both methods is that they impose a moderate computational complexity that increases as a polynomial function of the problem size, while providing competitive performance.

The contributions of this thesis can be broadly categorized into two groups. The first group is related to the design of virtually antipodal (VA) detection of rectangular M-ary quadrature amplitude modulation (M-QAM) symbols transmitted in SDM-MIMO systems. As a foundation, in the first parts of Chapter 2 and Chapter 3 the rigorous mathematical relationship between the vector space of transmitted bits and that of transmitted rectangular M-QAM symbols is investigated. Both linear and nonlinear bit-to-symbol mappings are considered. It is revealed that the two vector spaces are linked by linear/quasi-linear transformations, which are explicitly characterized by certain transformation matrices. This formulation may potentially be applicable to many signal processing problems of wireless communications. For example, when used for the detection of rectangular M-QAM symbol vectors, it enables us to transform the conventional three-step “signal-to-symbol-to-bits” decision process into a direct “signal-to-bits” decision process. More specifically, based on the linear VA transformation, in Chapter 2 we propose a unified bit-based PDA (B-PDA) detection method for linear natural mapping aided rectangular M-QAM symbols transmitted in SDM-MIMO systems. We show that the proposed linear natural mapping based B-PDA approach attains an improved detection performance, despite dramatically reducing the computational complexity in contrast to the conventional symbol-based PDA detector. Furthermore, in Chapter 3 a quasi-linear VA transformation based generalized low-complexity semidefinite programming relaxation (SDPR) detection approach is proposed for Gray-coded rectangular M-QAM signalling over MIMO channels. Compared to the linear natural mapping based B-PDA of Chapter 2, the quasi-linear VA transformation based SDPR method is capable of directly deciding on the information bits of the ubiquitous Gray-mapping aided rectangular M-QAM by decoupling the M-QAM constellation into several 4-QAM constellations. Moreover, it may be readily combined with the low-complexity bit-flipping based “hill climbing” technique for exploiting the unequal error protection (UEP) property of rectangular M-QAM, and the resultant VA-SDPR detector achieves the best bit-error rate (BER) performance among the known SDPR-based MIMO detectors conceived for high-order QAM constellations, while still maintaining the same order of polynomial-time worst-case computational complexity. Additionally, we reveal that the linear natural mapping based VA detectors attain the same performance provided by the binary reflected Gray mapping based VA detectors, but the former are simpler to implement. Therefore, unless there are other constraints that require using the nonlinear Gray mapping, it is preferable to use the linear natural mapping rather than the Gray mapping when the VA detectors are used in uncoded MIMO systems.

The second group explores the application of the PDA-aided detectors in some more sophisticated systems that are of great interest to the wireless research community. In particular, the design of iterative detection and decoding (IDD) schemes relying on the proposed low-complexity PDA methods is investigated for the turbo-coded MIMO systems in Chapters 4 and 5. It has conventionally been regarded that the existing PDA algorithms output the estimated symbol-wise a posteriori probabilities (APPs) as soft information. In Chapters 4 and 5, however, we demonstrate that these probabilities are not the true APPs in the rigorous mathematical sense, but a type of nominal APPs, which are unsuitable for the classic architecture of IDD receivers. Moreover, our study shows that the known methods of calculating the bit-wise extrinsic logarithmic likelihood ratios (LLRs) are no longer applicable to the conventional PDA based methods when detecting M-ary modulation symbols. Additionally, the existing PDA based MIMO detectors typically operate purely in the probabilistic domain. Therefore, the existing PDA methods are not readily applicable to IDD receivers. To overcome this predicament, in Chapter 4 and Chapter 5 we propose the approximate Bayes’ theorem based logarithmic domain PDA (AB-Log-PDA) and the exact Bayes’ theorem based logarithmic domain PDA (EB-Log-PDA) detectors, respectively. We present the approaches of calculating the bit-wise extrinsic LLRs for both the AB-Log-PDA and the EB-Log-PDA, which makes them well-suited for IDD receivers. Furthermore, we demonstrate that invoking inner iterations within the PDA algorithms – which is common practice in PDA-aided uncoded MIMO systems – would actually degrade the IDD receiver’s performance, despite significantly increasing its overall computational complexity. Additionally, we investigate the relationship between the extrinsic LLRs of the proposed EB-Log-PDA and of the AB-Log-PDA. It is also shown that both the proposed AB-Log-PDA- and the EB-Log-PDA-based IDD schemes dispensing with any inner PDA iterations are capable of achieving a performance comparable to that of the optimal maximum a posteriori (MAP) detector based IDD receiver in the scenarios considered, despite their significantly lower computational complexity.

Finally, in Chapter 6, a base station (BS) cooperation aided distributed soft reception scheme using the symbol-based PDA algorithm and soft combining (SC) is proposed for the uplink of multiuser multicell MIMO systems. The realistic 19-cell hexagonal cellular model relying on radical unity frequency reuse (FR) is considered, and local cooperation based message passing is used instead of a global message passing chain for the sake of reducing the backhaul traffic. We show that despite its moderate complexity and backhaul traffic, the proposed distributed PDA (DPDA) aided SC (DPDA-SC) reception scheme significantly outperforms the conventional non-cooperative benchmarkers. Furthermore, since only the index of the quantized converged soft information has to be exchanged between collaborative BSs for SC, the proposed DPDA-SC scheme is relatively robust to the quantization errors of the soft information exchanged. As an appealing benefit, the backhaul traffic is dramatically reduced at the cost of only negligible performance degradation.
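
For context, the maximum likelihood detection problem that SDP relaxation approaches of this kind address can be stated in the standard textbook form below, here for the antipodal (+1/-1) symbols to which a virtually antipodal transformation reduces the M-QAM constellation; the notation (y for the received vector, H for the channel matrix, N for the number of bits) is assumed here and is not taken from the thesis.

    \hat{\mathbf{b}}_{\mathrm{ML}} = \arg\min_{\mathbf{b} \in \{\pm 1\}^{N}} \|\mathbf{y} - \mathbf{H}\mathbf{b}\|^{2},
    \qquad
    \mathbf{L} = \begin{pmatrix} \mathbf{H}^{\mathrm{T}}\mathbf{H} & -\mathbf{H}^{\mathrm{T}}\mathbf{y} \\ -\mathbf{y}^{\mathrm{T}}\mathbf{H} & \mathbf{y}^{\mathrm{T}}\mathbf{y} \end{pmatrix}

With \tilde{\mathbf{b}} = (\mathbf{b}^{\mathrm{T}}, 1)^{\mathrm{T}} and \mathbf{X} = \tilde{\mathbf{b}}\tilde{\mathbf{b}}^{\mathrm{T}}, the standard semidefinite relaxation drops the rank-one requirement and solves

    \min_{\mathbf{X} \succeq 0} \; \operatorname{tr}(\mathbf{L}\mathbf{X}) \quad \text{s.t.} \quad X_{ii} = 1, \; i = 1, \dots, N+1,

which is a convex problem solvable in polynomial time; a bit decision is then recovered from the last column (or a rank-one approximation) of the optimal X.
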
APA, Harvard, Vancouver, ISO, and other styles
10

Holmes, William Paul. "Voice input for the disabled /." Title page, contents and summary only, 1987. http://web4.library.adelaide.edu.au/theses/09ENS/09ensh749.pdf.

Full text
Abstract:
Thesis (M. Eng. Sc.)--University of Adelaide, 1987.
Typescript. Includes a copy of a paper presented at TADSEM '85 --Australian Seminar on Devices for Expressive Communication and Environmental Control, co-authored by the author. Includes bibliographical references (leaves [115-121]).
APA, Harvard, Vancouver, ISO, and other styles
11

Rudianto, Rudi. "ANALYSIS & DESIGN OF IMPROVED MULTIPHASE INTERLEAVING DC-DC CONVERTER WITH INPUT-OUTPUT BYPASS CAPACITOR." DigitalCommons@CalPoly, 2009. https://digitalcommons.calpoly.edu/theses/102.

Full text
Abstract:
As the transistor count per chip in computer microprocessors surpasses one billion, the semiconductor industry has become more and more concerned with meeting processors' power requirements. This poses a design challenge for the power supply module, especially when the processor operates at a low voltage range. For example, the electrical requirement for the newest Intel microprocessors has exceeded 100 A with an input voltage of approximately 1 V. To overcome this problem, multiphase DC-to-DC converters encased in a voltage regulator module (VRM) have become the standard means of supplying power to computer microprocessors. This study proposes a new topology for the multiphase DC-to-DC converter for powering microprocessors. The new topology accepts a 12 V input and outputs a steady-state voltage of 1 V with a maximum output current of 40 A. The proposed topology aims to improve the input and output characteristics of the basic multiphase "buck" converter, along with improved efficiency, line regulation, and load regulation. To explore the feasibility of such a topology, open-loop computer simulations and closed-loop hardware tests were performed. For the open-loop simulation, OrCAD PSpice was used to verify design calculations and evaluate performance. The closed-loop hardware prototype was then tested to compare the circuit performance with the values obtained from simulation. The results show that the proposed topology improves efficiency, board size, output ripple, and regulation.
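
To make the quoted specification concrete, the sketch below evaluates the basic steady-state relations of an n-phase interleaved buck converter for the 12 V to 1 V, 40 A operating point mentioned in the abstract; the number of phases, the inductance and the switching frequency are assumed example values, not figures from the thesis.

    # Back-of-the-envelope relations for an n-phase interleaved buck converter.
    # Vin, Vout, Iout follow the specification quoted in the abstract; the phase
    # count, L and fsw are assumed example values, not figures from the thesis.
    Vin, Vout, Iout = 12.0, 1.0, 40.0
    n_phases = 4                 # assumed number of phases
    L, fsw = 0.47e-6, 300e3      # assumed per-phase inductance [H] and switching frequency [Hz]

    D = Vout / Vin                          # ideal duty cycle, about 0.083
    I_phase = Iout / n_phases               # average current per phase
    dI_L = (Vin - Vout) * D / (L * fsw)     # per-phase inductor current ripple (CCM)

    # Interleaving the phases by 360/n degrees makes the phase ripples partially
    # cancel at the output capacitor, which is the main motivation for the topology.
    print(f"D = {D:.3f}, I_phase = {I_phase:.1f} A, ripple = {dI_L:.1f} A peak-to-peak")
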
APA, Harvard, Vancouver, ISO, and other styles
12

Au, Kwok Shum. "Multiple-input multiple-output detection in wireless communications and data storage systems : performance and implementation issues /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?ECED%202007%20AU.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Lamprecht, Erwin Cornelius. "Multiple-Input Single-Output system identification techniques using the pebble bed modular reactor data / E.C. Lamprecht." Thesis, North-West University, 2004. http://hdl.handle.net/10394/639.

Full text
Abstract:
Models are used to describe the dynamic behaviour of a system, to predict future outputs of the system, and are useful when designing certain control schemes. An effective control scheme could be used to influence the dynamic behaviour of a system in such a way that it exhibits more desirable dynamic behaviour. A control system could be designed to increase the efficiency of a system. This makes it obvious that accurate models are very useful. The focus of this study is to use Multiple-Input Single-Output (MISO) system identification techniques on data obtained from the Flownex simulation package. These techniques are used to obtain a MISO mathematical model for the Pebble Bed Modular Reactor (PBMR). MISO system identification techniques are used in this project to study the effect that the inputs have on each other. This information helps in the understanding of processes within the system. The reason for studying the MISO systems and not the Single-Input Single-Output (SISO) systems is because the field of interest focuses on the effects the inputs have on each other.
Thesis (M.Ing. (Computer and Electronical Engineering))--North-West University, Potchefstroom Campus, 2005.
APA, Harvard, Vancouver, ISO, and other styles
14

Battenberg, Janice K. "Selective attention : a comparison of two computer input devices utilizing a traditional keyboard vs. a touch sensitive screen." Virtual Press, 1988. http://liblink.bsu.edu/uhtbin/catkey/546125.

Full text
Abstract:
The purpose of the study was to determine the efficacy of touch sensitive computer screens in focusing attention on a specific academic task. Forty nondelayed and forty delayed kindergarteners were compared as to their rates of task completion and performances on traditional computer keyboards versus touch sensitive screens. Two eight-cell repeated measures experimental designs were used to compare the selective attention process of the nondelayed and delayed pupils. The two independent variables manipulated in the study were the two types of computer input device and the two developmental levels of the subjects. The dependent variable consisted of the number of previously unlearned French number words mastered through four performance measures involving speed, computer recall, noncomputer recall and noncomputer recognition.

FINDINGS: As analyzed by a three-factor MANOVA, a significant difference in the rate of task completion was shown in favor of the touch screens for all subjects in touching the sequential letters of the alphabet. Although there appeared to be no significant differences in noncomputer recall and recognition post tests, a four-factor MANOVA verified significant differences in the subjects' computer recall post tests.

CONCLUSIONS: The data support the conclusion that the use of the touch sensitive screen facilitates the focus of attention (selective attention) on specific academic tasks and thus increases the rate of learning and the degree of integration of new information. The degree of compatibility between the learner and the computer input device is greater with touch screens than with traditional keyboards for both nondelayed and delayed kindergarteners. The speed of completing the sequential touching of the alphabet letters was significantly faster for the touch screen than for the traditional keyboard input. For mastery of information learned, the analyzed findings suggest a higher degree of recall for information learned through the touch screen intervention than for the same instructional tasks with keyboard input. As a result of this study, it is suggested that future research investigations will expand the use of computers beyond educational drill, repetition, and games. Future investigations into the relationships between cognitive processing and the individualization of CAI could involve various age ranges, exceptionalities, and developmental comparisons.
Department of Special Education
APA, Harvard, Vancouver, ISO, and other styles
15

Meterelliyoz, Kuyzu Melike. "Variance parameter estimation methods with re-use of data." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26490.

Full text
Abstract:
Thesis (Ph.D)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Co-Chair: Alexopoulos, Christos; Committee Co-Chair: Goldsman, David; Committee Member: Kim, Seong-Hee; Committee Member: Shapiro, Alexander; Committee Member: Spruill, Carl. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
16

Schweizer, Andreas. "Analysis and optimisation of stable matching in combined input and output queued switches." Western Australian Telecommunications Research Institute, 2009. http://theses.library.uwa.edu.au/adt-WU2009.0078.

Full text
Abstract:
Output queues in network switches are known to provide a suitable architecture for scheduling disciplines that need to provide quality of service (QoS) guarantees. However, today's memory technology is incapable of meeting the speed requirements. Combined input and output queued (CIOQ) switches have emerged as one alternative to address the problem of memory speed. When a switch of this architecture uses a stable matching algorithm to transfer packets across the switch fabric, an output queued (OQ) switch can be mimicked exactly with a speedup of only two. The use of a stable matching algorithm typically requires complex and time-consuming calculations to ensure the behaviour of an OQ switch is maintained. Stable matching algorithms are well studied in the area in which they originally appeared. However, little is presently known about how the stable matching algorithm performs in CIOQ switches and how key parameters are affected by switch size, traffic type and traffic load. Knowledge of how these conditions affect performance is essential to judge the practicability of an architecture and to provide useful information on how to design such switches. Until now, CIOQ switches were likely to be dismissed due to the high complexity of the stable matching algorithm when applied in other applications. However, the characteristics of a stable matching algorithm in a CIOQ switch have not been thoroughly analysed. The principal goal of this thesis is to identify the conditions the stable matching algorithm encounters in a CIOQ switch under realistic operational scenarios. This thesis provides accurate mathematical models based on Markov chains to predict the value of key parameters that affect the complexity and runtime of a stable matching algorithm in CIOQ switches. The applicability of the models is then backed up by simulations. The results of the analysis quantify critical operational parameters, such as the size and number of preference lists and runtime complexity. These provide detailed insights into switch behaviour and useful information for switch designs. Major conclusions to be drawn from this analysis include that the average values of the key parameters of the stable matching algorithm are feasibly small and do not strongly correlate with switch size, which is contrary to the behaviour of the stable matching algorithm in its original application. Furthermore, although these parameters have wide theoretical ranges, the mean values and standard deviations are found to be small under operational conditions. The results also suggest that the implementation becomes very versatile as the completion time of the stable matching algorithm is not strongly correlated with the network traffic type; that is, the runtime is minimally affected by the nature of the traffic.
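
The abstract does not reproduce the matching algorithm itself, so the following is a minimal, generic Gale-Shapley stable matching sketch in which input ports "propose" to output ports according to preference lists; in a CIOQ switch those lists would be derived from the queue state, which here is only an illustrative assumption.

    # Minimal Gale-Shapley stable matching sketch: input ports propose to output
    # ports according to preference lists. In a CIOQ switch such lists would be
    # derived from the queue state (this part is an illustrative assumption).
    def stable_matching(input_prefs, output_prefs):
        """input_prefs[i]  : list of outputs ordered from most to least preferred.
           output_prefs[o] : dict mapping input -> rank (lower is better)."""
        free_inputs = list(input_prefs)          # inputs not yet matched
        next_choice = {i: 0 for i in input_prefs}
        match = {}                               # output -> input
        while free_inputs:
            i = free_inputs.pop(0)
            o = input_prefs[i][next_choice[i]]
            next_choice[i] += 1
            if o not in match:
                match[o] = i
            elif output_prefs[o][i] < output_prefs[o][match[o]]:
                free_inputs.append(match[o])     # current partner becomes free
                match[o] = i
            else:
                free_inputs.append(i)            # proposal rejected, try next output
        return {i: o for o, i in match.items()}

    inputs = {0: [0, 1, 2], 1: [0, 2, 1], 2: [1, 0, 2]}
    outputs = {0: {0: 1, 1: 0, 2: 2}, 1: {0: 0, 1: 2, 2: 1}, 2: {0: 2, 1: 1, 2: 0}}
    print(stable_matching(inputs, outputs))
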
APA, Harvard, Vancouver, ISO, and other styles
17

Seevinck, Jennifer. "Emergence in interactive art." Thesis, University of Technology, Sydney, 2011.

Find full text
Abstract:
This thesis is concerned with creating and evaluating interactive art systems that facilitate emergent participant experiences. For the purposes of this research, interactive art is the computer based arts involving physical participation from the audience, while emergence is when a new form or concept appears that was not directly implied by the context from which it arose. This emergent ‘whole’ is more than a simple sum of its parts. The research aims to develop understanding of the nature of emergent experiences that might arise during participant interaction with interactive art systems. It also aims to understand the design issues surrounding the creation of these systems. The approach used is Practice-based, integrating practice, evaluation and theoretical research. Practice used methods from Reflection-in-action and Iterative design to create two interactive art systems: Glass Pond and +-now. Creation of +-now resulted in a novel method for instantiating emergent shapes. Both art works were also evaluated in exploratory studies. In addition, a main study with 30 participants was conducted on participant interaction with +-now. These sessions were video recorded and participants were interviewed about their experience. Recordings were transcribed and analysed using Grounded theory methods. Emergent participant experiences were identified and classified using a taxonomy of emergence in interactive art. This taxonomy draws on theoretical research. The outcomes of this Practice-based research are summarised as follows. Two interactive art systems, where the second work clearly facilitates emergent interaction, were created. Their creation involved the development of a novel method for instantiating emergent shapes and it informed aesthetic and design issues surrounding interactive art systems for emergence. A taxonomy of emergence in interactive art was also created. Other outcomes are the evaluation findings about participant experiences, including different types of emergence experienced and the coding schemes produced during data analysis.
APA, Harvard, Vancouver, ISO, and other styles
18

Heasman, Ray Edward. "The implementation of a core architecture for geophysical data acquisition." Thesis, Rhodes University, 2000. http://hdl.handle.net/10962/d1005256.

Full text
Abstract:
This thesis describes the design, development and implementation of the core hardware and software of a modular data acquisition system for geophysical data collection. The primary application for this system is the acquisition and realtime processing of seismic data captured in mines. This system will be used by a commercial supplier of seismic instrumentation, ISS International, as a base architecture for the development of future products. The hardware and software has been designed to be extendable and support distributed processing. The IEEE-1394 High Performance Serial Bus is used to communicate with other CPU modules or peripherals. The software includes a pre-emptive multitasking microkernel, an asynchronous mailbox-based message passing communications system, and a functional IEEE-1394 protocol stack. The reasons for the end design and implementation decisions are given, and the problems encountered in the development of this system are described. A critical assessment of the match between the requirements for the project and the functionality of the implementation is made.
APA, Harvard, Vancouver, ISO, and other styles
19

Tai, Yiyang. "Machine Learning Uplink Power Control in Single Input Multiple Output Cell-free Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279462.

Full text
Abstract:
This thesis considers the uplink of cell-free single input multiple output systems, in which the access points employ matched-filter reception. In this setting, our objective is to develop a scalable uplink power control scheme that relies only on large-scale channel gain estimates and is robust to changes in the environment. Specifically, we formulate the problem as max-min and max-product signal-to-interference ratio optimization tasks, which can be solved by geometric programming. Next, we study the performance of supervised and unsupervised learning approaches employing a feed-forward neural network. We find that both approaches perform close to the optimum achieved by geometric programming, while the unsupervised scheme avoids the pre-computation of training data that supervised learning would necessitate for every system or environment modification.
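
Schematically, the max-min power-control problem described in the abstract has the form below, where p_k is the transmit power of user k, p_max the power budget, sigma^2 the noise power and g_kj an effective large-scale gain; this compact SINR expression is a simplification of the matched-filter cell-free case, and the symbols are assumed notation rather than the thesis's own.

    \max_{p_1,\dots,p_K} \; \min_{k} \; \frac{g_{kk}\, p_k}{\sigma^{2} + \sum_{j \neq k} g_{kj}\, p_j}
    \quad \text{s.t.} \quad 0 < p_k \le p_{\max}, \;\; k = 1,\dots,K

Rewriting it in epigraph form (maximize t subject to SINR_k >= t for all k) turns each constraint into a posynomial inequality, which is what makes the problem amenable to geometric programming; the max-product variant replaces the minimum by the product of the SINRs.
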
APA, Harvard, Vancouver, ISO, and other styles
20

Shtarkalev, Bogomil Iliev. "Single data set detection for multistatic Doppler radar." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10556.

Full text
Abstract:
The aim of this thesis is to develop and analyse single data set (SDS) detection algorithms that can utilise the advantages of widely-spaced (statistical) multiple-input multiple-output (MIMO) radar to increase their accuracy and performance. The algorithms make use of the observations obtained from multiple space-time adaptive processing (STAP) receivers and focus on covariance estimation and inversion to perform target detection. One of the main interferers for a Doppler radar has always been the radar's own signal being reflected off the surroundings. The reflections of the transmitted waveforms from the ground and other stationary or slowly-moving objects in the background generate observations that can potentially raise false alarms. This creates the problem of searching for a target in both additive white Gaussian noise (AWGN) and highly-correlated (coloured) interference. Traditional STAP deals with the problem by using target-free training data to study this environment and build its characteristic covariance matrix. The data usually comes from range gates neighbouring the cell under test (CUT). In non-homogeneous or non-stationary environments, however, this training data may not reflect the statistics of the CUT accurately, which justifies the need to develop SDS methods for radar detection. The maximum likelihood estimation detector (MLED) and the generalised maximum likelihood estimation detector (GMLED) are two reduced-rank STAP algorithms that eliminate the need for training data when mapping the statistics of the background interference. The work in this thesis is largely based on these two algorithms.

The first work derives the optimal maximum likelihood (ML) solution to the target detection problem when the MLED and GMLED are used in a multistatic radar scenario. This application assumes that the spatio-temporal Doppler frequencies produced in the individual bistatic STAP pairs of the MIMO system are ideally synchronised. Therefore the focus is on providing the multistatic outcome to the target detection problem. It is shown that the derived MIMO detectors possess the desirable constant false alarm rate (CFAR) property. Gaussian approximations to the statistics of the multistatic MLED and GMLED are derived in order to provide a more in-depth analysis of the algorithms. The viability of the theoretical models and their approximations is tested against a numerical simulation of the systems.

The second work focuses on the synchronisation of the spatio-temporal Doppler frequency data from the individual bistatic STAP pairs in the multistatic MLED scenario. It expands the idea to a form that could be implemented in a practical radar scenario. To reduce the information shared between the bistatic STAP channels, a data compression method is proposed that extracts the significant contributions of the MLED likelihood function before transmission. To perform the inter-channel synchronisation, the Doppler frequency data is projected into the space of potential target velocities, where the multistatic likelihood is formed. Based on the expected structure of the velocity likelihood in the presence of a target, a modification to the multistatic MLED is proposed. It is demonstrated through numerical simulations that the proposed modified algorithm performs better than the basic multistatic MLED while having the benefit of reducing the data exchange in the MIMO radar system.
APA, Harvard, Vancouver, ISO, and other styles
21

Fischer, Elisabeth [Verfasser]. "Teaching Quality in Higher Education : A Field Study Investigating Effects between Input, Process, and Output Variables Using Multiple Data Sources / Elisabeth Fischer." Kassel : Universitätsbibliothek Kassel, 2019. http://d-nb.info/1201508843/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Abouzeid, Shadi. "A visual interactive grouping analysis tool (VIGAT) that takes mixed data types as input and provides visually interactive overlapping groups as output." Thesis, University of Strathclyde, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.401309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Chibesakunda, Mwelwa K. "A Methodology for Analyzing Power Consumption in Wireless Communication Systems." Thesis, University of Cape Town, 2004. http://pubs.cs.uct.ac.za/archive/00000102/.

Full text
Abstract:
Energy usage has become an important issue in wireless communication systems. The energy-intensive nature of wireless communication has spurred concern over how best systems can make the most use of this non-renewable resource. Research in energy-efficient design of wireless communication systems shows that one of its challenges is that the overall performance of the system depends, in a coupled way, on the different submodules of the system, i.e. antenna, power amplifier, modulation, error control coding, and network architecture. Network architecture implementation strategies offer protocol software implementors an opportunity to incorporate low-power strategies into the design of the network protocols used for data communication. This dissertation proposes a methodology that would allow a software protocol implementor to analyze the power consumption of a wireless communication system. The foundation of this methodology lies in the understanding of the formal specification of the wireless interface network architecture, which can be used to predict the performance of the system. By extending this hypothesis, a protocol implementor can use the formal specification to derive the power consumption behaviour of the wireless system during normal operation (transmission or reception of data). A high-level formalism like state-transition graphs can be used to track the protocol processing behaviour and to derive the associated continuous-time Markov chains. Because of their diversity, Markov reward models (MRM) are used to model the power consumption associated with the different states of a specified protocol layer. The models are solved analytically using the Mobius performance and dependability tool. Using the MRM accumulation and utilization measures, a profile of the power consumption is generated. Results from the experiments on the protocol layers show the individual power consumption and utilization of the different states as well as the accumulated power consumption of different protocol layers when compared. Ultimately, the results from the reward model solution can be used in the energy-efficient design of wireless communication systems. Lastly, in order to get an idea of how wireless communication device companies handle issues of power consumption, we consulted with the wireless module engineers at Siemens Communication South Africa and present our findings on current practices in energy-efficient protocol implementation.
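
As a toy illustration of the Markov reward idea described above, the sketch below attaches a power draw (reward rate) to each state of a small continuous-time Markov chain over protocol states and computes utilization and accumulated-energy measures; the states, transition rates and power figures are invented for the example, and the thesis itself solves its models with the Mobius tool rather than by hand.

    # Toy Markov reward model: a 3-state CTMC (idle, receive, transmit) with a
    # power draw attached to each state. States, rates and powers are invented.
    import numpy as np

    states = ["idle", "receive", "transmit"]
    # Infinitesimal generator Q (rows sum to zero), rates in 1/s.
    Q = np.array([[-3.0,  2.0,  1.0],
                  [ 5.0, -6.0,  1.0],
                  [ 4.0,  1.0, -5.0]])
    power = np.array([0.01, 0.9, 1.4])   # reward rate per state [W]

    # Stationary distribution: solve pi @ Q = 0 with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.append(np.zeros(len(states)), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    mean_power = float(pi @ power)        # utilization-weighted power [W]
    energy_1h = mean_power * 3600         # accumulated reward over one hour [J]
    for s, p in zip(states, pi):
        print(f"{s:8s} utilisation {p:.2f}")
    print(f"mean power {mean_power:.2f} W, energy over 1 h {energy_1h:.0f} J")
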
APA, Harvard, Vancouver, ISO, and other styles
24

Li, Linyu. "Economic growth in Sweden, 2000-2010 : The dot-com bubble and the financial crisis." Thesis, Högskolan Dalarna, Nationalekonomi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:du-14883.

Full text
Abstract:
Economic growth is the increase in the inflation-adjusted market value of the goods and services produced by an economy over time. The total output is the quantity of goods or services produced in a given time period within a country. Sweden was affected by two crises during the period 2000-2010: a dot-com bubble and a financial crisis. How did these two crises affect economic growth?

The changes of domestic output can be separated into four parts: changes in intermediate demand, final domestic demand, export demand and import substitution. The main purpose of this article is to analyze economic growth during the period 2000-2010, with focus on the dot-com bubble at the beginning of the period (2000-2005) and the financial crisis at the end of the period (2005-2010). The methodology used is the structural decomposition method.

This investigation shows that the main contribution to the Swedish total domestic output increase in both the period 2000-2005 and the period 2005-2010 was the effect of domestic demand. In the period 2005-2010, the financial crisis weakened the effect of exports. The output of the primary sector went from a negative change to a positive one, explained mainly by strong export expansion. In the secondary sector, exports had the largest effect in the period 2000-2005; nevertheless, domestic demand and the import ratio had more effect during the financial-crisis period. Lastly, in the tertiary sector, domestic demand mainly explains the output growth over the whole period 2000-2010.
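
For reference, the structural decomposition method mentioned above starts from the Leontief quantity model and splits the change in output between two periods into a structure effect and a demand effect; in standard input-output notation (assumed here, not the thesis's own):

    \mathbf{x} = (\mathbf{I} - \mathbf{A})^{-1}\mathbf{f} = \mathbf{L}\mathbf{f},
    \qquad
    \Delta\mathbf{x} = \mathbf{L}_{1}\mathbf{f}_{1} - \mathbf{L}_{0}\mathbf{f}_{0}
    = \Delta\mathbf{L}\,\bar{\mathbf{f}} + \bar{\mathbf{L}}\,\Delta\mathbf{f},

where A is the technical-coefficient matrix, f final demand, L the Leontief inverse, and the bars denote averages of the two periods. Splitting the demand term into domestic final demand and exports, and treating changes in import coefficients and in the intermediate-input structure separately, yields the four components listed in the abstract.
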
APA, Harvard, Vancouver, ISO, and other styles
25

Thanawala, Rajiv P. "Development of G-net (a software system for graph theory & algorithms) with special emphasis on graph rendering on raster output devices." Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834618.

Full text
Abstract:
In this thesis we will describe the development of software functions that render graphical and textual information of G-Net (a software system for graph theory & algorithms) onto various raster output devices. Graphs are mathematical structures that are used to model very diverse systems such as networks, VLSI design, chemical compounds and many other systems where relations between objects play an important role. The study of graph theory problems requires many manipulative techniques. A software system (such as G-Net) that can automate these techniques will be a very good aid to graph theorists and professionals. The project G-Net, headed by Prof. Kunwarjit S. Bagga of the computer science department, has the goal of developing a software system having three main functions. These are: learning basics of graph theory, drawing/manipulating graphs and executing graph algorithms. The thesis will begin with an introduction to graph theory followed by a brief description of the evolution of the G-Net system and its current status. To print on various printers, the G-Net system translates all the printable information into PostScript files. A major part of this thesis concentrates on this translation. To begin with, the necessity of a standard format for the printable information is discussed. The choice of PostScript as a standard is then justified. Next, the design issues of the translator and the translation algorithm are discussed in detail. The translation process for each category of printable information is explained. Issues of printing these PostScript files onto different printers are dealt with at the end.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
26

Amer, Taher. "Evaluating Swiftpoint as a Mobile Device for Direct Manipulation Input." Thesis, University of Canterbury. Computer Science and Software Engineering, 2006. http://hdl.handle.net/10092/1123.

Full text
Abstract:
Swiftpoint is a promising new computer pointing device that is designed primarily for mobile computer users in constrained spaces. Swiftpoint has many advantages over current pointing devices: it is small, ergonomic, has a digital ink mode, and can be used over a flat keyboard. This thesis aids the development of Swiftpoint by formally evaluating it against two of the most common pointing devices used with today's mobile computers: the touchpad and the mouse. Two laws commonly used in pointing device evaluations, Fitts' Law and the Steering Law, were used to evaluate Swiftpoint. Results showed that Swiftpoint was faster and more accurate than the touchpad. The performance of the mouse was, however, superior to both the touchpad and Swiftpoint. Experimental results were reflected in participants' choice of the mouse as their preferred pointing device. However, some participants indicated that their choice was based on their familiarity with the mouse. None of the participants chose the touchpad as their preferred device.
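
For reference, the two laws named in the abstract are usually written as below, where D is the distance to the target, W its width (or the tunnel width for steering), and a, b are empirically fitted, device-specific constants; the exact variants used in the thesis are not specified here.

    MT_{\text{Fitts}} = a + b \log_{2}\!\left(\frac{D}{W} + 1\right)
    \qquad
    MT_{\text{Steering}} = a + b\,\frac{D}{W}

Devices are then compared by fitting a and b from measured movement times, and often by throughput, i.e. the index of difficulty log2(D/W + 1) divided by the movement time.
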
APA, Harvard, Vancouver, ISO, and other styles
27

Öman, Andreas. "CO2-utsläpp och konsumtion : Förutsättningar för att påvisa och minska indirekta CO2-utsläpp i den enskilde individens konsumtion av varor." Thesis, Linköping University, Department of Water and Environmental Studies, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11308.

Full text
Abstract:

In the project "Klimat.nu – Den stora miljöutmaningen" (2001-2002), IVL Swedish Environmental Research Institute developed a web-based tool to inform and guide the individual on the climate issue. The purpose of the tool is to quantify fossil carbon dioxide emissions resulting from the individual's energy consumption: household electricity, vehicle fuel, and so on. The tool also aims to advise the individual on how to reduce CO2 emissions by changing their way of living. IVL Swedish Environmental Research Institute wants to extend the calculation tool to also cover consumer goods. This study has examined how different types of goods consumption by individual consumers can be linked to CO2 emissions, what consumer advice it is reasonable to give in order to achieve emission reductions, and how such reductions can be quantified.

The study applied a systems-analysis approach, and the empirical material consisted of environmentally extended input-output data (MIOA). Data were collected from Statistics Sweden (SCB) via the data and analysis pages of the environmental accounts. The collected data describe the emissions that occur in a product's life cycle up to and including distribution to the shop (indirect emissions). It is important, however, to keep the full life cycle of product classes in mind, so that attempts to reduce the consumer's indirect CO2 emissions do not lead to increased total emissions. Data uncertainties were identified which show that the collected data underestimate products' indirect CO2 emissions. The data rest on the assumption that Sweden would have produced all goods that are imported; on average, about 69% of products' indirect CO2 emissions are of foreign origin, and these countries usually have higher emission intensities in their production structures than Sweden. In Sweden, the data are only available with a delay of about three years. Despite these uncertainties, in their current form the data represent a lower bound on the indirect CO2 emissions of different product classes.

To make the collected data usable in the calculation tool, a methodology was tested in which emission intensities were calculated. Emission intensities satisfy the requirement that different types of goods consumption by individual consumers can be linked to their CO2 emissions. In the calculation tool, this means that emission intensities are integrated and, together with a given sum of money, form the basis for calculating the individual's indirect CO2 emissions. From the individual's perspective the methodology is particularly appealing because money is used as the unit of calculation, a unit that individuals usually find easy to relate to. The use of emission intensities makes it possible to quantify an emission reduction when the individual spends a sum of money on a product class with a lower emission intensity instead of one with a higher intensity. With money as the unit, the "rebound effect" can also be avoided.

Because of uncertainties in the underlying data, the study cannot show that changed consumption of goods leads to an actual emission reduction. The greatest likelihood of achieving an actual reduction, however, is if the individual is advised to reallocate a sum of money from one product class to another between which there are large quantitative differences in emission intensities.
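
A minimal sketch of the spending-times-intensity bookkeeping described above: indirect CO2 is obtained by multiplying the money spent on a product class by that class's emission intensity, and a reduction is quantified by reallocating the same sum of money to a class with a lower intensity. The intensity figures below are placeholders, not values from SCB.

    # Sketch of the spending-based accounting described in the abstract:
    # indirect CO2 = money spent x emission intensity of the product class.
    # The intensity figures below are placeholders, not values from SCB.
    intensities_kg_per_sek = {          # kg CO2 per SEK spent (illustrative)
        "clothing": 0.04,
        "electronics": 0.06,
        "furniture": 0.03,
    }

    def indirect_co2(spending_sek):
        """spending_sek: dict mapping product class -> SEK spent in the period."""
        return sum(spending_sek[c] * intensities_kg_per_sek[c] for c in spending_sek)

    before = {"clothing": 1000, "electronics": 2000}
    after = {"clothing": 1000, "electronics": 1000, "furniture": 1000}  # same total spend
    saving = indirect_co2(before) - indirect_co2(after)
    print(f"reallocating 1000 SEK saves about {saving:.0f} kg CO2 (lower-bound data)")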

APA, Harvard, Vancouver, ISO, and other styles
28

Öman, Andreas. "CO2-utsläpp och konsumtion : Förutsättningar för att påvisa och minska indirekta CO2-utsläpp i den enskilde individens konsumtion av varor." Thesis, Linköpings universitet, Tema vatten i natur och samhälle, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Olwert, Craig Thomas. "A Computable General Equilibrium Model of the City with Optimization of its Transportation Network: Impacts of Changes in Technology, Preferences, and Policy." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1269369926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Cho, Hyunkyoo. "Efficient variable screening method and confidence-based method for reliability-based design optimization." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/4594.

Full text
Abstract:
The objectives of this study are (1) to develop an efficient variable screening method for reliability-based design optimization (RBDO) and (2) to develop a new RBDO method incorporating the confidence level for limited input data problems. The current research effort involves: (1) development of a partial output variance concept for variable screening; (2) development of an effective variable screening sequence; (3) development of an estimation method for the confidence level of a reliability output; and (4) development of a design sensitivity method for the confidence level.

In the RBDO process, surrogate models are frequently used to reduce the number of simulations because analysis of a simulation model takes a great deal of computational time. On the other hand, to obtain accurate surrogate models, we have to limit the dimension of the RBDO problem and thus mitigate the curse of dimensionality. Therefore, it is desirable to develop an efficient and effective variable screening method for reduction of the dimension of the RBDO problem. In this study, it is found that output variance is critical for identifying important variables in the RBDO process. A partial output variance, which is an efficient approximation method based on the univariate dimension reduction method (DRM), is proposed to calculate output variance efficiently. For variable screening, the variables that have larger partial output variances are selected as important variables. To determine important variables, hypothesis testing is used so that possible errors are contained at a user-specified error level. Also, an appropriate number of samples is proposed for calculating the partial output variance. Moreover, a quadratic interpolation method is studied in detail to calculate output variance efficiently. Using numerical examples, the performance of the proposed variable screening method is verified. It is shown that the proposed method finds important variables efficiently and effectively.

The reliability analysis and the RBDO require an exact input probabilistic model to obtain an accurate reliability output and RBDO optimum design. However, often only limited input data are available to generate the input probabilistic model in practical engineering problems. The insufficient input data induce uncertainty in the input probabilistic model, and this uncertainty forces the RBDO optimum to lose its confidence level. Therefore, it is necessary to consider the reliability output, which is defined as the probability of failure, to follow a probability distribution. The probability of the reliability output is obtained with consecutive conditional probabilities of input distribution type and parameters using the Bayesian approach. The approximate conditional probabilities are obtained under reasonable assumptions, and Monte Carlo simulation is applied to practically calculate the probability of the reliability output. A confidence-based RBDO (C-RBDO) problem is formulated using the derived probability of the reliability output. In the C-RBDO formulation, the probabilistic constraint is modified to include both the target reliability output and the target confidence level. Finally, the design sensitivity of the confidence level, which is the new probabilistic constraint, is derived to support an efficient optimization process. Using numerical examples, the accuracy of the developed design sensitivity is verified and it is confirmed that C-RBDO optimum designs incorporate appropriate conservativeness according to the given input data.
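
As a much-simplified illustration of screening variables by their contribution to output variance, the sketch below perturbs one input at a time about the mean point and ranks the inputs by the resulting output variance; the thesis's DRM-based partial output variance, its sample-size rule and the hypothesis test are not reproduced here, and the performance function and distributions are assumptions.

    # Simplified illustration of screening variables by a univariate contribution
    # to output variance: sweep one input at a time about the mean while holding
    # the others fixed. This is only in the spirit of the partial-output-variance
    # idea; the thesis's DRM-based estimator and hypothesis test are not reproduced.
    import numpy as np

    def partial_variances(f, means, stds, n=64):
        """Approximate per-variable output variance of f(x) around the mean point."""
        contributions = []
        for i in range(len(means)):
            x = np.tile(means, (n, 1))
            x[:, i] = np.random.normal(means[i], stds[i], n)   # perturb variable i only
            contributions.append(np.var(f(x)))
        return np.array(contributions)

    # Example performance function: x2 dominates, x3 is nearly inert.
    f = lambda x: 5.0 * x[:, 0] + 20.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]
    pv = partial_variances(f, means=np.array([1.0, 1.0, 1.0]), stds=np.array([0.1, 0.1, 0.1]))
    order = np.argsort(pv)[::-1]
    print("importance ranking (most to least):", order, "partial variances:", np.round(pv, 3))
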
APA, Harvard, Vancouver, ISO, and other styles
31

Holz, Christian. "3D from 2D touch." Phd thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6779/.

Full text
Abstract:
While interaction with computers used to be dominated by mice and keyboards, new types of sensors now allow users to interact through touch, speech, or using their whole body in 3D space. These new interaction modalities are often referred to as "natural user interfaces" or "NUIs." While 2D NUIs have experienced major success on billions of mobile touch devices sold, 3D NUI systems have so far been unable to deliver a mobile form factor, mainly due to their use of cameras. The fact that cameras require a certain distance from the capture volume has prevented 3D NUI systems from reaching the flat form factor mobile users expect. In this dissertation, we address this issue by sensing 3D input using flat 2D sensors. The systems we present observe the input from 3D objects as 2D imprints upon physical contact. By sampling these imprints at very high resolutions, we obtain the objects' textures. In some cases, a texture uniquely identifies a biometric feature, such as the user's fingerprint. In other cases, an imprint stems from the user's clothing, such as when walking on multitouch floors. By analyzing from which part of the 3D object the 2D imprint results, we reconstruct the object's pose in 3D space. While our main contribution is a general approach to sensing 3D input on 2D sensors upon physical contact, we also demonstrate three applications of our approach. (1) We present high-accuracy touch devices that allow users to reliably touch targets that are a third of the size of those on current touch devices. We show that different users and 3D finger poses systematically affect touch sensing, which current devices perceive as random input noise. We introduce a model for touch that compensates for this systematic effect by deriving the 3D finger pose and the user's identity from each touch imprint. We then investigate this systematic effect in detail and explore how users conceptually touch targets. Our findings indicate that users aim by aligning visual features of their fingers with the target. We present a visual model for touch input that eliminates virtually all systematic effects on touch accuracy. (2) From each touch, we identify users biometrically by analyzing their fingerprints. Our prototype Fiberio integrates fingerprint scanning and a display into the same flat surface, solving a long-standing problem in human-computer interaction: secure authentication on touchscreens. Sensing 3D input and authenticating users upon touch allows Fiberio to implement a variety of applications that traditionally require the bulky setups of current 3D NUI systems. (3) To demonstrate the versatility of 3D reconstruction on larger touch surfaces, we present a high-resolution pressure-sensitive floor that resolves the texture of objects upon touch. Using the same principles as before, our system GravitySpace analyzes all imprints and identifies users based on their shoe soles, detects furniture, and enables accurate touch input using feet. By classifying all imprints, GravitySpace detects the users' body parts that are in contact with the floor and then reconstructs their 3D body poses using inverse kinematics. GravitySpace thus enables a range of applications for future 3D NUI systems based on a flat sensor, such as smart rooms in future homes. We conclude this dissertation by projecting into the future of mobile devices. Focusing on the mobility aspect of our work, we explore how NUI devices may one day augment users directly in the form of implanted devices.
APA, Harvard, Vancouver, ISO, and other styles
32

Wu, Chih-Sung. "Designing tangible tabletop interactions to support the fitting process in modeling biological systems." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/50128.

Full text
Abstract:
This thesis explores how to physically interact with computational models on an interactive tabletop display. The research began with the design and implementation of several prototype systems. Work on these prototypes showed that tangible interactions on interactive tabletops have the potential to be more effective for some tasks than traditional interfaces that use screen displays, keyboards and mice, and it shaped the research to focus on the effectiveness of adopting tangible interactions on interactive tabletops. To substantiate the thesis claims, this thesis develops an interactive tabletop application, Pathways, to support the fitting process in modeling biological systems. Pathways combines the concepts of Tangible User Interfaces (TUIs) and tabletop visualizations. It realizes real-time simulation of models, provides comparisons of simulation results with experimental data on the tabletop, and visualizes the simulation of the model with animations. In addition, Pathways introduces a new visualization to help systems biologists quickly compare simulation results. This thesis provides quantitative and qualitative evaluation results for Pathways. The evidence showed that using tangible interactions to control numerical values is practical. The results also showed that, under experimental conditions, users achieved better and faster fitting results with Pathways than the control group, which used the systems biologists' current tools. The results further suggested that it is possible to recruit non-experts to perform the fitting tasks that are usually done by professional systems biologists.
APA, Harvard, Vancouver, ISO, and other styles
33

Giljum, Stefan, Hanspeter Wieland, Franz Stephan Lutter, Nina Eisenmenger, Heinz Schandl, and Anne Owen. "The impacts of data deviations between MRIO models on material footprints: A comparison of EXIOBASE, Eora, and ICIO." Wiley, 2019. http://dx.doi.org/10.1111/jiec.12833.

Full text
Abstract:
In various international policy processes, such as the UN Sustainable Development Goals, an urgent demand for robust consumption-based indicators of material flows, or material footprints (MFs), has emerged in recent years. Yet MFs for national economies diverge when calculated with different Global Multiregional Input-Output (GMRIO) databases, constituting a significant barrier to a broad policy uptake of these indicators. The objective of this paper is to quantify the impact of data deviations between GMRIO databases on the resulting MF. We use two methods, structural decomposition analysis and structural production layer decomposition, and apply them for a pairwise assessment of three GMRIO databases, EXIOBASE, Eora, and the OECD Inter-Country Input-Output (ICIO) database, using an identical set of material extensions. Although all three GMRIO databases agree on the directionality of footprint results, that is, whether a country's final demand depends on net imports of raw materials from abroad or makes it a net exporter, they sometimes show significant differences in the level and composition of material flows. Decomposing the effects of the Leontief matrices (economic structures), we observe that a few sectors at the very first stages of the supply chain, that is, raw material extraction and basic processing, explain 60% of the total deviations stemming from the technology matrices. We conclude that further development of methods to align results from GMRIOs, in particular for material-intensive sectors and supply chains, should be an important research priority. This will be vital to strengthen the uptake of demand-based material flow indicators in the resource policy context.
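For readers unfamiliar with how such footprints are computed, the following is a minimal sketch (not the paper's code, and with invented toy numbers rather than EXIOBASE, Eora, or ICIO data) of the Leontief calculation that a material footprint rests on: the total output needed to serve final demand is obtained from the Leontief inverse and then weighted by material-extraction intensities.

```python
# Minimal sketch: consumption-based material footprint from a toy input-output
# system via the Leontief inverse. A, y and e are made-up illustrative numbers.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],      # technology matrix: inter-sector requirements
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.20]])
y = np.array([100.0, 50.0, 80.0])       # final demand of the consuming region
e = np.array([2.0, 0.5, 1.2])           # material extraction per unit of sector output

L = np.linalg.inv(np.eye(3) - A)        # Leontief inverse (I - A)^-1
x = L @ y                               # total output required to serve final demand
material_footprint = e @ x              # consumption-based material footprint

print(f"Total output by sector: {x}")
print(f"Material footprint: {material_footprint:.1f}")
```

Deviations between databases enter through the technology matrix A, the final demand vector y and the extensions e, which is what the paper's decomposition methods disentangle.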
APA, Harvard, Vancouver, ISO, and other styles
34

Janse, van Rensburg HP. "Development of a digitising workstation for the electronics laboratory utilising the personal computer." Thesis, Cape Technikon, 1994. http://hdl.handle.net/20.500.11838/1081.

Full text
Abstract:
Thesis (Masters Diploma (Electrical Engineering))--Cape Technikon, Cape Town, 1994.
This thesis describes the design, development and implementation of a digitising workstation for the electronics laboratory that utilises the personal computer.
APA, Harvard, Vancouver, ISO, and other styles
35

Papageorgiou, Asterios. "A physical accounting model for monitoring material flows in urban areas with application to the Stockholm Royal Seaport district." Thesis, KTH, Hållbar utveckling, miljövetenskap och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231160.

Full text
Abstract:
There is a plethora of methods and tools that can be used for the assessment of Urban Metabolism. Nevertheless, there is no standardized method for accounting of material flows within and across the boundaries of urban systems. This thesis aims to provide a physical accounting model for monitoring material flows in urban areas that could become the basis for the development of a standardized accounting method in the long term. The model is based on a Physical Input-Output Table framework; it builds upon the strengths of existing accounting methods while introducing new features that address their limitations. The functions of the model were explored and evaluated through its application to an urban neighbourhood in the Stockholm Royal Seaport, using bottom-up data. The application of the model provided a preliminary description of the material flows in the neighbourhood and, most importantly, information that underpinned the assessment of the strengths and limitations of the model. On the one hand, the model successfully describes the physical interactions between the urban socioeconomic system and the environment or other socioeconomic systems, and it has the potential to illustrate the intersectoral flows within the boundaries of the system. In addition, it can be used to structure available data on material flows and to promote the study of an urban system from a life-cycle perspective. On the other hand, compiling the tables of the model is complex and the data requirements are significant. In particular, compiling the tables with bottom-up data may require a laborious data collection and analysis process that may still not close all data gaps. The combination of bottom-up data with top-down data is therefore recommended, as is the development of integrated databases for data collection and management at the municipal level and closer collaboration between stakeholders within the municipalities to facilitate the dissemination of data and information.
More than half of the global population now lives in urban areas, and this share is expected to grow over the coming decades. Urban systems consume physical resources and generate large amounts of residues, which puts pressure on the environment and hinders sustainable development. Understanding Urban Metabolism (UM) can therefore support efforts to make resource consumption and waste management more efficient. In this context, a large number of methods and tools have been developed and applied in UM studies, such as Material Flow Analysis (MFA) and Input-Output Analysis (IOA) based on Physical Input-Output Tables (PIOTs). Nevertheless, a standardized method for accounting of material flows within and across the boundaries of urban systems is still lacking. In this thesis, a physical accounting model for monitoring material flows in urban areas was developed; this model can potentially become the basis for a unified method for accounting material flows in urban systems. The model was developed in a stepwise process based on a literature review. Its core is a comprehensive PIOT framework that can be used to record material flows in urban systems. The framework differs from typical PIOTs: it delineates the system boundaries more clearly, it explicitly shows the origin and destination of material flows, and it can offer a life-cycle perspective on those flows. The model consists of a set of identical PIOTs, where each sub-table contains the material flows belonging to a specific class and the main table aggregates the flows of all materials from the sub-tables. The model can thus depict material flows at an aggregate level while providing physical accounts for specific material types. The model was applied to a newly built district in Norra Djurgårdsstaden (the Stockholm Royal Seaport) in order to explore and assess its functions. To map and quantify the flows in the district, an MFA based on bottom-up data was carried out. The collection and analysis of data proved to be a laborious process, and several material flows could not be quantified due to data gaps. The tables of the model could therefore not be filled completely, and a flow diagram was created with both quantitative and qualitative flows. Despite the data gaps, the application of the model depicted the UM of the delimited urban system adequately. It clearly showed that almost 96% of the material inputs are accumulated in stocks. The model also established, qualitatively, the physical interactions between the urban system and the natural environment, the national socioeconomic system and the global socioeconomic system. However, it was not possible to assess the model's full potential, because intersectoral links could not be established. In addition, the indirect flows of several imported materials were calculated using material-intensity coefficients. This approach can offer insight into the upstream pressures caused by material production, but coefficients exist only for specific materials and therefore cannot be used to estimate the indirect flows of every material input. Their partial application nevertheless showed that the indirect flows were 38% higher than the direct flows, indicating that the environmental pressures caused by the production of imported materials are substantial.
The application of the model made it possible to assess both its strengths and its weaknesses. On the one hand, the model can establish the physical interactions between the urban socioeconomic system and the natural environment, the national socioeconomic system and the global socioeconomic system. It also has the potential to describe intersectoral flows within the boundaries of the urban system and can offer insight into the origin of material inflows and the destination of material outflows. Another strength is that it offers a life-cycle perspective by taking into account the indirect flows of imported materials. On the other hand, compiling the tables of the model was shown to require a large amount of data, especially when the data are obtained with a bottom-up approach, and bottom-up data are not always available for urban areas. A further weakness is that compiling the tables with bottom-up data requires a laborious data collection and analysis process, and the analysis relies on many assumptions that increase the uncertainty of the results. These weaknesses may hinder the use of the model for accounting material flows in urban areas. The combination of bottom-up and top-down data is therefore recommended when applying the model, together with the development of integrated databases for collecting data on material flows in urban areas.
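As an illustration of the kind of bookkeeping a physical input-output table implies, the sketch below builds a toy two-row table and checks the mass balance (inputs equal exports plus waste plus net additions to stock). The sector names and tonnages are invented and are not taken from the Royal Seaport case study.

```python
# Illustrative sketch only: a toy physical input-output table for an urban district
# and the mass-balance check it implies (inputs = outputs + net additions to stock).
import pandas as pd

piot = pd.DataFrame(
    {"imports_t": [1200.0, 300.0], "local_extraction_t": [0.0, 50.0],
     "exports_t": [100.0, 40.0], "waste_t": [80.0, 250.0],
     "net_stock_addition_t": [1020.0, 60.0]},
    index=["construction", "households"])

inputs = piot["imports_t"] + piot["local_extraction_t"]
outputs = piot["exports_t"] + piot["waste_t"] + piot["net_stock_addition_t"]
balance = inputs - outputs   # should be ~0 for a consistent table

print(piot.assign(balance_t=balance))
```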
APA, Harvard, Vancouver, ISO, and other styles
36

Ryd, Jonatan, and Jeffrey Persson. "Development of a pipeline to allow continuous development of software onto hardware : Implementation on a Raspberry Pi to simulate a physical pedal using the Hardware In the Loop method." Thesis, KTH, Hälsoinformatik och logistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296952.

Full text
Abstract:
Saab wants to examine the Hardware In the Loop method as a concept and what an infrastructure for Hardware In the Loop would look like. Hardware In the Loop is based upon continuously testing hardware, which is simulated. The software Saab wants to use for the Hardware In the Loop method is Jenkins, which is a Continuous Integration and Continuous Delivery tool. To simulate the hardware, they want to examine the use of an Application Programming Interface between a Raspberry Pi and the test automation framework Robot Framework. The reason Saab wants this examined is that they believe this method can improve the rate of testing, the quality of the tests, and thereby the quality of their products. The theory behind Hardware In the Loop, Continuous Integration, and Continuous Delivery will be explained in this thesis. The Hardware In the Loop method was implemented on top of the Continuous Integration and Continuous Delivery tool Jenkins. An Application Programming Interface between the General Purpose Input/Output pins on a Raspberry Pi and Robot Framework was developed. With these implementations in place, the Hardware In the Loop method was successfully integrated, with a Raspberry Pi used to simulate the hardware.
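As a rough sketch of how such an Application Programming Interface could look, the snippet below defines a small Python keyword library that Robot Framework could load to drive the Raspberry Pi's GPIO pins; the pin numbers, keyword names and the PedalSimulator class are illustrative assumptions, not the thesis's actual implementation. A Jenkins job would then check out the repository and run the Robot Framework suite that uses these keywords on every commit.

```python
# Sketch of a Robot Framework keyword library driving Raspberry Pi GPIO pins.
# Pin numbers and keyword names are illustrative, not the thesis's actual API.
import RPi.GPIO as GPIO


class PedalSimulator:
    """Keywords that let Robot Framework test cases simulate a physical pedal."""

    ROBOT_LIBRARY_SCOPE = "SUITE"

    def __init__(self, output_pin=17, feedback_pin=27):
        self.output_pin = output_pin
        self.feedback_pin = feedback_pin
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.output_pin, GPIO.OUT)
        GPIO.setup(self.feedback_pin, GPIO.IN)

    def press_pedal(self):
        GPIO.output(self.output_pin, GPIO.HIGH)

    def release_pedal(self):
        GPIO.output(self.output_pin, GPIO.LOW)

    def pedal_feedback_should_be(self, expected):
        actual = GPIO.input(self.feedback_pin)
        if actual != int(expected):
            raise AssertionError(f"Expected feedback {expected}, got {actual}")

    def cleanup(self):
        GPIO.cleanup()
```

In a Robot Framework test case, these methods appear as the keywords "Press Pedal", "Release Pedal" and "Pedal Feedback Should Be".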
APA, Harvard, Vancouver, ISO, and other styles
37

Moon, Min-Yeong. "Confidence-based model validation for reliability assessment and its integration with reliability-based design optimization." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5816.

Full text
Abstract:
Conventional reliability analysis methods assume that a simulation model is able to represent the real physics accurately. However, this assumption may not always hold, as the simulation model could be biased due to simplifications and idealizations. Simulation models are approximate mathematical representations of real-world systems and thus cannot exactly imitate those systems. The accuracy of a simulation model is especially critical when it is used for the reliability calculation. Therefore, a simulation model should be validated using prototype testing results for reliability analysis. However, in practical engineering situations, experimental output data for the purpose of model validation are limited due to the significant cost of a large number of physical tests. Thus, the model validation needs to account for the uncertainty induced by insufficient experimental output data as well as the inherent variability existing in the physical system and hence in the experimental test results. Therefore, in this study, a confidence-based model validation method that captures the variability and the uncertainty, and that corrects model bias at a user-specified target confidence level, has been developed. Reliability assessment using confidence-based model validation can provide a conservative estimate of the reliability of a system, with confidence, when only insufficient experimental output data are available. Without confidence-based model validation, a design obtained from the conventional reliability-based design optimization (RBDO) process could either fail to satisfy the target reliability or be overly conservative. Therefore, simulation model validation is necessary to obtain a reliable optimum product using the RBDO process. In this study, the developed confidence-based model validation is integrated into the RBDO process to provide a truly confident RBDO optimum design. The developed confidence-based model validation provides a conservative RBDO optimum design at the target confidence level. However, it is challenging to obtain steady convergence in the RBDO process with confidence-based model validation because the feasible domain changes as the design moves (i.e., a moving-target problem). To resolve this issue, a practical optimization procedure, which terminates the RBDO process once the target reliability is satisfied, is proposed. In addition, efficiency is achieved by carrying out deterministic design optimization (DDO) and RBDO without model validation, followed by RBDO with the confidence-based model validation. Numerical examples demonstrate that the proposed RBDO approach obtains a conservative and practical optimum design that satisfies the target reliability of the designed product given a limited number of experimental output data. Thus far, while the simulation model might be biased, it has been assumed that we have correct distribution models for input variables and parameters. However, in real applications, only limited test data are available (parameter uncertainty) for modeling the input distributions of material properties, manufacturing tolerances, operational loads, etc. Also, as before, only a limited number of output test data is used. Therefore, the reliability needs to be estimated by considering parameter uncertainty as well as the biased simulation model. Computational methods and a process are developed to obtain a confidence-based reliability assessment.
The insufficient input and output test data induce uncertainties in the input distribution models and the output distributions, respectively. These uncertainties, which arise from lack of knowledge (the insufficient test data), are different from the inherent input distributions and corresponding output variabilities, which reflect the natural randomness of the physical system.
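The dissertation's method is considerably more elaborate, but the basic idea of stating reliability conservatively at a target confidence level when only a few test results exist can be illustrated with a one-sided Clopper-Pearson lower bound on a pass/fail probability; the function below is such a sketch and is not the author's procedure.

```python
# Not the dissertation's method: a minimal illustration of a conservative
# reliability statement at a target confidence level from limited pass/fail data,
# using a one-sided Clopper-Pearson lower bound.
from scipy.stats import beta


def reliability_lower_bound(n_tests, n_failures, confidence=0.90):
    """Lower confidence bound on reliability from n_tests with n_failures."""
    successes = n_tests - n_failures
    if successes == 0:
        return 0.0
    # One-sided Clopper-Pearson bound: Beta(1 - confidence; successes, failures + 1)
    return beta.ppf(1.0 - confidence, successes, n_failures + 1)


# 19 passes out of 20 tests: the point estimate is 0.95, but with 90% confidence
# we can only claim a lower (more conservative) reliability.
print(reliability_lower_bound(n_tests=20, n_failures=1, confidence=0.90))
```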
APA, Harvard, Vancouver, ISO, and other styles
38

Jensen, Deron Eugene. "System-wide Performance Analysis for Virtualization." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1789.

Full text
Abstract:
With the current trend in cloud computing and virtualization, more organizations are moving their systems from a physical host to a virtual server. Although this can significantly reduce hardware, power, and administration costs, it can increase the cost of analyzing performance problems. With virtualization, there is an initial performance overhead, and as more virtual machines are added to a physical host the interference increases between various guest machines. When this interference occurs, a virtualized guest application may not perform as expected. There is little or no information to the virtual OS about the interference, and the current performance tools in the guest are unable to show this interference. We examine the interference that has been shown in previous research, and relate that to existing tools and research in root cause analysis. We show that in virtualization there are additional layers which need to be analyzed, and design a framework to determine if degradation is occurring from an external virtualization layer. Additionally, we build a virtualization test suite with Xen and PostgreSQL and run multiple tests to create I/O interference. We show that our method can distinguish between a problem caused by interference from external systems and a problem from within the virtual guest.
APA, Harvard, Vancouver, ISO, and other styles
39

Johnson, Andrew. "Methods in productivity and efficiency analysis with applications to warehousing." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/29400.

Full text
Abstract:
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2006.
McGinnis, Leon - Committee Chair, Griffin, Paul - Committee Member, Hackman, Steve - Committee Member, Parsons, Len - Committee Member, Sharp, Gunter - Committee Member. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
40

Nimgaonkar, Satyajeet. "Secure and Energy Efficient Execution Frameworks Using Virtualization and Light-weight Cryptographic Components." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc699986/.

Full text
Abstract:
Security is a primary concern in this era of pervasive computing. Hardware based security mechanisms facilitate the construction of trustworthy secure systems; however, existing hardware security approaches require modifications to the micro-architecture of the processor and such changes are extremely time consuming and expensive to test and implement. Additionally, they incorporate cryptographic security mechanisms that are computationally intensive and account for excessive energy consumption, which significantly degrades the performance of the system. In this dissertation, I explore the domain of hardware based security approaches with an objective to overcome the issues that impede their usability. I have proposed viable solutions to successfully test and implement hardware security mechanisms in real world computing systems. Moreover, with an emphasis on cryptographic memory integrity verification technique and embedded systems as the target application, I have presented energy efficient architectures that considerably reduce the energy consumption of the security mechanisms, thereby improving the performance of the system. The detailed simulation results show that the average energy savings are in the range of 36% to 99% during the memory integrity verification phase, whereas the total power savings of the entire embedded processor are approximately 57%.
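As a toy illustration of what cryptographic memory integrity verification does (and emphatically not the dissertation's architecture), the sketch below keeps a keyed digest per memory block and rejects reads whose contents no longer match; a real design would hold the reference digests, or a hash-tree root, in tamper-resistant on-chip storage.

```python
# Toy sketch of per-block keyed hashes for memory integrity verification.
import hmac, hashlib

KEY = b"on-chip-secret"          # illustrative; normally never leaves the processor
BLOCK = 64                       # bytes per protected memory block


def digest(block_index, data):
    msg = block_index.to_bytes(8, "little") + data
    return hmac.new(KEY, msg, hashlib.sha256).digest()


memory = bytearray(4 * BLOCK)    # simulated off-chip memory
reference = {i: digest(i, bytes(memory[i * BLOCK:(i + 1) * BLOCK])) for i in range(4)}


def verified_read(i):
    """Return block i only if it still matches its stored digest."""
    data = bytes(memory[i * BLOCK:(i + 1) * BLOCK])
    if not hmac.compare_digest(reference[i], digest(i, data)):
        raise RuntimeError(f"Integrity violation in block {i}")
    return data


memory[70] ^= 0xFF               # simulate tampering with block 1
print(len(verified_read(0)), "bytes read from block 0")   # passes
try:
    verified_read(1)             # fails: block 1 was modified
except RuntimeError as err:
    print(err)
```

The energy cost of exactly this kind of per-access hashing is what the dissertation's architectures aim to reduce.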
APA, Harvard, Vancouver, ISO, and other styles
41

Mota, Susana de Jesus. "Channel modelling for MIMO systems." Doctoral thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/14961.

Full text
Abstract:
Doctorate in Electrical Engineering
Systems equipped with multiple antennas at the transmitter and at the receiver, known as MIMO (Multiple Input Multiple Output) systems, offer higher capacities, allowing an efficient exploitation of the available spectrum and/or the employment of more demanding applications. It is well known that the radio channel is characterized by multipath propagation, a phenomenon deemed problematic and whose mitigation has been achieved through techniques such as diversity, beamforming or adaptive antennas. By conveniently exploiting the spatial domain, MIMO systems turn the characteristics of the multipath channel into an advantage and allow the creation of multiple parallel and independent virtual channels. However, the achievable benefits are constrained by the propagation channel's characteristics, which may not always be ideal. This work focuses on the characterization of the MIMO radio channel. It begins with the presentation of the fundamental results from information theory that triggered the interest in these systems, including a discussion of some of their potential benefits and a review of the existing channel models for MIMO systems. The characterization of the MIMO channel developed in this work is based on experimental measurements of the double-directional channel. The measurement system is based on a vector network analyzer and a two-dimensional positioning platform, both controlled by a computer, allowing the measurement of the channel's frequency response at the locations of a synthetic array. Data are then processed using the SAGE (Space-Alternating Expectation-Maximization) algorithm to obtain the parameters (delay, direction of arrival and complex amplitude) of the channel's most relevant multipath components. Afterwards, these data are grouped into clusters using a clustering algorithm. Finally, statistical information is extracted, allowing the characterization of the channel's multipath components. The information about the multipath characteristics of the channel, induced by the scatterers present in the propagation scenario, enables the characterization of the MIMO channel and thus the evaluation of its performance. The method was finally validated using MIMO measurements.
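The information-theoretic result the abstract alludes to can be illustrated with a short simulation of MIMO channel capacity under equal power allocation, C = log2 det(I + (rho/nt) H H^H); the sketch below uses randomly generated i.i.d. Rayleigh channels rather than the thesis's measured ones.

```python
# Illustration of the information-theoretic motivation for MIMO (not the thesis's
# measured channels): ergodic capacity of an i.i.d. Rayleigh channel with equal
# power allocation, C = log2 det(I + (rho/nt) * H * H^H).
import numpy as np

rng = np.random.default_rng(0)


def mimo_capacity(H, snr_linear):
    nr, nt = H.shape
    gram = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
    _, logdet = np.linalg.slogdet(gram)
    return logdet / np.log(2)            # bits/s/Hz


def ergodic_capacity(nt, nr, snr_db, trials=2000):
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        caps.append(mimo_capacity(H, snr))
    return np.mean(caps)


print(ergodic_capacity(1, 1, 10))   # SISO baseline at 10 dB, about 2.9 bits/s/Hz
print(ergodic_capacity(4, 4, 10))   # 4x4 MIMO, roughly four times higher
```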
APA, Harvard, Vancouver, ISO, and other styles
42

Verlaine, Lionel. "Optimisation des requêtes dans une machine bases de données." Paris 6, 1986. http://www.theses.fr/1986PA066532.

Full text
Abstract:
This thesis proposes solutions for optimizing query evaluation and the join operation. These proposals are studied and implemented on the Sabrina DBMS, which originated in the SABRE project, running on Carrousel hardware at SAGEM. Query evaluation makes it possible to optimize the logical level of request processing; the most relevant decomposition is established using simple heuristics. The proposed join algorithm uses mechanisms that minimize both the number of disk input/output operations and the number of comparisons, and its execution time is proportional to the number of tuples. Join ordering is handled by an original multi-relation join algorithm and an associated scheduling method that allows a high degree of parallelism.
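The thesis's join algorithm itself is not reproduced here, but the property it highlights, an execution time proportional to the number of tuples, is shared by a plain hash join, sketched below for illustration (one pass to build a hash table, one pass to probe it).

```python
# Not the algorithm of the thesis: a plain hash join, shown because it runs in
# time proportional to the number of tuples (one build pass, one probe pass).
def hash_join(left, right, left_key, right_key):
    index = {}
    for row in left:                         # build phase: O(|left|)
        index.setdefault(row[left_key], []).append(row)
    result = []
    for row in right:                        # probe phase: O(|right|)
        for match in index.get(row[right_key], []):
            result.append({**match, **row})
    return result


employees = [{"dept_id": 1, "name": "Ana"}, {"dept_id": 2, "name": "Bruno"}]
departments = [{"dept_id": 1, "dept": "R&D"}, {"dept_id": 2, "dept": "Sales"}]
print(hash_join(employees, departments, "dept_id", "dept_id"))
```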
APA, Harvard, Vancouver, ISO, and other styles
43

Amri, Mohamed. "Etude et realisation d'un reseau local a insertion de registre : traitement des interblocages et determinisme." Clermont-Ferrand 2, 1987. http://www.theses.fr/1987CLF21062.

Full text
Abstract:
The study has two main objectives: to define a network that guarantees deterministic access to the medium and deterministic establishment of a data link between two stations, and to present heterogeneous stations with a single interface for accessing the network. By associating a single physical link with each communication, the work consisted in defining a distributed algorithm that operates on its own waiting list, allows a communication to be established within a bounded time, and makes it possible to evaluate a finite upper bound on the link establishment delay. The algorithm has the following characteristics: fully distributed control (communications are established according to the age of the requests), fairness in establishing data links, and absence of deadlock.
APA, Harvard, Vancouver, ISO, and other styles
44

Harrison, William. "Malleability, obliviousness and aspects for broadcast service attachment." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4138/.

Full text
Abstract:
An important characteristic of Service-Oriented Architectures is that clients do not depend on the service implementation's internal assignment of methods to objects. It is perhaps the most important technical characteristic that differentiates them from more common object-oriented solutions. This characteristic makes clients and services malleable, allowing them to be rearranged at run-time as circumstances change. That improvement in malleability is impaired by requiring clients to direct service requests to particular services. Ideally, the clients are totally oblivious to the service structure, as they are to aspect structure in aspect-oriented software. Removing knowledge of a method implementation's location, whether in object or service, requires re-defining the boundary line between programming language and middleware, making clearer specification of dependence on protocols, and bringing the transaction-like concept of failure scopes into language semantics as well. This paper explores consequences and advantages of a transition from object-request brokering to service-request brokering, including the potential to improve our ability to write more parallel software.
APA, Harvard, Vancouver, ISO, and other styles
45

Palix, Nicolas, Julia L. Lawall, Gaël Thomas, and Gilles Muller. "How Often do Experts Make Mistakes?" Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4132/.

Full text
Abstract:
Large open-source software projects involve developers with a wide variety of backgrounds and expertise. Such software projects furthermore include many internal APIs that developers must understand and use properly. According to the intended purpose of these APIs, they are more or less frequently used, and used by developers with more or less expertise. In this paper, we study the impact of usage patterns and developer expertise on the rate of defects occurring in the use of internal APIs. For this preliminary study, we focus on memory management APIs in the Linux kernel, as the use of these has been shown to be highly error prone in previous work. We study defect rates and developer expertise, to consider e.g., whether widely used APIs are more defect prone because they are used by less experienced developers, or whether defects in widely used APIs are more likely to be fixed.
APA, Harvard, Vancouver, ISO, and other styles
46

CARVALHO, Gustavo Henrique Porto de. "NAT2TEST: generating test cases from natural language requirements based on CSP." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/17929.

Full text
Abstract:
High trustworthiness levels are usually required when developing critical systems, and model-based testing (MBT) techniques play an important role in generating test cases from specification models. For critical systems, these models are usually created using formal or semi-formal notations. Moreover, it is also desirable to clearly and formally state the conditions necessary to guarantee that an implementation is correct with respect to its specification by means of a conformance relation, which can be used to prove that the test generation strategy is sound. Despite the benefits of MBT, those who are not familiar with the models' syntax and semantics may be reluctant to adopt these formalisms. Furthermore, most of these models are not available at the very beginning of the project, when usually only natural-language requirements are available, and so the use of MBT is postponed. Here, we propose an MBT strategy for generating test cases from controlled natural-language (CNL) requirements: NAT2TEST, which spares the user from having to know the syntax and semantics of the underlying notations and allows early use of MBT via natural-language processing techniques; the formal and semi-formal models internally used by our strategy are automatically generated from the natural-language requirements. Our approach is tailored to data-flow reactive systems: a class of embedded systems whose inputs and outputs are always available as signals. These systems can also have time-based behaviour, which may be discrete or continuous. The NAT2TEST strategy comprises a number of phases. Initially, the requirements are syntactically analysed according to a CNL we propose for describing data-flow reactive systems. Then, the requirements' informal semantics are characterised based on case grammar theory. Afterwards, we derive a formal representation of the requirements considering a model of data-flow reactive systems we defined. Finally, this formal model is translated into communicating sequential processes (CSP) to provide means for generating test cases. We prove that our test generation strategy is sound with respect to our timed input-output conformance relation based on CSP: csptio. Besides CSP, we explore the generation of other target notations (SCR and IMR) from which we can generate test cases using commercial tools (T-VEC and RT-Tester, respectively). The whole process is fully automated by the NAT2TEST tool. Our strategy was evaluated on examples from the literature and from the aerospace (Embraer) and automotive (Mercedes) industries. We analysed performance and the ability to detect defects generated via mutation. In general, our strategy outperformed the considered baseline, random testing. We also compared our strategy with relevant commercial tools.
APA, Harvard, Vancouver, ISO, and other styles
47

Fan, Yang, Hidehiko Masuhara, Tomoyuki Aotani, Flemming Nielson, and Hanne Riis Nielson. "AspectKE*: Security aspects with program analysis for distributed systems." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4136/.

Full text
Abstract:
Enforcing security policies on distributed systems is difficult, particularly when a system contains untrusted components. We designed AspectKE*, a distributed AOP language based on a tuple space, to tackle this issue. In AspectKE*, aspects can enforce access control policies that depend on the future behavior of running processes. One of the key language features is a set of predicates and functions that extract the results of static program analysis, which are useful for defining security aspects that need to know about the future behavior of a program. AspectKE* also provides a novel variable binding mechanism for pointcuts, so that pointcuts can uniformly specify join points based on both static and dynamic information about the program. Our implementation strategy performs the fundamental static analysis at load time, so as to keep runtime overheads minimal. We implemented a compiler for AspectKE* and demonstrate the usefulness of the language through a security aspect for a distributed chat system.
APA, Harvard, Vancouver, ISO, and other styles
48

Shahin, Kamrul. "Modèle graphique probabiliste appliqué au diagnostic de l'état de santé des systèmes, au pronostic et à l'estimation de la durée de vie résiduelle." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0129.

Full text
Abstract:
This thesis contributes to prognostics and health management for assessing the health condition of complex systems. In the context of operational management and the operational safety of systems, we investigate how Dynamic Probabilistic Graphical Models (DPGM) can be used to diagnose the current health state of a system, to prognosticate its future health state and the evolution of its degradation, and to estimate its remaining useful life based on its operating conditions. The degradation of a system is generally unknown and can only be observed by shutting the system down, which is difficult or even impossible during operation. Nevertheless, a set of observable quantities on a system or component can characterise the level of degradation and help to estimate the remaining useful life of components and systems. DPGMs provide an approach suitable for modelling the evolution of the health state of systems and components. The aim of this thesis is to transpose and build on the experience of previous work in a prognostic context, on the basis of a more efficient DPGM that takes into account the available knowledge about the system. We extend the classical HMM family of models to the IOHMM to allow the temporal propagation of uncertainty needed to address prognostic problems; this research includes the extension of the learning and inference algorithms, and variants of the HMM are proposed to incorporate the operating environment into the prognosis. The thesis aims to address the following scientific challenges: representing the health state of a system, whatever its complexity, with a stochastic model and learning the model parameters from the measurements available on the system; establishing a diagnosis of the health state of the system and a prognosis of its evolution by integrating several operational conditions; and estimating the remaining useful life of components and of structured systems (series, parallel) from their components. This is a major challenge because the prognosis of the degradation of system components makes it possible to define control or maintenance strategies in relation to the residual life of the system. This allows the probability of a shutdown due to a malfunction to be reduced, either by adjusting the degradation rate to fit a preventive maintenance plan or by proactively planning maintenance interventions.
APA, Harvard, Vancouver, ISO, and other styles
49

Hunter, Brandon. "Channel Probing for an Indoor Wireless Communications Channel." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/64.

Full text
Abstract:
The statistics of the amplitude, time and angle of arrival of multipaths in an indoor environment are all necessary components of multipath models used to simulate the performance of spatial diversity in receive antenna configurations. The model presented by Saleh and Valenzuela, later extended by Spencer et al., includes all three of these parameters for a 7 GHz channel. A system was built to measure these multipath parameters at 2.4 GHz for multiple locations in an indoor environment. Another system was built to measure the angle of transmission for a 6 GHz channel. The addition of this parameter allows spatial diversity at the transmitter, along with the receiver, to be simulated. The process of going from raw measurement data to discrete arrivals and then to clustered arrivals is analyzed. Many possible errors associated with discrete-arrival processing are discussed along with possible solutions. Four clustering methods are compared and their relative strengths and weaknesses are pointed out. The effects that errors in the clustering process have on parameter estimation and model performance are also simulated.
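For context, the Saleh-Valenzuela model referenced above describes multipath arrivals as Poisson cluster and ray processes with double-exponential power decay; the sketch below draws one such channel realization (omitting the angular dimension), with rate and decay constants chosen for illustration rather than taken from this thesis's measurements.

```python
# Sketch of drawing multipath arrivals from the Saleh-Valenzuela model:
# Poisson cluster/ray arrival times with double-exponential power decay.
# Parameter values are illustrative, not estimates from this thesis.
import numpy as np

rng = np.random.default_rng(1)


def saleh_valenzuela(n_clusters=5, Lambda=0.05, lam=0.5, Gamma=30.0, gamma=10.0,
                     rays_per_cluster=20):
    """Return (arrival_time_ns, path_power) pairs for one channel realization."""
    arrivals = []
    T = 0.0
    for _ in range(n_clusters):
        T += rng.exponential(1.0 / Lambda)            # cluster arrival (rate Lambda per ns)
        tau = 0.0
        for _ in range(rays_per_cluster):
            mean_power = np.exp(-T / Gamma) * np.exp(-tau / gamma)  # double-exponential decay
            power = mean_power * rng.exponential(1.0)  # Rayleigh amplitude -> exponential power
            arrivals.append((T + tau, power))
            tau += rng.exponential(1.0 / lam)          # ray arrival within the cluster (rate lam)
    return sorted(arrivals)


for t, p in saleh_valenzuela()[:5]:
    print(f"{t:8.2f} ns   relative power {p:.3e}")
```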
APA, Harvard, Vancouver, ISO, and other styles
50

Scarlato, Michele. "Sicurezza di rete, analisi del traffico e monitoraggio." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3223/.

Full text
Abstract:
The work is divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, which software is used to carry them out, and how to protect against them (using the devices generically known as firewalls). The second macro-area analyses an intrusion from the outside against sensitive servers on a LAN. This analysis is carried out on the files captured by two network interfaces configured in promiscuous mode on a probe placed in the LAN; two interfaces are needed in order to connect to two LAN segments with different subnet masks. The attack is analysed with various software tools, which effectively defines the third part of the work: the captured files are examined first with tools for full-content data, such as Wireshark, then with tools for session data, processed with Argus, and finally with tools for statistical data, processed with Ntop. The penultimate chapter, before the conclusions, covers the installation of Nagios and its configuration for monitoring, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.
APA, Harvard, Vancouver, ISO, and other styles
