Dissertations / Theses on the topic 'Multiple robustness'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 25 dissertations / theses for your research on the topic 'Multiple robustness.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Miller, Charles W. "Familywise Robustness Criteria Revisited for Newer Multiple Testing Procedures." Diss., Temple University Libraries, 2009. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/40501.

Full text
Abstract:
Statistics
Ph.D.
As the availability of large datasets becomes more prevalent, so does the need to discover significant findings among a large collection of hypotheses. Multiple testing procedures (MTPs) are used to control the familywise error rate (FWER), the probability of committing at least one Type I error when testing multiple hypotheses. When controlling the FWER, the power of an MTP to detect significant differences decreases as the number of hypotheses increases. Ideally, the same false null hypotheses would be discovered regardless of the family of hypotheses chosen to be tested. Holland and Cheung (2002) developed measures called familywise robustness criteria (FWR) to study the effect of family size on the acceptance and rejection of a hypothesis. Their analysis focused on procedures that controlled the FWER and the false discovery rate (FDR). Newer MTPs have since been developed which control the generalized FWER (gFWER(k) or k-FWER) and the false discovery proportion (FDP), or tail probabilities for the proportion of false positives (TPPFP). This dissertation reviews these newer procedures and then discusses the effect of family size using the FWRs of Holland and Cheung. In the case where the test statistics are independent and the null hypotheses are all true, the Type R enlargement familywise robustness measure can be expressed as a ratio of the expected number of Type I errors. In simulations where positive dependence among the test statistics was introduced, the expected number of Type I errors and the Type R enlargement FWR increased with higher levels of correlation for step-up procedures, but not for step-down or single-step procedures.
Temple University--Theses
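The familywise error rate described in the abstract above is a standard notion, and a minimal Monte Carlo sketch (hypothetical Python, not code from the dissertation; independent true nulls with Uniform(0, 1) p-values, Bonferroni as an example single-step MTP) shows why control matters as the family grows:

```python
import random

def estimate_fwer(m, alpha=0.05, correction=None, reps=20000, seed=42):
    """Monte Carlo estimate of the familywise error rate (FWER): the
    probability of at least one Type I error among m independent tests.
    Under a true null hypothesis, each p-value is Uniform(0, 1)."""
    rng = random.Random(seed)
    threshold = alpha / m if correction == "bonferroni" else alpha
    hits = 0
    for _ in range(reps):
        if any(rng.random() < threshold for _ in range(m)):
            hits += 1
    return hits / reps

# Without correction the FWER inflates toward 1 - (1 - alpha)^m as the
# family grows; Bonferroni holds it near alpha at the cost of power.
raw = estimate_fwer(20)                            # close to 1 - 0.95**20, about 0.64
bonf = estimate_fwer(20, correction="bonferroni")  # close to alpha = 0.05
```

The simulation also makes the family-size effect concrete: rerunning with larger m drives the uncorrected rate toward 1 while the Bonferroni rate stays near alpha.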
APA, Harvard, Vancouver, ISO, and other styles
2

Gu, Wei. "Robustness against interference in Internet of Things." Thesis, Lille 1, 2012. http://www.theses.fr/2012LIL10195/document.

Full text
Abstract:
The Internet of Things has attracted great interest in recent years for its attractive applications and intelligent structure. However, the implementation of sensor networks still presents important challenges, such as the generation of Multiple-Access Interference (MAI) with an impulsive nature and relatively high energy consumption. Both the MAI and the thermal noise must be considered because of the strong impairment each may cause to communication quality. We employ the stable and Gaussian distributions to model the MAI and the thermal noise, respectively. First we study the performance of turbo codes in the direct link and propose the p-norm as a decoding metric. This metric allows a considerable error-correction performance improvement, close to that of the optimal decoder. Then we investigate cooperative communications. The probability densities in the decision statistic of the optimal receiver are estimated using an importance sampling approach when both stable and Gaussian noises are present. Such a method is computationally expensive. Hence we develop an approximation based on the Normal Inverse Gaussian (NIG) distribution. This solution is computationally efficient and close to the optimal receiver. In addition, we show that the p-norm receiver has robust performance no matter what kind of noise is dominant. Finally, we combine the channel coding and cooperative communication work to establish a distributed channel coding strategy. Through simulation assessments, the energy-saving strategy can be realized by choosing an appropriate distributed channel coding scheme based on the direct link quality and the target bit error rate.
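The p-norm decoding metric mentioned in the abstract can be sketched as follows (an illustrative toy, not the thesis implementation: hypothetical BPSK codewords and one hand-picked impulsive noise sample):

```python
def p_metric(received, codeword, p):
    """Decoding metric sum_i |r_i - c_i|**p. With p = 2 this is the
    Euclidean (Gaussian-optimal) metric; p < 2 down-weights the huge
    deviations produced by impulsive (alpha-stable) interference."""
    return sum(abs(r - c) ** p for r, c in zip(received, codeword))

def decode(received, codebook, p):
    # Pick the index of the codeword minimizing the p-norm metric.
    return min(range(len(codebook)),
               key=lambda i: p_metric(received, codebook[i], p))

# Codeword [1, 1, 1, 1] is sent; mild Gaussian noise plus one impulse (-9.0).
codebook = [[1, 1, 1, 1], [1, 1, -1, -1]]
received = [0.9, 1.1, -9.0, 0.8]

choice_p2 = decode(received, codebook, p=2)     # impulse dominates: wrong codeword
choice_p05 = decode(received, codebook, p=0.5)  # robust metric: correct codeword
```

With p = 2 the single impulse contributes 100 to the metric of the true codeword and only 64 to the wrong one, flipping the decision; with p = 0.5 its influence is compressed and the correct codeword wins.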
3

Sandsveden, Li. "Evaluation of the Robustness of the Brain Parenchymal Fraction for Brain Atrophy Measurements." Thesis, Linköpings universitet, Medicinsk informatik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-105801.

Full text
Abstract:
In certain diseases, like Multiple Sclerosis and Alzheimer's disease, the progression of the disease can be measured by whole-brain atrophy. A difficulty with this is that people have very different skull sizes, and thus also very different brain sizes. This makes it almost impossible to establish "normal values" for brain size. The spread is very large and the method is not practical to use for individual patients. A method with less spread in healthy persons is to use the Brain Parenchymal Fraction (BPF), the ratio of the brain parenchymal volume (BPV) to the intracranial volume (ICV): BPF = BPV/ICV. The use of the Brain Parenchymal Fraction has increased steadily since it was first introduced in 1999. This study was performed to increase the knowledge of what is normal and to evaluate the robustness of the BPF as a measurement of brain atrophy. Among other things, the change in the BPF when calculated from incomplete volumes (parts of the skull missing in the set of MR images) was evaluated. The results show that when parts are missing from the top (superior) of the skull, the resulting BPF is strictly higher than the correct BPF, and when parts are missing from the lower (inferior) part of the skull, the resulting BPF is strictly lower than the correct value. Two different methods were tried to compensate for missing parts. The first method was to find a variable compensation factor whose size depended on how much of the skull was missing. The second method was to interpolate the ICV and BPV curves and, from the new interpolated curves, calculate a new BPF. The method of compensating incomplete volumes using a factor calculated as a function of the intracranial volume of the first/last available slice turned out to be the better one.
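The ratio BPF = BPV/ICV can be made concrete with a small sketch (hypothetical per-slice volumes, chosen so that the slice-wise parenchymal fraction is lowest superiorly and highest inferiorly, matching the direction of bias the abstract reports for incomplete volumes):

```python
def bpf(bpv_slices, icv_slices):
    """Brain Parenchymal Fraction: total brain parenchymal volume (BPV)
    divided by total intracranial volume (ICV), summed over axial slices."""
    return sum(bpv_slices) / sum(icv_slices)

# Hypothetical per-slice volumes (ml), listed superior -> inferior.
icv = [20, 60, 120, 150, 150, 140, 120, 100]
bpv = [12, 40, 92, 118, 118, 112, 102, 90]

full = bpf(bpv, icv)
missing_top = bpf(bpv[2:], icv[2:])        # superior slices absent -> BPF overestimated
missing_bottom = bpf(bpv[:-2], icv[:-2])   # inferior slices absent -> BPF underestimated
```

Dropping slices simply removes terms from both sums, so the sign of the bias is fixed by whether the dropped slices have a lower or higher parenchymal fraction than the whole volume.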
4

Manzano, Castro Marc. "New robustness evaluation mechanism for complex networks." Doctoral thesis, Universitat de Girona, 2014. http://hdl.handle.net/10803/295713.

Full text
Abstract:
Network science has significantly advanced in the last decade, providing insights into the underlying structure and dynamics of complex networks. Critical infrastructures such as telecommunication networks play a pivotal role in ensuring the smooth functioning of modern day living. These networks have to constantly deal with failures of their components. In multiple failure scenarios, where traditional protection and restoration schemes are not suitable because of the quantity of resources that would be required, the concept of robustness is used in order to quantify just how good a network is under such a large-scale failure scenario. The aim of this thesis is to, firstly, investigate the current challenges that might lead to multiple failure scenarios of present day networks and, secondly, to propose novel metrics able to quantify the network robustness.
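As an illustration of the kind of robustness quantification this thesis motivates (a generic sketch under common network-science conventions, not the metrics proposed in the thesis), one can track the fraction of nodes left in the largest connected component after a set of failures:

```python
from collections import deque

def giant_component_fraction(adj, failed=frozenset()):
    """Size of the largest connected component among surviving nodes,
    as a fraction of the original node count. `adj` maps each node to
    an iterable of its neighbours."""
    seen, best = set(failed), 0
    for start in adj:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best / len(adj)

# A 5-node path 0-1-2-3-4: losing the middle node splits the network.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
intact = giant_component_fraction(path)        # 1.0
damaged = giant_component_fraction(path, {2})  # 0.4: two fragments of size 2
```

Sweeping this fraction over increasing failure sets is a common way to compare how gracefully different topologies degrade under large-scale failures.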
5

Zhang, Yao. "Load Frequency Control of Multiple-Area Power Systems." Cleveland State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=csu1250196894.

Full text
6

Joo, Seang-Hwane. "Robustness of the Within- and Between-Series Estimators to Non-Normal Multiple-Baseline Studies: A Monte Carlo Study." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6715.

Full text
Abstract:
In single-case research, the multiple-baseline (MB) design is the most widely used design in practical settings. It provides the opportunity to estimate the treatment effect based not only on within-series comparisons of treatment-phase to baseline-phase observations, but also on time-specific between-series comparisons of observations from participants who have started treatment to those still in the baseline. In MB studies, the average treatment effect and the variation of these effects across multiple participants can be estimated using various statistical modeling methods. Recently, two types of statistical modeling methods were proposed for analyzing MB studies: a) the within-series model and b) the between-series model. The within-series model is a typical two-level multilevel modeling approach analyzing the measurement occasions within a participant, whereas the between-series model is an alternative modeling approach analyzing participants' measurement occasions at certain time points, where some participants are in the baseline phase and others are in the treatment phase. Parameters of both within- and between-series models are generally estimated with restricted maximum likelihood (REML) estimation, and REML was developed under the assumption of normality (Hox et al., 2010; Raudenbush & Bryk, 2002). However, in practical educational and psychological settings, observed data may not be easily assumed to be normal. Therefore, the purpose of this study is to investigate the robustness of analyzing MB studies with the within- and between-series models when level-1 errors are non-normal. A Monte Carlo study was conducted under conditions where level-1 errors were generated from non-normal distributions in which the skewness and kurtosis of the distribution were manipulated. Four statistical approaches were considered for comparison based on theoretical and/or empirical rationales.
The approaches were defined by crossing two analytic decisions: a) whether to use a within- or between-series estimate of the effect, and b) whether to use REML estimation with the Kenward-Roger adjustment for inferences or Bayesian estimation and inference. The accuracy of parameter estimation, statistical power, and Type I error were systematically analyzed. The results of the study showed that the within- and between-series models are robust to non-normality of the level-1 error variance. Both within- and between-series models estimated the treatment effect accurately, and statistical inferences were acceptable. REML and Bayesian estimation also showed similar results in the current study. Applications and implications for applied and methodology researchers are discussed based on the findings of the study.
7

Liang, Xiyin. "Security and robustness of a modified parameter modulation communication scheme." Thesis, Pretoria : [s.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-04072009-204834/.

Full text
8

Xu, Guoqing. "Assessment of risk of disproportionate collapse of steel building structures exposed to multiple hazards." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41079.

Full text
Abstract:
Vulnerability of buildings to disproportionate (or progressive) collapse has become an increasingly important performance issue following the collapses of the Alfred P. Murrah Federal Building in Oklahoma City in 1995 and the World Trade Center in 2001. Although considerable research has been conducted on this topic, numerous research issues remain unresolved. This dissertation is aimed at developing structural models and analysis procedures for robustness assessment of steel building structures typical of construction practices in the United States, and at assessing the performance of these typical structures. Beam-column connections are usually the most vulnerable elements in steel building structures suffering local damage. Models of three typical frame connections for use in robustness assessment have been developed with different techniques, depending on the experimental data available to support such models. A probabilistic model of a pre-Northridge moment-resisting connection was developed through finite element simulations, in which the uncertainties in the initial flaw size, beam yield strength, and fracture toughness of the weld were considered. A macro-model for bolted T-stub connections was developed by considering the behavior of each connection element individually (i.e., T-stub, shear tab, and panel zone) and assembling the elements to form a complete connection model, which was subsequently calibrated to experimental data. For modeling riveted connections in older steel buildings that might be candidates for rehabilitation, a new method was proposed to take advantage of available experimental data from tests of earthquake-resistant connections and to account for the effects of catenary action and of the unequal compressive and tensile stiffnesses of the top and bottom parts of a connection.
These connection models were integrated into nonlinear finite element models of structural systems to allow the effect of catenary and other large-deformation action on the behavior of the frames and their connections following initial local structural damage to be assessed. The performance of pre-Northridge moment-resisting frames was assessed with both mean-centered deterministic and probabilistic assessment procedures; the significance of uncertainties in collapse assessment was examined by comparing the results from both procedures. A deterministic assessment of frames with full and partial-strength bolted T-stub connections was conducted considering three typical beam spans in both directions. The vulnerability of an older steel building with riveted connections was also analyzed deterministically. The contributions from unreinforced masonry infill panels and reinforced concrete slabs on the behavior of the building were investigated. To meet the need for a relatively simple procedure for preliminary vulnerability assessment, an energy-based nonlinear static pushdown analysis procedure was developed. This procedure provides an alternative method of static analysis of disproportionate collapse vulnerability that can be used as an assessment tool for regular building frames subjected to local damage. Through modal analysis, dominant vibration modes of a damaged frame were first identified. The structure was divided into two parts, each of which had different vibration characteristics and was modeled by a single degree-of-freedom (SDOF) system separately. The predictions were found to be sufficiently close to the results of a nonlinear dynamic time history analysis (NTHA) that the method would be useful for collapse-resistant design of buildings with regular steel framing systems.
9

Hajian-Tilaki, Karimollah. "Methodologic contributions to ROC analysis : a study of the robustness of the binormal model for quantitative data and methods for studies involving multiple signals." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=29034.

Full text
Abstract:
The purpose of this dissertation is twofold: (i) to examine the robustness of the binormal model as a "semi-parametric" approach to ROC analysis for quantitative diagnostic tests; (ii) to develop nonparametric methods for ROC analysis of data concerning multiple "signals".
Metz et al. (1990) adapted the binormal model, used previously for rating data only, for ROC analysis of quantitative diagnostic tests. Their investigation of its performance was limited to data generated from the binormal model itself. Part (i) of this thesis describes a broader numerical investigation to assess how it performs in various configurations of non-binormal pairs of distributions, where one or both pair members were mixtures of Gaussian (MG) distributions. We also investigated the effects of sample size and the number of data categories used. Three criteria were used to assess the impact of departures from binormality: bias in estimates of the area under the curve (AUC), bias in estimated true positive fractions (TPFs) at specific false positive fraction (FPF) points, and discrepancies between the estimated and true TPF over the wider portion of the ROC curve. The bias in the estimates of AUC was small for all configurations studied, no matter what amount of discretization and what sample sizes were used. By the other criteria, the binormal model was robust to departures involving {G, MG} pairs in which the MG member was skewed or bimodal. The fits were less appropriate at FPF = 0.05 and 0.10 when both pair members were skewed to the right, but even then the bias in estimates of TPF was less than 0.06. The "semi-parametric" and nonparametric approaches yielded very similar estimates of AUC and of the corresponding sampling variability.
Part (ii) develops nonparametric ROC analysis for the situation when pathology and test interpretation data for each patient are K-dimensional. The approach computes K "pseudo-accuracies" for each patient; from these, K U-statistics are derived. One can form a summary index from these K components, as well as the standard error (SE) of this index based on the observed correlations among the pseudo-accuracies. The applicability of a simplified formula for the SE was assessed. The method was also extended to comparisons of two diagnostic systems. The procedures are illustrated using data sets from two clinical studies. The approach can handle the complex structure of multi-signal ROC data; it takes the various inter-correlations into account, and makes efficient use of the data.
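The nonparametric building block underlying both parts, the area under the empirical ROC curve expressed as a two-sample U-statistic (Mann-Whitney form), can be sketched as follows (illustrative scores, not data from the thesis):

```python
def auc_u_statistic(diseased, healthy):
    """Nonparametric AUC estimate: the proportion of (diseased, healthy)
    score pairs in which the diseased score is higher, with ties counted
    as 1/2. Equals the trapezoidal area under the empirical ROC curve."""
    wins = 0.0
    for x in diseased:
        for y in healthy:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(diseased) * len(healthy))

perfect = auc_u_statistic([5, 6], [1, 2])        # 1.0: complete separation
overlap = auc_u_statistic([2, 3, 4], [1, 2, 3])  # 7/9: partial overlap
```

Because the estimate is an average over pairs, its sampling variability (and the correlations among the K "pseudo-accuracies" in the multi-signal setting) can be handled with standard U-statistic theory.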
10

Nair, Suraj [Verfasser], Alois [Akademischer Betreuer] Knoll, and Dieter [Akademischer Betreuer] Fox. "Visual Tracking of Multiple Humans with Machine Learning based Robustness Enhancement applied to Real-World Robotic Systems / Suraj Nair. Gutachter: Alois Knoll ; Dieter Fox. Betreuer: Alois Knoll." München : Universitätsbibliothek der TU München, 2012. http://d-nb.info/1031076166/34.

Full text
11

Záchej, Samuel. "Monte Carlo simulace elektronového rozptylu v rastrovacím prozařovacím elektronovém mikroskopu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-242036.

Full text
Abstract:
This thesis deals with electron scattering in STEM microscopy on objects with different shapes, such as a cuboid, a sphere, and a hollow capsule. Monte Carlo simulations are used for the description of multiple electron scattering. Besides the theoretical analysis of electron scattering and simulation methods, the thesis contains the design and realization of an algorithm simulating electron scattering in the given objects. In addition, there is a design for robustness evaluation of the simulation, based on a comparison between the results and known signals for a given object. The reliability of the algorithm was verified by experimental measurements of electron scattering on a carbon layer.
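A core step of any such Monte Carlo transport simulation (a generic textbook sketch, not the algorithm of the thesis) is sampling the free flight length between scattering events from an exponential law, s = -lambda * ln(U):

```python
import math
import random

def free_path_length(mean_free_path, rng):
    """Sample the distance an electron travels before its next scattering
    event; path lengths are exponentially distributed with mean equal to
    the elastic mean free path lambda: s = -lambda * ln(U), U ~ Uniform(0, 1)."""
    return -mean_free_path * math.log(1.0 - rng.random())  # 1 - U avoids log(0)

rng = random.Random(7)
samples = [free_path_length(2.0, rng) for _ in range(50000)]
mean_estimate = sum(samples) / len(samples)  # close to the mean free path, 2.0
```

At each such step a full simulation would also sample a scattering angle from the differential cross-section and test whether the new position leaves the object (cuboid, sphere, or capsule).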
12

Seitz, Timothy M. "Modeling and Robust Stability of Advanced, Distributed Control Systems." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1497201155817062.

Full text
13

Coutinho, Flávio Luiz. "Um sistema de rastreamento de olhar tolerante a movimentações da face." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04012012-122544/.

Full text
Abstract:
Recent advances in computing power and the proliferation of computing devices around us have allowed the development of new computer interfaces that can react to the presence and state of their users. Since gaze can transmit a lot of information about the user, gaze trackers, devices that can estimate the direction in which a person is looking, have an important role in the development of such interfaces. Among gaze tracking applications, we have aids for people with limited motor skills, human behavior studies, and the development of interfaces that can take gaze information as an additional communication channel with the user. Many techniques have been developed to reach this goal, but they have some problems that prevent them from being widely used: the need for calibration in each use session and the need to keep the user's head still. In this work we studied some of the existing gaze tracking techniques, from the more traditional ones to more recent ones. One of the most interesting techniques makes use of multiple light sources fixed at the monitor's corners. By analyzing the positions of the corneal reflections and the pupil present in captured images of the eye, it is possible to estimate the gaze point on the monitor screen. Due to its advantages, this technique was chosen for a deeper study and implementation. Many experiments using simulated data were carried out to validate the technique. Using a more accurate model of the eye, an extension of this technique was also developed to increase its precision. Finally, we present our implementation, which allows for large head movement, as well as test results obtained from real users.
14

Issury, Irwin. "Contribution au développement d'une stratégie de diagnostic global en fonction des diagnostiqueurs locaux : Application à une mission spatiale." Phd thesis, Bordeaux 1, 2011. http://tel.archives-ouvertes.fr/tel-00643548.

Full text
Abstract:
The work presented in this dissertation deals with the synthesis of diagnosis algorithms for single and multiple faults. The objective is to propose a diagnosis strategy with minimum analytical redundancy by making the best use of the hardware redundancy information possibly available on the system. The proposed developments follow an approach of cooperation and aggregation of diagnosis methods and the optimal construction of a global diagnosis from the local diagnosers. The work is intended to be generic in the sense that it combines the concepts and tools of two communities: those of the FDI (Fault Detection and Isolation) community and those of the DX (Diagnosis) community, whose methodological foundations come from computer science and artificial intelligence. Thus, the detection problem (as well as the isolation problem, when structural constraints allow it) is solved with the tools of the FDI community, while the isolation problem is solved with the concepts of the DX community, thereby offering an aggregated methodological approach. The methodology proceeds in two main steps. The first consists in building a mutually exclusive signature matrix. Here, the problem of the minimum number of analytical redundancy relations (ARRs) needed to establish an unambiguous diagnosis is addressed. This problem is formalized as a constrained optimization problem, which is efficiently solved with a genetic algorithm. The second step concerns the generation of diagnoses. For an observed situation, identifying the conflicts amounts to determining the ARRs that are not satisfied by the observation. The diagnoses are obtained with an algorithm based on the concept of formulas in MNF (Maximal Normal Form).
The major interest of this approach is its ability to handle the diagnosis of single and multiple faults, as well as the diagnosis of several fault modes (i.e., different types of faults) associated with each component of the monitored system. Moreover, optimality proofs exist both at the local level (robustness/sensitivity proof) and at the global level (proof of minimal diagnoses). The proposed methodology is applied to the Mars Sample Return (MSR) mission. This mission, undertaken jointly by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA), aims to bring Martian samples back to Earth for analysis. The critical phase of this mission is the rendezvous between the sample container and the orbiter. The research addresses the problem of diagnosing sensor faults in the orbiter's measurement chain during the rendezvous phase of the mission. The results, obtained with the high-fidelity simulator of Thales Alenia Space, show the feasibility and effectiveness of the method.
15

Haynes, Michele Ann. "Flexible distributions and statistical models in ranking and selection procedures with applications." Thesis, Queensland University of Technology, 1998.

Find full text
16

Pouilly-Cathelain, Maxime. "Synthèse de correcteurs s’adaptant à des critères multiples de haut niveau par la commande prédictive et les réseaux de neurones." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG019.

Full text
Abstract:
This PhD thesis deals with the control of nonlinear systems subject to nondifferentiable or nonconvex constraints. The objective is to design a control law that can take into account any type of constraint that can be evaluated online. To achieve this goal, model predictive control has been used, with barrier functions added to the cost function. A gradient-free optimization algorithm has been used to solve the resulting optimization problem. In addition, a cost function formulation has been proposed to ensure stability and robustness against disturbances for linear systems. The proof of stability is based on invariant sets and Lyapunov theory. In the case of nonlinear systems, dynamic neural networks have been used as the predictor for model predictive control. The training of these networks and the nonlinear observers required for their use have been studied. Finally, our study has focused on improving neural network prediction in the presence of disturbances. The synthesis method presented in this work has been applied to obstacle avoidance by an autonomous vehicle.
APA, Harvard, Vancouver, ISO, and other styles
17

"Some results on familywise robustness for multiple comparison procedures." 2005. http://library.cuhk.edu.hk/record=b5892700.

Full text
Abstract:
Chan Ka Man.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 46-48).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.iii
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Multiple comparison procedures and their applications --- p.1
Chapter 1.2 --- Different types of error control --- p.3
Chapter 1.3 --- Single-step and stepwise procedures --- p.5
Chapter 1.4 --- From familywise error rate control to false discovery rate control --- p.8
Chapter 1.5 --- The FDR procedure of BH --- p.10
Chapter 1.6 --- Application of the FDR procedure --- p.11
Chapter 1.7 --- Family size and family size robustness --- p.16
Chapter 1.8 --- Objectives of the thesis --- p.17
Chapter 2 --- The Familywise Robustness Criteria --- p.18
Chapter 2.1 --- The basic idea of familywise robustness --- p.18
Chapter 2.2 --- Definitions and notations --- p.19
Chapter 2.3 --- The measurement of robustness to changing family size --- p.21
Chapter 2.4 --- Main Theorems --- p.21
Chapter 2.5 --- Example --- p.23
Chapter 2.6 --- Summary --- p.24
Chapter 3 --- FDR and FWR --- p.26
Chapter 3.1 --- Positive false discovery rate --- p.26
Chapter 3.2 --- A unified approach to FDR --- p.29
Chapter 3.3 --- The S procedure --- p.30
Chapter 3.4 --- Familywise robustness criteria and the S procedure --- p.32
Chapter 4 --- Simulation Study --- p.41
Chapter 4.1 --- The setup --- p.41
Chapter 4.2 --- Simulation result --- p.43
Chapter 4.3 --- Conclusions --- p.44
Bibliography --- p.46
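The FDR procedure of BH listed in Chapter 1.5 is the Benjamini-Hochberg step-up procedure. A minimal sketch, assuming a plain list of p-values and a target FDR level `q` (the names are illustrative, not from the thesis):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: sort the m p-values, find the
    largest rank k with p_(k) <= (k/m)*q, and reject the k hypotheses with
    the smallest p-values."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            k = rank
    return {order[i] for i in range(k)}  # indices of rejected hypotheses
```

Under independence (and positive dependence), this controls the false discovery rate at level q.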
APA, Harvard, Vancouver, ISO, and other styles
18

Ribeiro, Filipe Luís Alves. "Robustness Analysis of Structures in Post-Earthquake Scenarios Considering Multiple Hazards." Doctoral thesis, 2017. http://hdl.handle.net/10362/20212.

Full text
Abstract:
Recent earthquakes have highlighted that the consideration of isolated seismic events, although necessary, may not be sufficient to prevent building collapse. In fact, the occurrence of a large number of aftershocks with significant intensity, as well as the occurrence of tsunamis, fires, and explosions, poses a safety threat that has not been addressed properly in the design and assessment of building structures over the last decade. Although research has been developed in order to evaluate the impact of multiple and/or cascading hazards on structural safety and economic losses, there is no established framework to perform such analyses. In addition, the available numerical tools lack a unified implementation in a widely used software package that would allow for the development of large numerical simulations involving these hazard events. This work proposes a probabilistic framework for quantifying the robustness of structures considering the occurrence of a major earthquake (mainshock) and subsequent cascading hazard events, namely fire and aftershocks. These events can significantly increase the probability of collapse of buildings, especially for structures that are damaged during the mainshock. In order to assess structural performance under post-earthquake hazards, it is of paramount importance to accurately simulate the damage sustained during the earthquake, which is strongly correlated with the residual structural capacity to withstand cascading events. In this context, ground motion characteristics, namely ground motion duration, have been identified as parameters that may induce significant bias in the damage patterns associated with the mainshock. Thus, the influence of ground motion duration on structural damage is analyzed in this work. Steel moment-resisting frame buildings designed according to pre-Northridge codes are analyzed using the proposed framework.
These buildings are representative of decades of design practice in the US and Europe, and the conclusions of this work can be significant for the assessment/retrofit of thousands of buildings. Fragility curves and reliability-based robustness measures are obtained using the proposed framework. The fragility curve parameters obtained herein can be used in the development of future probability-based studies considering post-earthquake hazards. The results highlight the importance of post-earthquake hazard events in structural safety assessment. Further work is needed in order to better characterize these hazards so that they can be included in code-based design and assessment methodologies.
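Fragility curves such as those produced by the framework are commonly parameterized as lognormal CDFs of the ground-motion intensity measure. A minimal sketch under that common assumption (the thesis's exact parameterization may differ):

```python
import math

def fragility(im, theta, beta):
    """Lognormal fragility curve: probability of reaching or exceeding a
    damage state given an intensity measure `im`, a median capacity `theta`,
    and a lognormal standard deviation `beta`."""
    z = (math.log(im) - math.log(theta)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

By construction, the exceedance probability is 0.5 at the median capacity and increases monotonically with the intensity measure.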
APA, Harvard, Vancouver, ISO, and other styles
19

Kim, Tae Yoon. "Rate-robustness tradeoffs in multicarrier wireless communications." Thesis, 2006. http://hdl.handle.net/2152/2919.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

"Methodical Design Approaches to Multiple Node Collection Robustness for Flip-Flop Soft Error Mitigation." Doctoral diss., 2015. http://hdl.handle.net/2286/R.I.29650.

Full text
Abstract:
The space environment comprises cosmic ray particles, heavy ions, and high-energy electrons and protons. Microelectronic circuits used in space applications such as satellites and space stations are prone to upsets induced by these particles. With transistor dimensions shrinking due to continued scaling, terrestrial integrated circuits are also increasingly susceptible to radiation upsets. Hence, radiation hardening is a requirement for microelectronic circuits used in both space and terrestrial applications. This work begins by exploring the different radiation-hardened flip-flops that have been proposed in the literature and classifies them based on the hardening techniques used. A reduced-power delay element for the temporal hardening of sequential digital circuits is presented. The delay element's single-event-transient tolerance is demonstrated by simulations that use it in a radiation-hardened-by-design master-slave flip-flop (FF). Using the proposed delay element saves up to 25% of total FF power at a 50% activity factor. The delay element is used in the implementation of an 8-bit 8051 microcontroller designed in the TSMC 130 nm bulk CMOS process. A single impinging ionizing radiation particle is increasingly likely to upset multiple circuit nodes and produce logic transients that contribute to the soft error rate in most modern scaled process technologies. The design of flip-flops is made more difficult by increasing multi-node charge collection, which requires that charge storage and other sensitive nodes be separated so that one impinging radiation particle does not affect redundant nodes simultaneously. We describe a correct-by-construction design methodology to determine a priori which hardened FF nodes must be separated, as well as a general interleaving scheme to achieve this separation. We apply the methodology to radiation-hardened flip-flops and demonstrate optimal physical circuit organization for protection against multi-node charge collection.
Finally, the methodology is utilized to provide critical node separation for a new hardened flip-flop design that reduces the power and area by 31% and 35% respectively compared to a temporal FF with similar hardness. The hardness is verified and compared to other published designs via the proposed systematic simulation approach that comprehends multiple node charge collection and tests resiliency to upsets at all internal and input nodes. Comparison of the hardness, as measured by estimated upset cross-section, is made to other published designs. Additionally, the importance of specific circuit design aspects to achieving hardness is shown.
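Why multi-node charge collection is dangerous can be seen in a toy model of any voting-based redundancy: an upset on one of three separated copies is masked, but a single particle that reaches two copies at once flips the output. This is an illustrative 2-of-3 majority vote, not the hardened flip-flop circuit from the dissertation:

```python
def majority(a, b, c):
    """Bitwise 2-of-3 majority vote over three redundant copies of a value;
    it tolerates an upset on any ONE copy but not on two at once."""
    return (a & b) | (b & c) | (a & c)
```

This is exactly why the node-separation methodology matters: physical separation makes it unlikely that one particle's charge is collected by two redundant nodes simultaneously.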
Dissertation/Thesis
Doctoral Dissertation Electrical Engineering 2015
APA, Harvard, Vancouver, ISO, and other styles
21

Ahn, Edwin S. "Addressing Stability Robustness, Period Uncertainties, and Startup of Multiple-Period Repetitive Control for Spacecraft Jitter Mitigation." Thesis, 2013. https://doi.org/10.7916/D8X63VBF.

Full text
Abstract:
Repetitive Control (RC) is a relatively new form of control that seeks to converge to zero tracking error when executing a periodic command, or when executing a constant command in the presence of a periodic disturbance. The design makes use of knowledge of the period of the disturbance or command, and uses the error observed in the previous period to update the command in the present period. The usual RC approaches address one period, which means they can potentially address, simultaneously, DC or constant error, the fundamental frequency for that period, and all harmonics up to the Nyquist frequency. Spacecraft often have multiple sources of periodic excitation: slight imbalance in the reaction wheels used for attitude control creates three disturbance periods. A special RC structure, referred to as Multiple-Period Repetitive Control (MPRC), was developed to address multiple unrelated periods. In practice, MPRC faces three main challenges for hardware implementation: instability due to model errors or parasitic high-frequency modes, degradation of the final error level due to period uncertainties or fluctuations, and bad transients due to issues at startup. Regarding these three challenges, the thesis develops a series of methods to enhance the performance of MPRC, or to assist in analyzing its performance, for mitigating optical jitter induced by mechanical vibration within the structure of a spacecraft testbed. Experimental analysis shows contrasting advantages of MPRC over existing adaptive control algorithms, such as Filtered-X LMS, Adaptive Model Predictive Control, and the Adaptive Basis Method, for mitigating jitter within the transmitting beam of Laser Communication (LaserCom) satellites.
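The core RC idea, correcting each sample of the command using the error observed one period earlier, can be sketched on a toy static plant. The plant model, gain, and disturbance below are illustrative assumptions, not the spacecraft testbed dynamics:

```python
def repetitive_control(period, cycles, gain, disturbance, plant_gain=1.0):
    """One-period-memory repetitive control on a toy static plant
    y = plant_gain * u + d, with d periodic.  The command stored for each
    sample of the period is corrected by the error seen one period earlier.
    Returns the total absolute error per cycle."""
    u = [0.0] * period        # one command entry per sample in the period
    errors = []
    for _ in range(cycles):
        cycle_err = 0.0
        for t in range(period):
            y = plant_gain * u[t] + disturbance[t]
            e = 0.0 - y       # regulate to zero despite the disturbance
            u[t] += gain * e  # period-based learning update
            cycle_err += abs(e)
        errors.append(cycle_err)
    return errors
```

With a learning gain between 0 and 2 (for this unit-gain toy plant), the per-cycle error decays geometrically toward zero, which is the zero-tracking-error convergence property RC is built around.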
APA, Harvard, Vancouver, ISO, and other styles
22

Tsai, Zhi-Ren, and 蔡志仁. "Robustness Design of Fuzzy Control for Nonlinear Multiple Time-Delay Large-scale Systems via Neural-Network-Based Approach." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/97157702482637979986.

Full text
Abstract:
Master's thesis
Chang Gung University
Graduate Institute of Electrical Engineering
89
The stabilization problem is considered in this study for a nonlinear multiple time-delay large-scale system via a neural-network (NN)-based approach. First, an NN model is employed to approximate each nonlinear multiple time-delay subsystem. Then, a linear difference inclusion (LDI) state-space representation is established for the dynamics of each NN model. Based on the LDI state-space representation, a robust fuzzy control design is proposed to overcome the effect of modeling errors between the nonlinear multiple time-delay subsystems and the NN models. Using Lyapunov's direct method, a delay-dependent stability criterion is then derived to guarantee the asymptotic stability of nonlinear multiple time-delay large-scale systems. Subsequently, based on this criterion and a decentralized control scheme, a set of fuzzy controllers is synthesized to stabilize the nonlinear multiple time-delay large-scale system. Finally, a numerical example with simulations is given to illustrate the results.
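The LDI idea, local linear models blended by normalized fuzzy membership grades, can be sketched in a scalar toy form. The membership shape and model coefficients below are illustrative assumptions; the thesis works with NN-derived LDI models of time-delay subsystems:

```python
def memberships(x):
    """Illustrative normalized fuzzy membership grades for two local models."""
    h1 = 1.0 / (1.0 + x * x)
    return [h1, 1.0 - h1]

def ldi_simulate(x0, models, steps):
    """Scalar linear difference inclusion x+ = (sum_i h_i(x) * a_i) * x:
    at each step the local linear models are blended by the grades h_i."""
    x = x0
    for _ in range(steps):
        x = sum(h * a for h, a in zip(memberships(x), models)) * x
    return x
```

When every local model is stable (|a_i| < 1 here), V(x) = x^2 serves as a common Lyapunov function for the whole inclusion, so the blended state decays regardless of how the grades vary.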
APA, Harvard, Vancouver, ISO, and other styles
23

Mahmoud, El Sayed. "An investigation of a novel analytic model for the fitness of a multiple classifier system." Thesis, 2012. http://hdl.handle.net/10214/4620.

Full text
Abstract:
The growth in the use of machine learning in different areas has revealed challenging classification problems that require robust systems. Multiple Classifier Systems (MCSs) have attracted interest from researchers as a method that could address such problems. Optimizing the fitness of an MCS improves its robustness, yet the literature lacks an analysis of MCSs from a fitness perspective. To fill this gap, an analytic model from this perspective is derived mathematically by extending the error analysis introduced by Brown and Kuncheva in 2010. The model relates the fitness of an MCS to the average accuracy, positive-diversity, and negative-diversity of the classifiers that constitute the MCS. The model is verified using a statistical analysis of a Monte-Carlo-based simulation, which shows the significance of the relationships indicated by the model. The model provides guidelines for developing robust MCSs: it enables the selection of the classifiers that compose an MCS with improved fitness while reducing computational cost by avoiding local calculations. The usefulness of the model for designing classification systems is investigated. A new measure combining accuracy and positive-diversity is developed; it evaluates fitness while avoiding many of the calculations required by standard measures. A new system (Gadapt) is developed that combines machine learning and genetic algorithms to define subsets of the feature space that closely match true class regions. It uses the new measure as a multi-objective criterion for a multi-objective genetic algorithm to identify the MCSs that create the subsets. The design of Gadapt is validated experimentally. The usefulness of the measure and of the method of determining the subsets for the performance of Gadapt is examined on five generated data sets that represent a wide range of problems.
The robustness of Gadapt to small amounts of training data is evaluated in comparison with five existing systems on four benchmark data sets, and its performance is evaluated in comparison with eleven existing systems on nine benchmark data sets. Analysis of the experimental results supports the validity of the Gadapt design and shows that Gadapt outperforms the existing systems in terms of robustness and performance.
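The quantities such a model relates (average member accuracy, diversity, and ensemble behavior) are easy to compute for binary classifiers. Pairwise disagreement below is a common diversity proxy, not the thesis's exact positive-diversity and negative-diversity definitions:

```python
from itertools import combinations

def ensemble_stats(predictions, truth):
    """Average member accuracy, mean pairwise disagreement (a common
    diversity proxy), and majority-vote accuracy for an MCS of binary
    classifiers.  `predictions` holds one 0/1 label list per classifier
    (at least two classifiers assumed)."""
    n = len(truth)
    acc = [sum(p == t for p, t in zip(pred, truth)) / n for pred in predictions]
    pairs = list(combinations(predictions, 2))
    dis = sum(sum(a != b for a, b in zip(p, q)) / n for p, q in pairs) / len(pairs)
    votes = [sum(col) > len(predictions) / 2 for col in zip(*predictions)]
    mv_acc = sum(int(v) == t for v, t in zip(votes, truth)) / n
    return sum(acc) / len(acc), dis, mv_acc
```

A small example shows why diversity matters: three classifiers that are each only 67% accurate, but wrong on different samples, vote their way to a perfect ensemble.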
APA, Harvard, Vancouver, ISO, and other styles
24

Benyamina, Djohara. "Conception des réseaux maillés sans fil à multiples-radios multiples-canaux." Thèse, 2010. http://hdl.handle.net/1866/4317.

Full text
Abstract:
Generally, network design problems consist of selecting links and vertices of a graph G so that a cost function is optimized and all constraints involving the links and vertices of G are met. A change in the optimization criterion and/or the set of constraints leads to a new representation of a different problem. In this thesis, we consider the problem of designing infrastructure Wireless Mesh Networks (WMNs), and we show that the design of such networks becomes a multi-objective optimization problem, rather than a standard optimization problem in which a single cost function is optimized, in order to take into account many aspects that are often contradictory yet essential in practice. This thesis, composed of three parts, introduces new models and algorithms for designing WMNs from scratch. The first part is devoted to the simultaneous optimization of two equally important objectives: cost and network performance in terms of throughput. Three bi-objective models, which differ mainly in the approach used to maximize network performance, are proposed, solved, and compared. The second part deals with the gateway placement problem, given its impact on network performance and scalability. The concept of hop constraints is introduced into the network design to limit the transmission delay. A novel algorithm based on a clustering approach is proposed to find strategic gateway positions that support network scalability and increase performance without significantly increasing the installation cost. The final part addresses the problem of network reliability in the presence of single failures. Installing redundant components in the design phase can ensure reliable communications, but at the expense of cost and network performance. A new algorithm based on the theoretical approach of "ear decomposition" is developed to install the minimum number of additional routers needed to tolerate single failures.
To solve the proposed models for real-size networks, a nature-inspired evolutionary algorithm (a meta-heuristic) is developed. Finally, the proposed models and methods have been evaluated through empirical and discrete-event simulations.
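The bi-objective trade-off between installation cost and throughput is typically summarized by the set of non-dominated solutions. A minimal Pareto-dominance filter, assuming plain (cost, throughput) pairs with cost minimized and throughput maximized (an illustration, not the thesis's algorithm):

```python
def pareto_front(solutions):
    """Return the non-dominated (cost, throughput) pairs: a solution is
    dominated if some distinct solution is at least as cheap AND has at
    least as much throughput."""
    front = []
    for c, t in solutions:
        dominated = any(c2 <= c and t2 >= t and (c2, t2) != (c, t)
                        for c2, t2 in solutions)
        if not dominated:
            front.append((c, t))
    return front
```

Multi-objective meta-heuristics such as the evolutionary algorithm described in the thesis evolve a population toward this front instead of toward a single optimum.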
APA, Harvard, Vancouver, ISO, and other styles
25

Michal, Victoire. "Estimation multi-robuste efficace en présence de données influentes." Thèse, 2019. http://hdl.handle.net/1866/22553.

Full text
APA, Harvard, Vancouver, ISO, and other styles