Theses / Dissertations on the topic "Interval filter"


Consult the 50 best theses / dissertations for your research on the topic "Interval filter".

Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, when it is available in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.

1

Avcu, Soner. "Radar Pulse Repetition Interval Tracking With Kalman Filter". Thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607691/index.pdf.

Abstract:
In this thesis, the problem of tracking the radar pulse repetition interval (PRI) with a Kalman filter is investigated. The most common types of PRI are constant PRI, step (jittered) PRI, staggered PRI, and sinusoidally modulated PRI. This thesis considers the step case (this type of PRI agility reduces to constant PRI when the jitter on the PRI values is eliminated) and the staggered case. Different algorithms have been developed for tracking step and staggered PRIs. Some useful simplifications are obtained in the algorithm developed for the step PRI sequence. Two different algorithms, robust to the effects of missing pulses, developed for the staggered PRI sequence are compared in terms of estimation performance. Both algorithms have two parts: a period-detection part and a Kalman filter model. The advantages and disadvantages of these algorithms are presented. Simulations are implemented in MATLAB.
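The step case above, with the jitter treated as measurement noise, reduces in its simplest form to tracking a single PRI value with a scalar Kalman filter. The sketch below is a minimal illustration of that idea, not the thesis's algorithm; the PRI value, jitter model and noise parameters are assumptions made here:

```python
import random

def track_pri(toas, q=1e-4, r=4.0):
    """Track the pulse repetition interval (PRI) from a list of times
    of arrival (TOAs) with a scalar Kalman filter.
    State: the PRI itself; measurement: successive TOA differences."""
    x = toas[1] - toas[0]  # initial PRI estimate from the first difference
    p = r                  # initial estimate variance
    for k in range(2, len(toas)):
        z = toas[k] - toas[k - 1]  # measured interval (PRI + jitter)
        p += q                     # predict: constant-PRI model
        kg = p / (p + r)           # Kalman gain
        x += kg * (z - x)          # update the estimate
        p *= 1.0 - kg              # update the variance
    return x

# Simulate a constant 1000-us PRI with +/-2 us uniform jitter.
random.seed(0)
pri = 1000.0
toas, t = [0.0], 0.0
for _ in range(500):
    t += pri + random.uniform(-2.0, 2.0)
    toas.append(t)
print(track_pri(toas))  # close to 1000.0
```

With a small process noise q the filter behaves like a recursive averager of the TOA differences; a larger q would let it follow step changes between PRI levels.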
2

Janapala, Arun. "RR INTERVAL ESTIMATION FROM AN ECG USING A LINEAR DISCRETE KALMAN FILTER". Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3426.

Abstract:
An electrocardiogram (ECG) is used to monitor the activity of the heart. The human heart beats seventy times per minute on average. The rate at which a human heart beats can exhibit a periodic variation, known as heart rate variability (HRV). Heart rate variability is an important measurement that can predict survival after a heart attack; studies have shown that reduced HRV predicts sudden death in patients with myocardial infarction (MI). The time interval between each beat is called an RR interval, and the heart rate is given by the reciprocal of the RR interval expressed in beats per minute. For a deeper insight into the dynamics underlying the beat-to-beat RR variations, and to understand the overall variance in HRV, an accurate method of estimating the RR interval must be obtained. Before an HRV computation can be performed, the RR interval data must be of good quality and reliable. Most QRS detection algorithms can easily miss a QRS pulse, producing unreliable RR interval values. It is therefore necessary to estimate the RR interval in the presence of missing QRS beats. The approach in this thesis is to apply a Kalman estimation algorithm to the RR interval data calculated from the ECG. The goal is to improve the RR interval values obtained from ECG data with missed beats.
M.S.E.E.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Electrical Engineering
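The core of the approach above — Kalman estimation of the RR interval that stays reliable when a QRS beat is missed — can be sketched in scalar form. The halving rule for intervals near twice the prediction is an assumption made here for illustration, not necessarily the thesis's detection logic:

```python
def estimate_rr(rr_measurements, q=1e-4, r=1e-3):
    """Scalar Kalman estimate of the RR interval (seconds).
    A measured interval near twice the prediction is treated as a
    missed QRS beat and split in half before the update (a simple
    gating rule assumed here for illustration)."""
    x = rr_measurements[0]
    p = r
    estimates = []
    for z in rr_measurements[1:]:
        p += q
        if z > 1.5 * x:      # likely one missed beat
            z /= 2.0
        kg = p / (p + r)     # Kalman gain
        x += kg * (z - x)
        p *= 1.0 - kg
        estimates.append(x)
    return estimates

rr = [0.85, 0.86, 0.84, 1.70, 0.85, 0.86, 0.84]  # 1.70 = missed beat
est = estimate_rr(rr)
print(round(est[-1], 2))  # 0.85: the missed beat does not corrupt the estimate
```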
3

Motwani, Amit. "Interval Kalman filtering techniques for unmanned surface vehicle navigation". Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3368.

Abstract:
This thesis is about a robust filtering method known as the interval Kalman filter (IKF), an extension of the Kalman filter (KF) to the domain of interval mathematics. The key limitation of the KF is that it requires precise knowledge of the system dynamics and associated stochastic processes. In many cases, however, system models are at best only approximately known. To overcome this limitation, the idea is to describe the uncertain model coefficients in terms of bounded intervals and operate the filter within the framework of interval arithmetic. In trying to do so, practical difficulties arise, such as the large overestimation of the resulting set estimates owing to the over-conservatism of interval arithmetic. This thesis proposes and demonstrates a novel and effective way to limit such overestimation for the IKF, making it feasible and practical to implement. The theory developed is of general application, but is applied in this work to the heading estimation of the Springer unmanned surface vehicle, which until now relied solely on the estimates of a traditional KF. However, the IKF itself simply provides the range of possible vehicle headings; in practice, the autonomous steering system requires a single, point-valued estimate of the heading. To address this requirement, an innovative approach based on machine learning methods has been developed to select an adequate point-valued estimate. The resulting weighted IKF (wIKF) estimate provides a single heading estimate that is robust to bounded model uncertainty. In addition, to exploit low-cost sensor redundancy, a multi-sensor data fusion algorithm has been developed that is compatible with the wIKF estimates and additionally provides sensor fault tolerance. All these techniques have been implemented on the Springer platform and verified experimentally in a series of full-scale trials, presented in the last chapter of the thesis. The outcomes demonstrate that the methods are both feasible and practicable, and far more effective than the conventional KF at providing accurate estimates of the vehicle's heading when there is uncertainty in the system model and/or sensor failure occurs.
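The over-conservatism the abstract refers to stems from the dependency problem of interval arithmetic: every occurrence of a variable is treated as independent, so set enclosures inflate. A toy illustration in ordinary interval arithmetic (not the IKF itself):

```python
class Interval:
    """Minimal closed-interval arithmetic for illustration."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(ps), max(ps))
    def width(self):
        return self.hi - self.lo
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
# Dependency problem: both occurrences of x are treated as
# independent, so x - x is [-1, 1] rather than [0, 0].
print(x - x)  # [-1.0, 1.0]

# Iterated products show the same effect: the relative width of the
# enclosure grows at every step, even though the true map contracts.
a = Interval(0.49, 0.51)
y = Interval(1.0, 1.0)
for _ in range(5):
    y = a * y
print(y.width() / y.lo > 5 * (a.width() / a.lo))  # True
```

In a recursive filter this inflation compounds at every prediction step, which is why a naive IKF quickly produces uselessly wide state enclosures.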
4

Nicklas, Richard B. "An application of a Kalman Filter Fixed Interval Smoothing Algorithm to underwater target tracking". Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25691.

Abstract:
A Fortran program was developed to implement a Kalman filter and fixed-interval smoothing algorithm to optimally smooth target tracks generated by the short-baseline tracking ranges at the Naval Torpedo Station, Keyport, Washington. The program is designed to run on a personal computer and requires as input a data file consisting of X, Y, and Z position coordinates in sequential order. Data files containing the filtered and smoothed estimates are generated by the program. The algorithm uses a second-order linear model to predict a typical target's dynamics. The program listings are included as appendices. Several runs of the program were performed using actual range data as inputs. Results indicate that the program effectively reduces random noise, thus providing very smooth target tracks which closely follow the raw data. Tracks containing data generated in an overlap region, where one array hands off the target to the next, are highlighted. The effects of varying the magnitude of the excitation matrix Q(k) are also explored. This program is seen as a valuable post-data-analysis tool for the current tracking range data. In addition, it can easily be modified to provide improved real-time, online tracking using the Kalman filter portion of the algorithm alone.
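The filter/smoother combination described above can be sketched, for a scalar state, as a Kalman forward pass followed by a Rauch-Tung-Striebel backward pass. This is an illustrative Python sketch under assumed parameters, not the thesis's Fortran program:

```python
def rts_smooth(zs, a=1.0, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter followed by a Rauch-Tung-Striebel
    fixed-interval smoother."""
    n = len(zs)
    xp, pp = [0.0]*n, [0.0]*n   # predicted state / variance
    xf, pf = [0.0]*n, [0.0]*n   # filtered state / variance
    x, p = x0, p0
    for k, z in enumerate(zs):
        x, p = a*x, a*a*p + q   # predict
        xp[k], pp[k] = x, p
        kg = p / (p + r)        # update
        x += kg * (z - x)
        p *= 1.0 - kg
        xf[k], pf[k] = x, p
    xs, ps = xf[:], pf[:]
    for k in range(n - 2, -1, -1):   # backward RTS pass
        c = pf[k] * a / pp[k + 1]
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
        ps[k] = pf[k] + c * c * (ps[k + 1] - pp[k + 1])
    return xf, pf, xs, ps

zs = [0.1, -0.2, 0.3, 0.0, 0.2, -0.1, 0.1, 0.0]
xf, pf, xs, ps = rts_smooth(zs)
# Smoothing never increases uncertainty: ps[k] <= pf[k] for every k.
print(all(s <= f + 1e-12 for s, f in zip(ps, pf)))  # True
```

The printed property — smoothed variance never exceeding filtered variance — is exactly why fixed-interval smoothing yields the smoother tracks reported above.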
5

Galinis, William J. "Fixed interval smoothing algorithm for an extended Kalman filter for over-the-horizon ship tracking". Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27057.

Abstract:
The performance of an extended Kalman filter used to track a maneuvering surface target from HFDF lines of bearing is substantially improved by implementing a fixed-interval smoothing algorithm and a maneuver detection method that uses a noise variance estimator. This tracking routine is designed and implemented in a computer program developed for this thesis. The Hall noise model is used to accurately evaluate the performance of the tracking algorithm in a noisy environment. Several tracking scenarios are simulated and analyzed. The application of the Kalman tracker to tropical storm tracking is also investigated; actual storm tracks obtained from the Joint Typhoon Warning Center in Guam, Mariana Islands are used for this research.
6

Mohammedi, Irryhl. "Contribution à l’estimation robuste par intervalle des systèmes multivariables LTI et LPV : Application aux systèmes aérospatiaux". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0142.

Abstract:
This thesis develops new approaches based on a particular class of state estimators: so-called interval filters. As with interval observers, the objective is to estimate, in a guaranteed way, the upper and lower bounds of the states of a system at each time instant. The proposed approach is based on the theory of monotone systems and on a priori knowledge of the membership domain, assumed bounded, of the model uncertainties and exogenous inputs (disturbances, measurement noise, etc.). The key element of the proposed approach is the use of a filter of arbitrary order, with no a priori fixed structure, rather than an observer-based structure (which relies solely on the dynamic structure of the studied system). The synthesis of the filter parameters relies on solving an optimization problem under linear and bilinear matrix inequality (LMI and BMI) constraints, guaranteeing simultaneously the existence conditions of the filter and a performance level, either in an energy context, in an amplitude context, or in a mixed energy/amplitude context. The proposed synthesis methodology is illustrated on an academic example and compared with other methods existing in the literature. Finally, the methodology is applied to the attitude and orbit control of a satellite under realistic simulation conditions.
7

Al Mashhadani, Waleed. "The use of multistatic radar in reducing the impact of wind farm on civilian radar system". Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/the-use-of-multistaic-radar-in-reducing-the-impact-of-wind-farm-on-civilian-radar-system(a80fd906-e670-42a0-9efb-ea22250c87f2).html.

Abstract:
The effects of wind farm installations on conventional monostatic radar operation have been investigated in previous studies. The interference with radar operation is due to the complex scattering characteristics of the wind turbine structure. This research considers an alternative approach for studying and potentially mitigating these negative impacts by adopting the multistatic radar technique. This radar principle is well known and has recently attracted renewed research interest, but it has not previously been applied to modelling wind farm interference with multistatic radar detection and tracking of multiple targets. The research proposes two areas of novelty. The first is the development of a simulation tool for multistatic radar operation near a wind farm environment. The second is the adaptation of a range-only target detection approach based on mathematical and/or statistical methods for target detection and tracking, such as interval analysis and the particle filter. These methods have not been applied to such a complex detection scenario, with a large number of targets within a wind farm environment. A range-only detection approach is often adopted to achieve flexibility in design and to reduce the cost and complexity of the radar system. However, it may require advanced signal processing techniques to effectively associate measurements from multiple sensors and estimate target positions. This issue proved to be more challenging in the complex detection environment of a wind farm, owing to the increased number of measurements from the complex radar scattering of each turbine. The research conducts a comparison between interval analysis and the particle filter, based on three aspects: the number of real targets detected, the number of ghost targets detected, and the accuracy of the estimated detections. Different detection scenarios are considered for this comparison, such as single-target detection, wind farm detection, and ultimately multiple targets at various elevations within a wind farm environment.
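Range-only detection with interval analysis can be illustrated by a simple box test: a box of candidate positions is retained only when the interval of possible distances from the box to each sensor is consistent with the measured range within a bounded error. The two-sensor geometry and error bound below are assumptions made for illustration, not the thesis's radar scenario:

```python
import math

def consistent(box, sensors, ranges, err):
    """Keep a box ((xl, xu), (yl, yu)) only if, for every sensor, the
    interval of possible distances from the box overlaps the measured
    range +/- err."""
    (xl, xu), (yl, yu) = box
    for (sx, sy), rho in zip(sensors, ranges):
        dx = [abs(v) for v in (xl - sx, xu - sx)]
        dy = [abs(v) for v in (yl - sy, yu - sy)]
        dmin_x = 0.0 if xl <= sx <= xu else min(dx)
        dmin_y = 0.0 if yl <= sy <= yu else min(dy)
        dmin = math.hypot(dmin_x, dmin_y)
        dmax = math.hypot(max(dx), max(dy))
        if dmax < rho - err or dmin > rho + err:
            return False   # the box cannot contain the target
    return True

sensors = [(0.0, 0.0), (10.0, 0.0)]
target = (4.0, 3.0)
ranges = [math.dist(target, s) for s in sensors]
# Tile the search area and keep the boxes consistent with both ranges.
step = 0.5
kept = [((x, x + step), (y, y + step))
        for x in [i * step for i in range(-2, 24)]
        for y in [i * step for i in range(-2, 24)]
        if consistent(((x, x + step), (y, y + step)), sensors, ranges, 0.1)]
print(len(kept) > 0)  # True: the true position survives the test
```

With only two collinear sensors, the mirror image of the target about the sensor baseline is equally consistent; such ambiguous solutions are the kind of ghost targets counted in the comparison above.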
8

Ipek, Ozlem. "Target Tracking With Phased Array Radar By Using Adaptive Update Rate". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/2/12611589/index.pdf.

Abstract:
In radar target tracking, it may be necessary to use an adaptive update rate in order to maintain tracking accuracy while letting the radar use its resources economically. This is generally the case when the target trajectory has maneuvering segments, and in such cases the use of adaptive update-interval algorithms for estimating the target state may enhance tracking accuracy. Conventionally, a fixed track update interval is used in radar target tracking, owing to the traditional nature of mechanically steered radars. In this thesis, as an application to phased array radar, the adaptive update rate approach developed in the literature for the alpha-beta filter is extended to the Kalman filter. A survey of the adaptive update rate algorithms previously used in the radar target tracking literature is presented, including aspects related to the flexibility of these algorithms with respect to the tracking filter. The investigation of the adaptive update rate algorithms is carried out with the Kalman filter on a single-target tracking problem in which the target has a 90° maneuvering segment in its trajectory. In this trajectory, the starting and ending times of the single maneuver are clearly specified, which is important for assessing the algorithms' performance. The effects of incorporating a variable update interval into the target tracking problem are presented and compared for several different test cases.
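A common way to realize an adaptive update rate is to shorten the revisit interval when the normalized innovation grows, as it does during a maneuver. The rule below is a hedged illustration of that principle, with made-up bounds and gain, not the specific algorithm extended in the thesis:

```python
def next_update_interval(innovation, s, t_min=0.1, t_max=2.0, gamma=1.0):
    """Choose the next track update interval from the normalized
    innovation: revisit sooner when the prediction error is large.
    innovation: measurement residual; s: innovation variance."""
    nis = innovation**2 / s          # normalized innovation squared
    t = t_max / (1.0 + gamma * nis)  # large error -> short interval
    return max(t_min, min(t_max, t))

print(next_update_interval(0.1, 1.0))  # quiescent target: near t_max
print(next_update_interval(3.0, 1.0))  # maneuvering target: near t_min
```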
9

Nicola, Jérémy. "Robust, precise and reliable simultaneous localization and mapping for an underwater robot. Comparison and combination of probabilistic and set-membership methods for the SLAM problem". Thesis, Brest, 2017. http://www.theses.fr/2017BRES0066/document.

Abstract:
In this thesis, we work on the problem of simultaneously localizing an underwater robot while mapping a set of acoustic beacons lying on the seafloor, using an acoustic range-meter and an inertial navigation system. We focus on the two main approaches classically used to solve this type of problem: Kalman filtering and set-membership filtering using interval analysis. The Kalman filter is optimal when the state equations of the robot are linear and the noises are additive, white and Gaussian. The interval-based filter does not model uncertainties in a probabilistic framework and makes only one assumption about their nature: they are bounded. Moreover, the interval-based approach rigorously propagates the uncertainties even when the equations are non-linear, which results in a highly reliable set estimate at the cost of reduced precision. We show that in a subsea context, when the robot is equipped with a high-precision inertial navigation system, part of the SLAM equations can reasonably be seen as linear with additive Gaussian noise, making it the ideal playground for a Kalman filter. On the other hand, the equations related to the acoustic range-meter are much more problematic: the system is not observable, the equations are non-linear, and outliers are frequent. These conditions are ideal for a set-based approach using interval analysis. By taking advantage of the properties of Gaussian noises, this thesis reconciles the probabilistic and set-membership treatment of uncertainties for both linear and non-linear systems with additive Gaussian noises. By reasoning geometrically, we are able to express the part of the Kalman filter equations linked to the dynamics of the vehicle in a set-membership context. In the same way, a more rigorous and precise treatment of uncertainties is described for the part of the Kalman filter equations linked to the range measurements. These two tools can then be combined to obtain a SLAM algorithm that is reliable, precise and robust. Some of the methods developed in this thesis are demonstrated on real data.
10

Akhbari, Mahsa. "Analyse des intervalles ECG inter- et intra-battement sur des modèles d'espace d'état et de Markov cachés". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT026.

Abstract:
Cardiovascular diseases are one of the major causes of mortality in humans. One way to diagnose heart diseases and abnormalities is the processing of cardiac signals such as the ECG. In many of these processes, inter-beat and intra-beat features of the ECG signal must be extracted. These features include the peaks, onsets and offsets of the ECG waves, and the meaningful intervals and segments that can be defined for the ECG signal. ECG fiducial point (FP) extraction refers to identifying the location of the peak, as well as the onset and offset, of the P-wave, QRS complex and T-wave, which convey clinically useful information. However, the precise segmentation of each ECG beat is a difficult task, even for experienced cardiologists. In this thesis, we use a Bayesian framework based on the McSharry ECG dynamical model for ECG FP extraction. Since this framework is based on the morphology of the ECG waves, it can be useful for ECG segmentation and interval analysis. In order to account for the sequential nature of the ECG signal, we also use the Markovian approach and hidden Markov models (HMM). In brief, in this thesis we use a dynamic model (Kalman filter), a sequential model (HMM) and their combination (the switching Kalman filter, SKF). We propose three Kalman-based methods, an HMM-based method and an SKF-based method, and use them for ECG FP extraction and ECG interval analysis. The Kalman-based methods are also used for ECG denoising, T-wave alternans (TWA) detection and fetal ECG R-peak detection. To evaluate the performance of the proposed methods for ECG FP extraction, we use the "Physionet QT database" and a "Swine ECG database" that include ECG signals annotated by physicians. For ECG denoising, we use the "MIT-BIH Normal Sinus Rhythm", "MIT-BIH Arrhythmia" and "MIT-BIH noise stress test" databases. The "TWA Challenge 2008 database" is used for TWA detection and, finally, the "Physionet Computing in Cardiology Challenge 2013 database" is used for R-peak detection of the fetal ECG. For ECG FP extraction, the performance of the proposed methods is evaluated in terms of the mean, standard deviation and root mean square of the error; we also compute the sensitivity of the methods. For ECG denoising, we compare the methods in terms of the obtained SNR improvement.
11

Dandach, Hoda. "Prédiction de l'espace navigable par l'approche ensembliste pour un véhicule routier". Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP1892/document.

Abstract:
In this thesis, we aim to characterize a vehicle's stable state domain, as well as to estimate the vehicle state, using interval methods. In the first part of the thesis, we are interested in intelligent vehicle state estimation. The Bayesian approach is one of the most popular estimation approaches; it is based on the probability density function conditioned on the available measurements, which is not always evident or simple to compute. Among the Bayesian approaches are the Kalman filter (KF) in its three forms (linear, extended and unscented), all of which assume unimodal Gaussian state and measurement distributions. As an alternative, the particle filter (PF) is a sequential Monte Carlo Bayesian estimator. Contrary to the Kalman filter, the PF can describe the posterior even when it is multimodal or when the noise follows a non-Gaussian distribution. However, the PF is very sensitive to imprecision caused by bias or noise, and its efficiency and accuracy depend mainly on the number of propagated particles, which can easily and significantly increase as a result of this imprecision. In this part, we introduce the interval framework to deal with non-white, biased measurements and bounded errors. We use the box particle filter (BPF), an estimator based simultaneously on interval analysis and on the particle approach, to estimate immeasurable states of the vehicle dynamics, such as the roll angle and the lateral load transfer. The BPF gives a guaranteed estimate of the state vector: the box enclosing the estimate is guaranteed to enclose the real value of the estimated variable as well. In the second part of the thesis, we aim to compute the vehicle's stable state domain. An algorithm based on the set inversion principle and constraint satisfaction is used. Considering the longitudinal velocity and the side-slip angle at the vehicle's centre of gravity, we characterize the set of these two state variables that corresponds to stable behaviour: neither roll-over nor sliding. For the roll-over risk, we use the lateral load transfer ratio (LTR) as a risk indicator; for the sliding risk, we use the wheel side-slip angles. All these variables are related geometrically to the longitudinal velocity and the side-slip angle at the centre of gravity. Using these constraints, the set inversion principle is applied to define the set of state variables for which the two risks are avoided. The SIVIA algorithm is implemented to approximate this set. Knowing the vehicle trajectory, a maximal allowed velocity on every part of the trajectory is deduced.
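The SIVIA set-inversion scheme mentioned above bisects candidate boxes until each is proved inside the feasible set, proved outside, or too small to split further. The following is a self-contained sketch on a toy annulus constraint, standing in for the thesis's roll-over and sliding constraints:

```python
def sivia(f_range, box, target, eps):
    """Set Inversion Via Interval Analysis: bisect boxes until each is
    proved inside the target set, proved outside, or smaller than eps.
    f_range maps a box to an interval enclosure of f over the box."""
    inside, boundary = [], []
    stack = [box]
    while stack:
        b = stack.pop()
        lo, hi = f_range(b)
        if lo >= target[0] and hi <= target[1]:
            inside.append(b)                # entirely feasible
        elif hi < target[0] or lo > target[1]:
            pass                            # entirely infeasible
        elif max(u - l for l, u in b) < eps:
            boundary.append(b)              # undecided, too small to split
        else:                               # bisect along the widest axis
            i = max(range(len(b)), key=lambda j: b[j][1] - b[j][0])
            l, u = b[i]
            m = (l + u) / 2
            b1, b2 = list(b), list(b)
            b1[i], b2[i] = (l, m), (m, u)
            stack += [tuple(b1), tuple(b2)]
    return inside, boundary

def ring(b):
    """Interval enclosure of f(x, y) = x**2 + y**2 over a box."""
    (xl, xu), (yl, yu) = b
    sq = lambda l, u: (0.0 if l <= 0.0 <= u else min(l*l, u*u), max(l*l, u*u))
    xlo, xhi = sq(xl, xu)
    ylo, yhi = sq(yl, yu)
    return xlo + ylo, xhi + yhi

# Stable-set analogue: points with 1 <= x^2 + y^2 <= 4 (an annulus).
inside, boundary = sivia(ring, ((-3.0, 3.0), (-3.0, 3.0)), (1.0, 4.0), 0.25)
print(len(inside) > 0)  # True: SIVIA finds boxes proved inside the ring
```

The `inside` boxes form a guaranteed inner approximation of the feasible set, and the `boundary` boxes bound the approximation error, which is how SIVIA characterizes the stable (navigable) domain.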
Estilos ABNT, Harvard, Vancouver, APA, etc.
12

Vincke, Bastien. "Architectures pour des systèmes de localisation et de cartographie simultanées". Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00770323.

Texto completo da fonte
Resumo:
Mobile robotics is a rapidly growing field. One research area consists in enabling a robot to map its environment while localizing itself in space. The commonly used SLAM (Simultaneous Localization And Mapping) techniques generally remain costly in terms of computing power. The current trend towards system miniaturization imposes restrictions on embedded resources. These observations guided us towards the integration of SLAM algorithms on suitable dedicated embedded architectures. The first work consisted in defining an architecture allowing a mobile robot to localize itself. This architecture must respect certain constraints, notably real-time operation, reduced dimensions and low power consumption. The optimized implementation of an algorithm (EKF-SLAM), making the best use of the architectural specificities of the system (processor capabilities, multi-core implementation, vector computation or parallelization on a heterogeneous architecture), demonstrated the possibility of designing embedded systems for SLAM applications in an algorithm-architecture co-design context. A second approach was explored with the objective of defining a system based on a reconfigurable (FPGA-based) architecture, allowing the design of a highly parallel architecture dedicated to SLAM. The defined architecture was evaluated using a HIL (Hardware in the Loop) methodology. The main SLAM algorithms are built around probability theory and in no way guarantee their localization results. A SLAM algorithm based on set-membership theory was defined, guaranteeing all the results obtained. Several algorithmic improvements are then proposed.
A comparison with the probabilistic algorithms highlighted the robustness of the set-membership approach. This thesis work puts forward two main contributions. The first consists in affirming the importance of algorithm-architecture co-design to solve the SLAM problem. The second is the definition of a set-membership method guaranteeing the localization and mapping results.
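As background to the EKF-SLAM implementation discussed in this abstract, the Kalman predict/update cycle at its core can be sketched on a 1-D localization toy problem (constant-velocity model, position-only measurements). This is a generic illustration, not the thesis' embedded pipeline:

```python
import numpy as np

# Minimal Kalman predict/update cycle: state = [position, velocity],
# measurement = noisy position.  The same two-step structure is what
# EKF-SLAM repeats over a much larger state (pose + landmarks).

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.5]])                   # measurement noise covariance

x = np.zeros(2)                         # state estimate
P = np.eye(2)                           # estimate covariance

rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 1.0
for _ in range(100):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(0.0, np.sqrt(R[0, 0]))
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(round(x[0], 2), round(x[1], 2))   # should approach (10.0, 1.0)
```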
Estilos ABNT, Harvard, Vancouver, APA, etc.
13

Munsif, Vishal. "Internal Control Reporting by Non-Accelerated Filers". FIU Digital Commons, 2011. http://digitalcommons.fiu.edu/etd/431.

Texto completo da fonte
Resumo:
I examine three issues related to internal control reporting by non-accelerated filers. Motivation for the three studies comes from the fact that Section 404 of the Sarbanes-Oxley Act (SOX) continues to be controversial, as evidenced by the permanent exemption from Section 404(b) of SOX granted to non-accelerated filers by the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010. The Dodd-Frank Act also requires the SEC to study compliance costs associated with smaller accelerated filers. In the first part of my dissertation, I document that the audit fee premium for non-accelerated filers disclosing a material weakness in internal controls (a) is significantly lower than the corresponding premium for accelerated filers, and (b) declines significantly over time. I also find that in the case of accelerated filers, remediating clients pay lower fees compared to clients continuing to report internal control problems; however, such differences are not observed in the case of non-accelerated filers. The second essay focuses on audit report lag. The results indicate that the presence of material weaknesses is associated with increased audit report lags, for both accelerated and non-accelerated filers. The results also indicate that the decline in report lag following remediation of problems is greater for accelerated filers than for non-accelerated filers. The third essay examines early warnings (pursuant to Section 302 disclosures) for firms that subsequently disclosed internal control problems in their Section 404 reports. The analyses indicate that non-accelerated firms with shorter CFO tenure, presence of accounting experts on the audit committee, and more frequent audit committee meetings are more likely to provide prior Section 302 warnings. Overall the results suggest that there are differences in internal control reporting between accelerated and non-accelerated filers.
The results provide empirical grounding for the ongoing debate about internal control reporting by non-accelerated filers.
Estilos ABNT, Harvard, Vancouver, APA, etc.
14

Zhang, Li. "On the role of internal atmospheric variability in ENSO dynamics". Texas A&M University, 2005. http://hdl.handle.net/1969.1/4310.

Texto completo da fonte
Resumo:
In the first part of this dissertation we use an Intermediate Coupled Model to develop a quantitative test to validate the null hypothesis that low-frequency variation of ENSO predictability may be caused by stochastic processes. Three "perfect model scenario" prediction experiments are carried out, where the model is forced either solely by stochastic forcing or additionally by decadal-varying backgrounds with different amplitudes. These experiments indicate that one cannot simply reject the null hypothesis unless the decadal-varying backgrounds are unrealistically strong. The second part of this dissertation investigates the extent to which internal atmospheric variability (IAV) can influence ENSO variation, and examines the underlying physical mechanisms linking IAV to ENSO variability with the aid of a newly developed coupled model consisting of an atmospheric general circulation model and a Zebiak-Cane type of reduced gravity ocean model. A novel noise filter algorithm is developed to suppress IAV in the coupled model. A long control coupled simulation, where the filter is not employed, demonstrates that the coupled model captures many statistical properties of the observed ENSO behavior. It further shows that the development of El Niño is linked to a boreal spring phenomenon referred to as the Pacific Meridional Mode (MM). The MM, characterized by an anomalous north-south SST gradient and anomalous surface circulation in the northeasterly trade regime with maximum variance in boreal spring, is inherent to thermodynamic ocean-atmosphere coupling in the Intertropical Convergence Zone latitudes. The Northern Pacific Oscillation provides one source of external forcing to excite it. This result supports the hypothesis that the MM works as a conduit for extratropical atmospheric influence on ENSO.
A set of coupled simulations, where the filter is used to suppress IAV, indicates that reducing IAV in both wind stress and heat flux substantially weakens ENSO variance. Furthermore, the resultant ENSO cycle becomes more regular and no longer shows strong seasonal phase locking. The seasonal phase locking of ENSO is strongly tied to the IAV in surface heat flux, while the ENSO cycle itself is strongly tied to IAV in surface wind stress.
Estilos ABNT, Harvard, Vancouver, APA, etc.
15

Callender, Christopher Peter. "Numerically robust implementations of fast recursive least squares adaptive filters using interval arithmetic". Thesis, University of Edinburgh, 1991. http://hdl.handle.net/1842/10853.

Texto completo da fonte
Resumo:
Algorithms have been developed which perform least squares adaptive filtering with great computational efficiency. Unfortunately, the fast recursive least squares (RLS) algorithms all exhibit numerical instability due to finite precision computational errors, resulting in their failure to produce a useful solution after a small number of iterations. In this thesis, a new solution to this instability problem is considered, making use of interval arithmetic. By modifying the algorithm so that upper and lower bounds are placed on all quantities calculated, it is possible to obtain a measure of confidence in the solution calculated by a fast RLS algorithm; if it is subject to a high degree of inaccuracy due to finite precision computational errors, then the algorithm may be rescued using a reinitialisation procedure. Simulation results show that the stabilised algorithms offer an accuracy of solution comparable with the standard recursive least squares algorithm. Both floating and fixed point implementations of the interval arithmetic method are simulated, and long-term stability is demonstrated in both cases. A hardware verification of the simulation results is also performed, using a digital signal processor (DSP). The results from this indicate that the stabilised fast RLS algorithms are suitable for a number of applications requiring high speed, real time adaptive filtering. A design study for a very large scale integration (VLSI) technology coprocessor, which provides hardware support for interval multiplication, is also considered. This device would enable the hardware realisation of a fast RLS algorithm to operate at far greater speed than that obtained by performing interval multiplication using a DSP. Finally, the results presented in this thesis are summarised and the achievements and limitations of the work are identified. Areas for further research are suggested.
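The confidence-monitoring idea in this abstract can be sketched minimally: carry every quantity as an interval [lo, hi], and reinitialise the recursion once the interval grows too wide to trust. The recursion below is a generic stand-in for the internal quantities of a fast RLS algorithm, not the algorithm itself, and all numbers are illustrative:

```python
# Interval-arithmetic "rescue" sketch: propagate bounds through a recursion
# y <- a*y + input, where the coefficient a and the input are only known to
# within intervals, and re-initialise whenever the bound width exceeds a
# confidence threshold.

def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def width(a):
    return a[1] - a[0]

a = (1.049, 1.051)          # slightly uncertain coefficient (growth drives divergence)
y = (1.0, 1.0)
reinits = 0
for n in range(200):
    y = iadd(imul(a, y), (-0.05, 0.05))   # y <- a*y + bounded input
    if width(y) > 1.0:                    # confidence lost: rescue the recursion
        mid = 0.5 * (y[0] + y[1])
        y = (mid, mid)                    # re-initialise from the midpoint
        reinits += 1
print(reinits)
```

The interval width plays the role of the confidence measure; in the thesis, crossing the threshold triggers the reinitialisation procedure that keeps the fast RLS algorithm stable.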
Estilos ABNT, Harvard, Vancouver, APA, etc.
16

Wang, Li. "Internal surface coating and photochemical modification of polypropylene microfiltration membrane". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ30119.pdf.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
17

Ispir, Mehmet. "Design Of Moving Target Indication Filters With Non-uniform Pulse Repetition Intervals". Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615361/index.pdf.

Texto completo da fonte
Resumo:
Staggering the pulse repetition intervals is a widely used solution to alleviate the blind speed problem in Moving Target Indication (MTI) radar systems. It is possible to increase the first blind speed on the order of tenfold with the use of non-uniform sampling. The improvement in blind speed results in passband fluctuations that may degrade the detection performance for particular Doppler frequencies. Therefore, it is important to design MTI filters with non-uniform interpulse periods that have minimum passband ripples with sufficient clutter attenuation, along with good range and blind velocity performance. In this thesis work, the design of MTI filters with non-uniform interpulse periods is studied through the least squares, convex and min-max filter design methodologies. A trade-off between the contradictory objectives of maximum clutter suppression and minimum desired signal attenuation is established by the introduction of a weight factor into the designs. The weight factor enables the adaptation of the MTI filter to different operational scenarios, such as operation under low, medium or high clutter power. The performances of the studied designs are investigated by comparing the frequency response characteristics and the average signal-to-clutter suppression capabilities of the filters with respect to a number of defined performance measures. Two further approaches are considered to increase the signal-to-clutter suppression performance. The first approach is based on a modified min-max filter design, whereas the second one focuses on multiple filter implementations. In addition, a detailed review and performance comparison with the non-uniform MTI filter designs from the literature are also given.
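The blind speed effect and its mitigation by staggering can be sketched by evaluating the frequency response |H(f)| = |Σᵢ wᵢ exp(−j2πf tᵢ)| of a three-pulse canceller. The weights (1, −2, 1) are the classic double canceller, and the 0.8/1.2 stagger ratio is an illustrative choice, not a design from the thesis:

```python
import numpy as np

# Frequency response of a three-pulse MTI canceller for uniform vs staggered
# interpulse periods, probed at 1 kHz (the uniform PRF) and 2.5 kHz (the
# first common null of the staggered set).

def mti_response(freqs, intervals, weights):
    times = np.concatenate(([0.0], np.cumsum(intervals)))
    phases = np.exp(-2j * np.pi * np.outer(freqs, times))
    return np.abs(phases @ weights)

T = 1e-3                                   # 1 ms mean PRI -> 1 kHz PRF
w = np.array([1.0, -2.0, 1.0])
probe = np.array([1000.0, 2500.0])         # Hz

H_uniform = mti_response(probe, [T, T], w)
H_stagger = mti_response(probe, [0.8 * T, 1.2 * T], w)

# Uniform PRI is blind at the PRF (1 kHz); the 0.8/1.2 stagger removes that
# null and pushes the first common null up to 2.5 kHz: a 2.5-fold gain in
# first blind speed, at the price of passband ripple elsewhere.
print(np.round(H_uniform, 3), np.round(H_stagger, 3))
```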
Estilos ABNT, Harvard, Vancouver, APA, etc.
18

Hunter, Robert Peter. "Development of Transparent Soil Testing using Planar Laser Induced Fluorescence in the Study of Internal Erosion of Filters in Embankment Dams". Thesis, University of Canterbury. Geological Sciences, 2012. http://hdl.handle.net/10092/7323.

Texto completo da fonte
Resumo:
A new ‘transparent soil permeameter’ has been developed to study the mechanisms occurring during internal erosion in filter materials for embankment dams. Internal erosion or suffusion is the process where fine particles are removed from a matrix of coarse grains by seepage of water, and which ultimately leads to instabilities within the soil. The laboratory-based experiments in this thesis utilise a novel approach where up-scaled glass particles are used in place of soil particles, and optically matched oil is used in place of water. Rhodamine dye in the oil allows the fluid to fluoresce brightly when a sheet of laser light is shone through the sample, while the glass particles appear as dark shadows within the plane of the laser sheet. This technique is known as Planar Laser Induced Fluorescence (PLIF) and enables a two-dimensional "slice" or plane of particles and fluid to be viewed inside the permeameter, away from the permeameter walls. During a test, fluid is passed through the solid matrix in upward flow, with the flow rate (therefore hydraulic gradient) being increased in stages until internal erosion or bulk movement of the entire assembly develops and progresses. A high speed camera captures images of the two-dimensional plane over the duration of a test, which are then analysed using Image Pro and ImageJ processing software. Until now, the fundamental mechanisms that lead to internal erosion have been rather speculative, as there has been no way to physically observe the processes behind the initiation and continued movement of particles. This visualisation experiment allows internal erosion mechanisms to be studied away from permeameter walls where boundary effects do not occur. The technique was validated by confirming Darcy’s (1856) law of laminar flow, and Terzaghi’s (1925) theoretical critical hydraulic gradient for an upward flow through materials with no top stress.
Results of replicated materials tested by Skempton and Brogan (1994) and Fannin and Moffat (2006) also confirm this methodology to be valid by way of material behaviour, permeability and the alpha factor (Skempton & Brogan 1994). An assessment to predict the stability of soils was carried out using the Kenney and Lau (1985), Kezdi (1979), Burenkova (1993), Wan and Fell (2008) and Istomina (1957) approaches, with the Kenney and Lau and Kezdi methods proving to be the most robust across the particle size distributions tested. In the tests, unstable materials showed a migration of fine grains under hydraulic gradients as low as ic = 0.25, while stable materials showed little movement of particles, and eventually failed by heave. Image processing using Image Pro and ImageJ was successful in producing quantitative results; however, with further enhancements to the test equipment and methodology, these could be improved upon. The testing technique developed in this thesis has proven to be successful in the study of internal erosion of filter materials. The technique proves that optically matched glass and oil can behave similarly to soil and water materials as used in previous laboratory testing, and that the PLIF technique and image capturing have merit in understanding the mechanisms occurring during internal erosion processes.
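Terzaghi's theoretical critical hydraulic gradient, used above to validate the technique, is i_c = (G_s − 1)/(1 + e) for upward flow with no top stress. A minimal check with illustrative values (typical quartz sand, not the thesis' glass/oil properties):

```python
# Terzaghi (1925) critical hydraulic gradient for heave under upward flow:
# i_c = (Gs - 1) / (1 + e), i.e. submerged unit weight over unit weight of
# the permeating fluid.  Gs = specific gravity of solids, e = void ratio.

def critical_gradient(Gs, e):
    return (Gs - 1.0) / (1.0 + e)

ic = critical_gradient(Gs=2.65, e=0.65)   # illustrative quartz-sand values
print(round(ic, 2))                        # close to the often-quoted i_c ~ 1
```

For soil/water systems this lands near the often-quoted value of 1; unstable materials in the tests above eroded at gradients far below it (i_c as low as 0.25).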
Estilos ABNT, Harvard, Vancouver, APA, etc.
19

Miki, Andrew. "Timing differences, the modality effect and filled interval illusion with rats and pigeons". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ60807.pdf.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
20

Kubík, Pavel. "Měření intenzity provozu během pevně daných intervalů v AP". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218833.

Texto completo da fonte
Resumo:
The thesis analyzes network traffic on a router with open-source firmware. First, a software platform is chosen based on compatibility with the available equipment. Then the properties necessary for developing custom applications are assessed: support for various programming languages provided by the SDK, the development environment, and the available modules and libraries for working with the network interface. Based on these factors, a method to realize the program is chosen. It is implemented on the OpenWRT firmware in C/C++ using the pcap network library. These tools are used to capture and analyze network traffic. The obtained data are processed using methods of technical analysis, namely moving averages, the Stochastic oscillator and Bollinger bands. Based on the results of these methods, estimates of traffic are generated and verified. They are based on linear extrapolation, simplified for fixed intervals. The validity of each method is verified on the basis of the estimated value: a method is considered valid if the estimated traffic volume lies within the Bollinger band, which is given by the standard deviation. Each method is tested several times on real traffic with different input parameters. The influence of the parameters on the error rate of the methods is then evaluated. The individual methods are compared and evaluated based on their behavior in different scenarios and on the average relative error.
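The validity check described in this abstract (accept a linearly extrapolated estimate when it falls inside the Bollinger band, mean ± k standard deviations) can be sketched as follows; the window length, k, and traffic values are illustrative choices, not parameters from the thesis:

```python
from statistics import mean, pstdev

# Moving average with Bollinger bands over a traffic-volume series, plus the
# acceptance test: an extrapolated estimate is valid if it lies inside the
# band (mean +/- k population standard deviations).

def bollinger(series, window=5, k=2.0):
    out = []
    for i in range(window - 1, len(series)):
        win = series[i - window + 1 : i + 1]
        m, s = mean(win), pstdev(win)
        out.append((m - k * s, m, m + k * s))
    return out

traffic = [100, 104, 98, 102, 110, 108, 115, 111, 120, 118]   # e.g. kB/s samples
bands = bollinger(traffic)
lower, mid, upper = bands[-1]

# Simplified linear extrapolation over a fixed interval, as in the thesis.
estimate = traffic[-1] + (traffic[-1] - traffic[-2])
valid = lower <= estimate <= upper
print(round(lower, 1), round(upper, 1), estimate, valid)
```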
Estilos ABNT, Harvard, Vancouver, APA, etc.
21

Hwang, Jenq-Fong. "Advanced computer-aided design method on the stress analysis of internal spur gears". Connect to this title online, 1986. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1102453550.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
22

Seblany, Feda. "Filter criterion for granular soils based on the constriction size distribution". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEC042/document.

Texto completo da fonte
Resumo:
The granular discontinuities in hydraulic structures or in their foundation constitute a major source of instabilities causing erosion phenomena, i.e. the process by which finer soil particles are transported through the voids between coarser particles under seepage flow. In the long term, the microstructure of the soil changes, and the excessive migration becomes prejudicial to the stability of the structures and may also induce their failure. The safety of earth structures is mainly dependent on the reliability of their filter performance, i.e. the ability of the filter, placed inside the structure during construction or outside during repair, to retain fine particles. Indeed, the void space of a granular filter is divided into larger volumes, called pores, connected together by throats or constrictions. Recent research showed that the distribution of throats (Constriction Size Distribution, or CSD) between pores plays a key role in understanding the filtration properties of a granular soil. This research is devoted to investigating the constriction sizes and their impact on the mechanisms of filtration in granular spherical materials. To achieve this objective, two approaches were followed in this work: a numerical and an analytical approach. In the case of spherical materials, the Discrete Element Method (DEM) can help to compute the CSD using the Delaunay tessellation method. However, a more realistic CSD can be obtained by merging adjacent Delaunay cells based on the concept of the overlap of their maximal inscribed void spheres. Following this consideration and by extending the previously developed analytical models of the CSD, a revised model is proposed to quickly obtain the CSD. The DEM data generated are then used to explore the potential of transport of fine particles through a filter of a given thickness by means of numerical filtration tests. A correlation has been found between the CSD and the possibility of migration of fine grains.
Accordingly, an analytical formula has been proposed to calculate the controlling constriction size of a filter material. This characteristic size, which takes into account the particle size distribution (PSD) and the density of the material, has been used to reformulate a constriction-based criterion in a more physical manner. The proposed filter design criterion is verified against experimental data from past studies, and a good agreement has been found.
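The thesis' criterion replaces a geometric grain size with a controlling constriction size; as a simple classical point of comparison, a Terzaghi-style retention check D15(filter)/d85(base) ≤ 4 can be sketched by reading sizes off the cumulative grading curves. All curve values below are illustrative:

```python
# Classical retention check: D15 of the filter over d85 of the base soil,
# with grain sizes read from cumulative grading curves by linear
# interpolation.  A stand-in for the constriction-based criterion of the
# thesis, in which the controlling constriction size plays the role of D15.

def size_at_percent(curve, pct):
    """curve: sorted list of (grain_size_mm, percent_finer)."""
    for (d0, p0), (d1, p1) in zip(curve, curve[1:]):
        if p0 <= pct <= p1:
            return d0 + (d1 - d0) * (pct - p0) / (p1 - p0)
    raise ValueError("percent outside curve")

filter_curve = [(0.2, 0), (0.5, 15), (1.0, 40), (2.0, 80), (4.0, 100)]
base_curve   = [(0.02, 0), (0.06, 30), (0.1, 60), (0.2, 85), (0.5, 100)]

D15 = size_at_percent(filter_curve, 15)   # filter size with 15% finer
d85 = size_at_percent(base_curve, 85)     # base size with 85% finer
ratio = D15 / d85
print(round(D15, 3), round(d85, 3), round(ratio, 2), ratio <= 4.0)
```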
Estilos ABNT, Harvard, Vancouver, APA, etc.
23

Tran, Tuan Anh. "Cadre unifié pour la modélisation des incertitudes statistiques et bornées : application à la détection et isolation de défauts dans les systèmes dynamiques incertains par estimation". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30292/document.

Texto completo da fonte
Resumo:
This thesis deals with state estimation in discrete-time dynamic systems in the context of the integration of statistical and bounded-error uncertainties. Motivated by the drawbacks of the interval Kalman filter (IKF) and its improvement (iIKF), we propose a filtering algorithm for linear systems subject to uncertain Gaussian noises, i.e. with the mean and covariance matrix defined by their membership to intervals. This new interval Kalman filter (UBIKF) relies on finding a punctual gain matrix minimizing an upper bound of the set of estimation error covariance matrices while respecting the bounds of the parametric uncertainties. An envelope containing all possible estimates is then determined using interval analysis. The UBIKF reduces not only the computational complexity of the set inversion of the interval matrices appearing in the iIKF, but also the conservatism of the estimates. We then discuss different frameworks for representing incomplete or imprecise knowledge, including cumulative distribution functions, possibility theory and the theory of belief functions. Thanks to the latter, a model in the form of a mass function for an uncertain multivariate Gaussian distribution is proposed. A box particle filter based on this theory is developed for non-linear dynamic systems in which the process noises are bounded and the measurement errors are represented by an uncertain Gaussian mass function. Finally, the UBIKF is applied to fault detection and isolation by implementing the generalized observer scheme and structural analysis. Through various examples, the capacity for detecting and isolating sensor/actuator faults with this tool is illustrated and compared to other approaches.
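The enclosure principle behind interval Kalman filtering can be illustrated on a scalar toy problem: when the measurement-noise variance is only known to lie in an interval, the posterior estimate is enclosed by evaluating the update at the interval end-points (valid here because the scalar update is monotone in R). This is a sketch of the general idea, not the UBIKF algorithm:

```python
# Scalar Kalman update with an interval-valued measurement-noise variance
# R in [R_lo, R_hi]: the set of possible posterior estimates is enclosed by
# the two corner evaluations, since the gain K = P/(P+R) is monotone in R.

def kalman_update(x, P, z, R):
    K = P / (P + R)
    return x + K * (z - x), (1.0 - K) * P

x, P, z = 0.0, 1.0, 1.0          # prior mean, prior variance, measurement
R_lo, R_hi = 0.5, 2.0            # uncertain noise variance (interval)

x_lo, _ = kalman_update(x, P, z, R_hi)   # larger R -> smaller correction
x_hi, _ = kalman_update(x, P, z, R_lo)   # smaller R -> larger correction
print(round(x_lo, 3), round(x_hi, 3))    # envelope containing all estimates
```

The UBIKF generalizes this idea to the matrix case, where corner evaluation no longer suffices and a punctual gain minimizing an upper bound of the covariance set is sought instead.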
Estilos ABNT, Harvard, Vancouver, APA, etc.
24

Rönnqvist, Hans. "Predicting surfacing internal erosion in moraine core dams". Licentiate thesis, KTH, Hydraulic Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-14084.

Texto completo da fonte
Resumo:

Dams that comprise broadly and widely graded glacial materials, such as moraines, have been found to be susceptible to internal erosion, perhaps more than dams of other soil types. Internal erosion washes out fine-grained particles from the fill material; the erosion occurs within the material itself or at an interface to another dam zone, depending on the mode of initiation. Whether or not internal erosion proceeds depends on the adequacy of the filter material. If internal erosion is allowed to proceed, it may manifest itself as sinkholes on the crest, increased leakage and muddy seepage once it surfaces, which here is called surfacing internal erosion (i.e. internal erosion in the excessive erosion or continuation phase). In spite of significant developments since the 1980s in the field of internal erosion assessment, the validity of methods developed by others for broadly graded materials is still unclear, because most available criteria are based on tests of narrowly graded granular soils. This thesis specifically addresses dams that are composed of broadly graded glacial soils and investigates typical indicators, signs and behaviors of internal-erosion-prone dams. Based on a review of 90+ existing moraine core dams, located mainly in Scandinavia as well as North America and Australia/New Zealand, this thesis will show that not only the filter's coarseness needs to be reviewed when assessing the potential for internal erosion to surface (i.e., erosion in the excessive or continuing phase); in addition, the grading stability of the filter and the core material, as well as non-homogeneities caused by filter segregation, need to be studied. Cross-referencing between these aspects improves the assessment of the potential for internal erosion in dams of broadly graded soils, and furthermore provides aid to judgment.
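The grading-stability assessment referred to above is commonly done with the Kenney and Lau (1985) H/F method: at each grain size d with mass fraction finer F(d), compute H = F(4d) − F(d) and flag instability where H < F (the modified boundary H/F = 1, applied for F up to about 20% in widely graded soils). A minimal sketch with illustrative gradations:

```python
import math

# Kenney & Lau style internal-stability check on a grading curve given as
# sorted (grain_size_mm, mass_fraction_finer) points, interpolated
# log-linearly in grain size.

def frac_finer(curve, d):
    if d <= curve[0][0]:
        return curve[0][1]
    if d >= curve[-1][0]:
        return curve[-1][1]
    for (d0, f0), (d1, f1) in zip(curve, curve[1:]):
        if d0 <= d <= d1:
            t = (math.log(d) - math.log(d0)) / (math.log(d1) - math.log(d0))
            return f0 + t * (f1 - f0)

def kenney_lau_stable(curve, fmax=0.2):
    """Stable if H = F(4d) - F(d) >= F(d) at every curve point with F <= fmax."""
    sizes = [d for d, f in curve if f <= fmax]
    return all(
        frac_finer(curve, 4 * d) - frac_finer(curve, d) >= frac_finer(curve, d)
        for d in sizes
    )

# Gap-graded soil (flat segment between 0.2 and 2 mm): expected unstable.
gap_graded = [(0.05, 0.05), (0.1, 0.15), (0.2, 0.25), (2.0, 0.30), (10.0, 1.0)]
# Smooth, broadly graded soil: expected stable.
broad = [(0.05, 0.02), (0.2, 0.10), (0.8, 0.30), (3.0, 0.60), (12.0, 1.0)]

print(kenney_lau_stable(gap_graded), kenney_lau_stable(broad))
```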


Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Kirkwood, Michael George. "Plastic loads for branch pipe junctions subjected to combined internal pressure and in-plane bending moments". Thesis, University of Liverpool, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.257705.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Olsen, Brian Ottar. "Vibroacoustic power flow in infinite compliant pipes excited by mechanical forces and internal acoustic sources". Thesis, University of Southampton, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342848.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Ježek, Přemysl. "Pevnostně deformační analýza uchycení filtru pevných částic na traktoru". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-230021.

Texto completo da fonte
Resumo:
The subject of this thesis is to carry out a stress-strain analysis of a diesel particulate filter support on a tractor. It describes the sequence of steps needed to build the analysis model, such as geometry description, mesh generation, definition of applied forces and others. The result of the analysis is assessed in terms of strength, and an improvement is proposed.
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Ziems, Jürgen. "Erosionsbeständigkeit nichtbindiger Lockergesteine". Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-87705.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Agnevall, Paula. "Berättarteknik i J.M. Coetzees Disgrace". Thesis, Blekinge Tekniska Högskola, Sektionen för teknokultur, humaniora och samhällsbyggnad, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1397.

Texto completo da fonte
Resumo:
While Disgrace seems to be written from a single perspective, it is in fact multi-layered. In order to support this claim, this essay investigates what the novel's protagonist sees, how he sees it and who is narrating the story, using respectively the narratological key concepts of internal focalization, fallible filter and covert narration. The essay thereafter studies how the novel has affected readers in South Africa and how it is necessary to challenge the perspective presented in the novel in order to fully understand the text.
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Tuffa, Daniel Yadetie. "Laboratory investigation of suffusion on dam core glacial till". Thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-67027.

Texto completo da fonte
Resumo:
The objective of this study is to provide a better understanding of the suffusion characteristics of glacial soils and to present a simple yet reliable procedure for assessing suffusion in the laboratory. Internal erosion by suffusion occurs in the core of an embankment dam when the ability of the soil to resist seepage forces is exceeded and the voids are large enough to allow the transport of fine particles through the pores. Soils susceptible to suffusion are described as internally unstable. Dams with cores of broadly graded glacial moraines (tills) exhibit signs of internal erosion to a larger extent than dams constructed with other types of materials. The suffusion behavior of glacial soils has been investigated using two different permeameter suffusion tests: a small-scale permeameter and a large-scale permeameter. Details of the equipment, along with its calibration, testing and sampling procedures, are provided. The testing program comprised 9 tests with different degrees of compaction in the small-scale permeameter and 2 tests in the large permeameter on internally stable categories of till soil. The categories are defined based on the soil grain size distribution and according to the methods developed by Kenney & Lau and Burenkova. Layers are identified as affected by suffusion if the post-test gradation curve exhibits changes in distribution compared to the initial condition. The tests also revealed the effect of grain size distribution and relative degree of compaction on the internal erosion susceptibility of glacial till soils at different hydraulic gradients.
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Gray, Jeffrey Frank. "Divergence Model for Measurement of Goos-Hanchen Shift". ScholarWorks@UNO, 2007. http://scholarworks.uno.edu/td/584.

Texto completo da fonte
Resumo:
In this effort a new measurement technique for the lateral Goos-Hanchen shift is developed, analyzed, and demonstrated. The new technique fuses classical image formation methods with modern detection and analysis methods to achieve higher levels of sensitivity than obtained with prior practice. Central to the effort is a new mathematical model of the dispersion seen at a step shadow when the Goos-Hanchen effect occurs near the critical angle for total internal reflection. Image processing techniques are applied to measure the intensity distribution transfer function of a new divergence model of the Goos-Hanchen phenomenon, providing verification of the model. This effort includes mathematical modeling techniques, analytical derivations of governing equations, numerical verification of models and sensitivities, optical design of apparatus, and image processing.
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Djekanovic, Nikolina. "Design of Resonant Filters for AC Current Magnification : Heating of Li-ion Batteries by Using AC Currents". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247892.

Texto completo da fonte
Resumo:
Using alternating current in order to heat batteries at sub-zero temperatures is a method which is being investigated in depth by an increasing number of study groups. The thesis considers the resonance phenomenon with the intention to use alternating current amplification and the battery's impedance in order to induce power dissipation inside the battery, and in this way increase its temperature. A battery cell is thereby modelled as an impedance transfer function, estimated from electrochemical impedance spectroscopy measurements, which are taken for a LiNi1/3Mn1/3Co1/3O2 cell. Note that at 1 kHz and room temperature (20 °C), the ohmic resistance of the selected cell amounts to only 0.76 mΩ. Five resonant circuits are investigated and one of them is selected for further investigation, and as a basis for a filter design. The chosen resonant circuit led to an LCL filter with current magnification. The experimental setup used for conducting practical experiments offers the possibility of operating the voltage source converter both as a full-bridge and as a half-bridge, with and without current control. For each possible configuration, an LCL filter and a current controller are designed, taking into account the corresponding limitations in frequency, current and controller voltage. The filter design is based on a multiobjective optimization method used to determine filter components that yield the highest gain value for every configuration. The method minimizes two objective functions in order to find an optimal solution. The first objective is the reversed absolute value of the gain, whereas the second one is the absolute impedance of the circuit, consisting of the filter and battery cells. The gain is thereby defined as the ratio between the induced cell current and the current entering the circuit. The obtained results of the proposed method are experimentally validated.
Depending on how the filters were physically designed, and taking into account the corresponding voltage source converter configuration, gains of 16 were experimentally achieved. Finally, the three investigated configurations are compared against the reference case (half-bridge voltage source converter with current control and a single inductor) regarding their power efficiencies. The power measurements showed that despite the high obtained gains, the overall filter power losses remained approximately in the same range compared to the power losses of the reference case. This is due to the fact that stray resistances of the designed LCL filters easily reached values of around 40 mΩ, which hindered an efficient power transfer with the chosen voltage source converter and the used battery cells. This further indicates the importance of building filters with low stray resistances, and in this thesis it represents a primary source of improvement.
Användandet av växelström för att värma upp batterier är en metod som för närvarande undersöks av ett flertal forskargrupper. Detta examensarbete fokuserar på hur resonans kan nyttjas för att öka strömförstärkningen och, på detta sätt, öka effektutvecklingen i batteriet (av LiNi1/3Mn1/3Co1/3O2-typ). Battericellens impedans modelleras som en överföringsfunktion vars parametrar estimerats från tidigare genomförda impedansspektroskopimätningar. Vid 1 kHz och rumstemperatur är cellens ohmska resistans endast 0.76 mΩ. Fem möjliga resonanta kretsar har undersökts och en av dem valts ut för vidare undersökningar. Den utvalda kretsen är ett LCL-filter med vilket strömförstärkning åstadkoms. Den experimentella uppställningen, i vilken praktiska test har genomförts, medger möjligheten att nyttja den tillhörande omriktaren både som en helbrygga och en halvbrygga, med och utan strömreglering. För varje möjlig omriktarkonfiguration har ett LCL-filter och en strömreglering tagits fram, med hänsyn tagen till uppställningens begränsningar i termer av frekvens, ström och dc-spänningsnivå. Filtren är framtagna med hjälp av en multiobjektiv optimering vilken åstadkommer högsta möjliga strömförstärkning för varje omriktar- och strömregleringsval. Metoden minimerar två funktioner för att finna en optimal lösning. Den första funktionen beskriver inversen på strömförstärkningen och den andra lastens (bestående av filter och tillhörande battericell) impedans absolutbelopp. Den resulterande lösningen har validerats experimentellt och en strömförstärkningsnivå på 16 uppnåddes. Slutligen har de olika konfigurationerna jämförts i termer av verkningsgrad. De genomförda effektmätningarna visar att trots att höga strömförstärkningsnivåer var möjliga så resulterade de associerade filterförlusterna i liknande verkningsgrader för alla studerade konfigurationer.
Resultaten understryker fördelarna med högeffektiva filter, vilka representerar en möjlig väg för vidare undersökningar.
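The gain defined in this abstract (induced cell current over circuit input current) can be sketched with a simple current divider between the LCL filter's shunt capacitor and the series branch of output inductor plus cell resistance. The component values below are illustrative assumptions, not the values used in the thesis; only the 0.76 mΩ cell resistance and the ~40 mΩ stray resistance are quoted from the abstract.

```python
# Hedged sketch: current magnification of an LCL filter feeding a
# low-impedance battery cell. |I_cell / I_in| follows from a current
# divider between the shunt capacitor C and the series branch (L2 in
# series with the cell resistance). Component values are illustrative.
import math

def lcl_current_gain(f, L2, C, R_cell):
    """|I_cell / I_in| at frequency f (Hz) for the shunt-C / series-(L2+R) divider."""
    w = 2 * math.pi * f
    z_c = complex(0, -1 / (w * C))        # shunt capacitor impedance
    z_branch = complex(R_cell, w * L2)    # output inductor + cell resistance
    return abs(z_c / (z_c + z_branch))

L2, C, R_cell = 100e-6, 253.3e-6, 0.76e-3        # 100 uH, ~253 uF, 0.76 mOhm
f_res = 1 / (2 * math.pi * math.sqrt(L2 * C))    # series-branch resonance, ~1 kHz
```

At resonance the inductive and capacitive reactances cancel, so with only the 0.76 mΩ cell resistance the ideal gain is in the hundreds; adding roughly 40 mΩ of stray resistance to the branch collapses it to around 15, which is consistent with the abstract's observation that stray resistance limited the experimentally achieved gains of 16.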
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Cauquil, Philippe. "Intérêt d'un apport protéique au cours du petit déjeuner : résultat d'une enquête effectuée dans un internat de jeunes filles". Montpellier 1, 1993. http://www.theses.fr/1993MON11192.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Neuzil, Anna A. "In the aftermath of migration assessing the social consequences of late 13th and 14th century population movements into southeastern Arizona /". Find on the web (viewed on Oct. 2, 2008), 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1351%5F1%5Fm.pdf&type=application/pdf.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Gafarov, Timur. "Kontrola kvality finančních výkazů pro zavedení systému vnitřní kontroly". Doctoral thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2009. http://www.nusl.cz/ntk/nusl-233732.

Texto completo da fonte
Resumo:
Though the estimation of an enterprise's financial condition is performed annually, it is necessary to constantly develop, improve and evaluate the system of internal control; to develop a reporting-quality estimation technique specifically for the given enterprise in view of all its features; to take advantage of statistical data and draw the corresponding conclusions; and to carry out constant monitoring. The purpose of developing the mechanism is the detection of deviations of reported data from the actual results of activity; the identification of items causing distortion of the real financial condition of the enterprise; the assessment of the influence of these distortions, and of the quality of the reporting as a whole, on decision-making; the identification of the causes of these deviations and distortions; and the development of recommendations on corresponding corrections in separate directions for the improvement of reporting quality. How can high reporting quality and internal control create an advantage? A survey of institutional investors reports that investors apply a penalty if they believe a company's internal control to be insufficient. Sixty-one percent of respondents said they had avoided investing in companies, and 48% had divested from companies, where internal control was considered inadequate. As additional support, the study went on to report that 82% of respondents agreed that good internal control was worth a premium on share price. These institutional investors are pushing for greater transparency on risk issues and related internal control efforts. Simply put, an organization's ability to implement and maintain a leading-class control framework can create competitive advantage in today's market. A system of financial reporting with strong management, quality control and a good legislative base is a key factor of economic development.
The trust of investors in financial and non-financial information is based on strong internal control and high-quality standards of financial reporting, audit and ethics; thus, standards and internal control play the leading part in supporting economic growth and financial stability in the country. Nevertheless, every company meets problems in the implementation of internal control, among them problems of labor qualification, legislation and so on. It is also necessary to examine successful experience at the micro level.
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Ahmed, Istiaque. "NON-ISOTHERMAL NUMERICAL INVESTIGATIONS OF THE EFFECT OF SPEED RATIO AND FILL FACTOR IN AN INTERNAL MIXER FOR TIRE MANUFACTURING PROCESS". University of Akron / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=akron1524584349185209.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Pepy, Romain. "Vers une planification robuste et sûre pour les systèmes autonomes". Phd thesis, Université Paris Sud - Paris XI, 2009. http://tel.archives-ouvertes.fr/tel-00845477.

Texto completo da fonte
Resumo:
Many tools exist for solving planning problems under constraints. They can be grouped into two main classes. The oldest planners use a prior discretization of the state space. The most recent ones, sampling-based planners, allow a more efficient exploration. These planners are used in many fields, such as chemistry, biology, robotics, control and artificial intelligence. The major contribution of our work is to provide an answer to the problem of path planning in the presence of uncertainties, by combining a modern planning technique, allowing a fast exploration of the state space, with localization methods that characterize the uncertainty on the state of the system at a given time. Two approaches were followed. In the first, the planner uses a probabilistic representation of the state of the system at a given time, through a Gaussian probability density. Error propagation is performed using an extended Kalman filter. In the second approach, we enclose in a computable set the states that the system may reach at a given time, given bounds on the errors committed. Unlike the preceding probabilistic approach, this approach makes it possible to provide a guarantee on the safety of the system, provided of course that the assumptions on the state noises on which it is based are satisfied.
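The two uncertainty-propagation approaches this abstract contrasts can be sketched on a toy one-dimensional system; the dynamics, the noise bound and the Gaussian parameters below are illustrative assumptions, not the models used in the thesis.

```python
# Hedged sketch contrasting the two approaches on a scalar linear system
# x_{k+1} = a*x_k + u + w_k. The probabilistic branch propagates a
# Gaussian mean/variance (an illustrative Kalman prediction step); the
# set-membership branch propagates a guaranteed interval under the
# bounded-noise assumption |w_k| <= w_bar.
import math

def gaussian_predict(mean, var, a, u, q):
    """Kalman-style prediction through x' = a*x + u + w, with w ~ N(0, q)."""
    return a * mean + u, a * a * var + q

def interval_predict(lo, hi, a, u, w_bar):
    """Guaranteed enclosure of x' = a*x + u + w for x in [lo, hi], |w| <= w_bar."""
    img = (a * lo + u, a * hi + u)
    return min(img) - w_bar, max(img) + w_bar

mean, var = 0.0, 0.01
lo, hi = -0.3, 0.3            # initial guaranteed enclosure of the state
for _ in range(5):            # propagate 5 steps with no measurements
    mean, var = gaussian_predict(mean, var, a=1.1, u=0.0, q=0.01)
    lo, hi = interval_predict(lo, hi, a=1.1, u=0.0, w_bar=0.3)
```

Both uncertainties grow without measurements, but only the interval is a guarantee: every trajectory consistent with the noise bound stays in [lo, hi], whereas the Gaussian confidence interval offers no such guarantee when the noise is not actually Gaussian.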
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Sousa, Raphaell Maciel de. "Estratégia de controle robusto para filtro ativo paralelo sem detecção de harmônicos de correntes". Universidade Federal do Rio Grande do Norte, 2011. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15332.

Texto completo da fonte
Resumo:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Conventional control strategies used in shunt active power filters (SAPF) employ real-time instantaneous harmonic detection schemes, usually implemented with digital filters. This increases the number of current sensors in the filter structure, which results in high costs. Furthermore, these detection schemes introduce time delays which can deteriorate the harmonic compensation performance. Differently from the conventional control schemes, this work proposes a non-standard control strategy which indirectly regulates the phase currents of the power mains. The reference currents of the system are generated by the dc-link voltage controller and are based on the active power balance of the SAPF system. The reference currents are aligned to the phase angle of the power mains voltage vector, which is obtained by using a dq phase-locked loop (PLL) system. The current control strategy is implemented by an adaptive pole placement control strategy integrated with a variable structure control scheme (VS-APPC). In the VS-APPC, the internal model principle (IMP) of the reference currents is used for achieving zero steady-state tracking error of the power system currents. This forces the phase currents of the system mains to be sinusoidal with low harmonic content. Moreover, the current controllers are implemented in the stationary reference frame to avoid transformations to the mains voltage vector reference coordinates. The proposed current control strategy enhances the performance of the SAPF, with fast transient response and robustness to parametric uncertainties. Experimental results are shown to demonstrate the effectiveness of the proposed SAPF control system.
Resumo: As estratégias de controle convencionais de filtros ativos de potência paralelos (FAPP) empregam esquemas de detecção de harmônicos em tempo real, usualmente implementados com filtros digitais. Isso aumenta o número de sensores na estrutura do filtro, o que resulta em altos custos. Além disso, esses esquemas de detecção introduzem atrasos que podem deteriorar o desempenho da compensação de harmônicos. Diferentemente dos esquemas de controle convencionais, este trabalho propõe uma nova estratégia de controle que regula indiretamente as correntes de fase da rede elétrica. As correntes de referência do sistema são geradas pelo controle de tensão do barramento CC e são baseadas no balanço de potência ativa do sistema FAPP. As correntes de referência são alinhadas com o ângulo de fase do vetor tensão da rede, que é obtido usando um PLL (Phase Locked Loop). O controle de corrente é implementado por uma estratégia de controle adaptativo por alocação de pólos, integrada com um esquema de controle com estrutura variável (VS-APPC). No VS-APPC, o princípio do modelo interno (IMP) de referência é usado para eliminar o erro em regime permanente das correntes do sistema. Isso força as correntes de fase do sistema a serem senoidais e com baixo teor de harmônicos. Além disso, os controladores de corrente são implementados no referencial estacionário para evitar transformações nas coordenadas de referência do vetor tensão da rede. Esta estratégia de controle de corrente melhora a performance do FAPP com uma resposta transitória rápida e robustez a incertezas paramétricas. Resultados experimentais são mostrados para demonstrar a eficácia do sistema de controle proposto para o FAPP.
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Gomes, Adriano de Araújo. "Algoritmo das projeções sucessivas para seleção de variáveis em calibração de segunda ordem". Universidade Federal da Paraíba, 2015. http://tede.biblioteca.ufpb.br:8080/handle/tede/8196.

Texto completo da fonte
Resumo:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
In this work, a new strategy for interval selection was developed using the successive projections algorithm (SPA) coupled to N-PLS and U-PLS models, both with residual bilinearization (RBL) as a post-calibration step. The new algorithm coupled to N-PLS/RBL models was evaluated in two case studies. The first used simulated data for the quantitation of two analytes (A and B) in the presence of a single interferent. In the second, the quantitation of ofloxacin in water in the presence of interferents (ciprofloxacin and danofloxacin) was conducted by modeling liquid chromatography with diode array detection (LC-DAD) data. The results were compared to the N-PLS/RBL model and to variable selection with the genetic algorithm (GA-N-PLS/RBL). In the first case study (simulated data), RMSEP values (x 10-3, in arbitrary units) for analytes A and B of 6.7 and 47.6; 10.6 and 11.4; and 6.0 and 14.0 were observed for N-PLS/RBL, GA-N-PLS/RBL and the proposed method, respectively. In the second case study (HPLC-DAD data), RMSEP values (in mg/L) of 0.72 (N-PLS/RBL), 0.70 (GA-N-PLS/RBL) and 0.64 (iSPA-N-PLS/RBL) were obtained. When combined with U-PLS/RBL, the new algorithm was evaluated in EEM modeling in the presence of the inner filter effect. Simulated data and the quantitation of phenylephrine in the presence of acetaminophen in water samples with interferents (ibuprofen and acetylsalicylic acid) were used as case studies. The results were compared to U-PLS/RBL and to the well-established PARAFAC method. For the simulated data, RMSEP values (in arbitrary units) of 1.584, 0.077 and 0.066 were observed for PARAFAC, U-PLS/RBL and the proposed method, respectively. In the quantitation of phenylephrine, the RMSEP values found (in μg/L) were 0.164 (PARAFAC), 0.089 (U-PLS/RBL) and 0.069 (iSPA-U-PLS/RBL).
In all cases it was shown that variable selection is a useful tool capable of improving accuracy when compared with the respective global models (models without variable selection), leading to more parsimonious models. It was also observed, in all cases, that the sensitivity loss caused by variable selection is compensated by the use of more selective channels, justifying the smaller RMSEP values obtained. Finally, it was also observed that the models based on variable selection, such as the proposed method, were free from significant bias at 95% confidence.
Neste trabalho foi desenvolvida uma nova estratégia para seleção de intervalos empregando o algoritmo das projeções sucessivas (SPA) acoplado a modelos N-PLS e U-PLS, ambos com etapa pós-calibração de bilinearização residual (RBL). O novo algoritmo acoplado a modelos N-PLS/RBL foi avaliado em dois estudos de caso. O primeiro envolveu dados simulados para quantificação de dois analitos (A e B) na presença de um único interferente. No segundo foi conduzida a quantificação de ofloxacina em água na presença de interferentes (ciprofloxacina e danofloxacina) por meio da modelagem de dados de cromatografia líquida com detecção por arranjo de diodos (LC-DAD). Os resultados obtidos foram comparados ao modelo N-PLS/RBL e à seleção de variáveis com o algoritmo genético (GA-N-PLS/RBL). No primeiro estudo de caso (dados simulados) foram observados valores de RMSEP (x 10-3, em unidades arbitrárias) para os analitos A e B da ordem de 6,7 e 47,6; 10,6 e 11,4; 6,0 e 14,0 para o N-PLS/RBL, GA-N-PLS/RBL e o método proposto, respectivamente. No segundo estudo de caso (dados HPLC-DAD) valores de RMSEP (em mg/L) de 0,72 (N-PLS/RBL); 0,70 (GA-N-PLS/RBL) e 0,64 (iSPA-N-PLS/RBL) foram obtidos. Quando combinado com o U-PLS/RBL, o novo algoritmo foi avaliado na modelagem de EEM em presença de efeito de filtro interno. Dados simulados e a quantificação de fenilefrina na presença de paracetamol em amostras de água com interferentes (ibuprofeno e ácido acetilsalicílico) foram usados como estudos de caso. Os resultados obtidos foram comparados ao modelo U-PLS/RBL e ao bem estabelecido método PARAFAC. Para dados simulados foram observados os seguintes valores de RMSEP (em unidades arbitrárias): 1,584; 0,077 e 0,066 para o PARAFAC, U-PLS/RBL e método proposto, respectivamente. Na quantificação de fenilefrina os RMSEP (em μg/L) encontrados foram de 0,164 (PARAFAC); 0,089 (U-PLS/RBL) e 0,069 (iSPA-U-PLS/RBL).
Em todos os casos foi demonstrado que a seleção de variáveis é uma ferramenta útil, capaz de melhorar a acurácia quando comparada aos respectivos modelos globais (modelos sem seleção de variáveis) e de tornar os modelos mais parcimoniosos. Foi observado ainda, para todos os casos, que a perda de sensibilidade promovida pela seleção de variáveis é compensada pelo uso de canais mais seletivos, justificando os menores valores de RMSEP obtidos. E, por fim, foi também observado que os modelos baseados em seleção de variáveis, como o método proposto, foram isentos de bias significativos a 95% de confiança.
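The RMSEP figures compared throughout this abstract are root mean square errors of prediction over the validation set; a minimal sketch of the computation follows (the numbers are illustrative, not the thesis data).

```python
# Hedged sketch: RMSEP (root mean square error of prediction), the
# figure of merit used above to compare N-PLS/RBL, GA-N-PLS/RBL and
# the proposed iSPA variants. Data below are illustrative only.
import math

def rmsep(y_true, y_pred):
    """Root mean squared error over the prediction (validation) set."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

y_true = [1.0, 2.0, 3.0, 4.0]   # reference concentrations
y_pred = [1.1, 1.9, 3.2, 3.8]   # model predictions
```

Smaller RMSEP means better predictive accuracy, which is why the reductions reported for the iSPA-selected interval models indicate an improvement over the corresponding global models.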
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Mlčoch, Marek. "Kumulace biologických dat". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219246.

Texto completo da fonte
Resumo:
The thesis deals with biological data averaging applied to periodical and repetitive signals, specifically to ECG signals. Signals from the MIT-BIH Arrhythmia database and the ÚBMI database were used. Averaging was realized with constant, floating and exponential windows, using the method of addition of the filtered residue. This method is intended to carry the slow variations from the input over to the output signal. The outcomes of these methods can be used as a basis for further work, or serve as an example of the methods' principles. The methods and their outcomes were implemented in Matlab.
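The constant-window and exponential-window averaging mentioned above can be sketched on synthetic aligned beats; this is a minimal, hedged illustration (the thesis' own implementation was in Matlab, and the filtered-residue step is omitted here).

```python
# Hedged sketch of ensemble averaging of aligned ECG beats. Each "beat"
# is a list of samples already aligned, e.g. on the R wave; the beats
# here are synthetic (a clean template plus alternating "noise").

def cumulative_average(beats):
    """Equal-weight ensemble average of all beats (constant window)."""
    n = len(beats)
    return [sum(samples) / n for samples in zip(*beats)]

def exponential_average(beats, alpha=0.25):
    """Recursive exponential averaging: avg <- (1-alpha)*avg + alpha*beat."""
    avg = list(beats[0])
    for beat in beats[1:]:
        avg = [(1 - alpha) * a + alpha * b for a, b in zip(avg, beat)]
    return avg

template = [0.0, 0.2, 1.0, 0.2, 0.0]
beats = [[s + (0.1 if i % 2 else -0.1) for s in template] for i in range(4)]
```

With zero-mean disturbances the constant-window average recovers the template, while the exponential average weights recent beats more heavily, which lets it track slow morphology changes at the cost of residual noise.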
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Silva, Giovano Camargos da. "Levantamento das curvas de isodose de sementes de 125I utilizando filmes fotográficos". CNEN - Centro de Desenvolvimento da Tecnologia Nuclear, Belo Horizonte, 2013. http://www.bdtd.cdtn.br//tde_busca/arquivo.php?codArquivo=270.

Texto completo da fonte
Resumo:
A braquiterapia é uma modalidade de radioterapia em que a fonte radioativa é colocada em contato com o tecido a ser tratado. Na braquiterapia intersticial são utilizadas pequenas fontes radioativas seladas, denominadas sementes. A busca por novos modelos de sementes é constante. O CDTN desenvolveu parcialmente um protótipo para tratamento de braquiterapia que consiste de uma matriz cerâmica porosa capaz de incorporar diferentes radionuclídeos. A etapa seguinte ao desenvolvimento de um novo tipo de semente consiste na sua caracterização dosimétrica, que deve ser realizada em concordância com determinados padrões internacionais. Uma metodologia prática, que utiliza filmes fotográficos foi desenvolvida em um trabalho prévio de mestrado do CDTN para caracterizar parcialmente sementes de braquiterapia e avaliar possíveis problemas que podem ocorrer nas fases iniciais do desenvolvimento de novas sementes, tais como baixa taxa de incorporação, que se traduz numa baixa dose depositada ou uma incorporação não homogênea, que resultaria numa deformação espacial do campo de radiação (anisotropia). Essa metodologia foi agora aprimorada e utilizada na obtenção das curvas de isodose de sementes de iodo-125, na obtenção da curva de calibração dos filmes e nas estimativas de dose em meios com diferentes níveis de atenuação da radiação, assim como para diferentes distribuições espaciais de sementes. Como o protótipo da semente do CDTN não foi totalmente finalizado utilizou-se, neste trabalho, como referência, uma semente comercial de iodo-125. A metodologia mostrou-se sensível, podendo ser utilizada para sementes com baixa atividade. A utilização dos filmes fotográficos permite visualizar, caso existam, problemas de incorporação do material radioativo, o que pode ocorrer até mesmo em sementes comerciais. Portanto, o método mostra-se útil para uma verificação rápida de sementes antes de serem utilizadas em pacientes.
Brachytherapy is a form of radiotherapy in which a radioactive source is placed in contact with the tissue to be treated. In interstitial brachytherapy, small sealed radioactive sources, called seeds, are used. The search for new types of seeds is constant. CDTN has partially developed a prototype for brachytherapy treatment consisting of a porous ceramic matrix capable of incorporating different radionuclides. The next step in developing a new type of seed is its dosimetric characterization, which must be performed in accordance with certain international standards. A practical methodology using photographic film was developed in a prior master's work accomplished at CDTN to partially characterize brachytherapy seeds and evaluate possible problems that can occur in the early stages of new seed development, such as a low incorporation rate, which translates into a low deposited dose, or an inhomogeneous incorporation, resulting in a spatial deformation of the radiation field (anisotropy). This method has now been improved and used to obtain the isodose curves of iodine-125 seeds, to obtain the film calibration curve, and to estimate dose values in media with different radiation attenuation levels, as well as for different spatial distributions of seeds. As CDTN's prototype seed has not been fully finalized, a commercial iodine-125 seed was used as reference in this work. The method was sensitive and can be used for seeds with low activity. The use of photographic film allows the visualization of problems in the incorporation of the radioactive material, if any, which can occur even in commercial seeds. Therefore, the method proves useful for a quick check of seeds before they are used in patients.
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Zhang, Mei. "Diagnostic de panne et analyse des causes profondes du système dynamique inversible". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30260/document.

Texto completo da fonte
Resumo:
Beaucoup de services vitaux de la vie quotidienne dépendent de systèmes d'ingénierie hautement complexes et interconnectés ; ces systèmes sont constitués d'un grand nombre de capteurs interconnectés, d'actionneurs et de composants du système. L'étude des systèmes interconnectés joue un rôle important dans l'étude de la fiabilité des systèmes dynamiques, car elle permet d'étudier les propriétés d'un système interconnecté en analysant ses sous-composants moins complexes. Le diagnostic des pannes est essentiel pour assurer des opérations sûres et fiables des systèmes de contrôle interconnectés. Dans toutes les situations, le système global et/ou chaque sous-système peuvent être analysés à différents niveaux pour déterminer la fiabilité du système global. Dans certains cas, il est important de déterminer les informations anormales des variables internes du sous-système local, car ce sont les causes qui contribuent au fonctionnement anormal du processus global. Cette thèse porte sur les défis de l'application de la théorie des systèmes inverses et des techniques FDD basées sur des modèles pour traiter le problème conjoint du diagnostic des fautes et de l'analyse des causes racines (FD et RCA). Nous étudions ensuite le problème de l'inversibilité à gauche, de l'observabilité et de la diagnosticabilité des fautes du système interconnecté, formant un algorithme FD et RCA multi-niveaux basé sur un modèle. Ce système de diagnostic permet aux composants individuels de surveiller la dynamique interne localement afin d'améliorer l'efficacité du système et de diagnostiquer des sources de fautes potentielles pour localiser un dysfonctionnement lorsque les performances du système global se dégradent. Par conséquent, un moyen combinant l'intelligence locale avec une capacité de diagnostic plus avancée pour effectuer des fonctions FDD à différents niveaux du système est fourni.
En conséquence, on peut s'attendre à une amélioration de la localisation des fautes et à de meilleurs moyens de maintenance prédictive. La nouvelle structure du système, ainsi que l'algorithme de diagnostic des fautes, met l'accent sur l'importance de la RCA des fautes des dispositifs de terrain, ainsi que sur l'influence de la dynamique interne locale sur la dynamique globale. Les contributions de cette thèse sont les suivantes. Tout d'abord, nous proposons une structure de système non linéaire interconnecté inversible qui garantit que les fautes dans le sous-système de dispositifs de terrain affectent la sortie mesurée du système global de manière unique et distincte. Une condition nécessaire et suffisante est développée pour assurer l'inversibilité du système interconnecté, qui nécessite l'inversibilité des sous-systèmes individuels. Deuxièmement, un observateur interconnecté à deux niveaux est développé ; il se compose de deux estimateurs d'état et vise à fournir des estimations précises des états de chaque sous-système, ainsi que de l'interconnexion inconnue. En outre, il fournira également une condition initiale pour le reconstructeur d'entrée et le filtre de fautes local une fois que la procédure FD et RCA est déclenchée par une faute. D'une part, la mesure utilisée dans l'estimateur du premier sous-système est supposée non accessible ; la solution est de la remplacer par l'estimation fournie par l'estimateur du second sous-système
Many of the vital services of everyday life depend on highly complex and interconnected engineering systems; these systems consist of large number of interconnected sensors, actuators and system components. The study of interconnected systems plays a significant role in the study of reliability theory of dynamic systems, as it allows one to investigate the properties of an interconnected system by analyzing its less complicated subcomponents. Fault diagnosis is crucial in achieving safe and reliable operations of interconnected control systems. In all situations, the global system and/or each subsystem can be analyzed at different levels in investigating the reliability of the overall system; where different levels mean from system level down to the subcomponent level. In some cases, it is important to determine the abnormal information of the internal variables of local subsystem, in order to isolate the causes that contribute to the anomalous operation of the overall process. For example, if a certain fault appears in an actuator, the origin of that malfunction can have different causes: zero deviation, leakage, clogging etc. These origins can be represented as root cause of an actuator fault. This thesis concerns with the challenges of applying system inverse theory and model based FDD techniques to handle the joint problem of fault diagnosis & root cause analysis (FD & RCA) locally and performance monitoring globally. By considering actuator as individual dynamic subsystem connected with process dynamic subsystem in cascade, we propose an interconnected nonlinear system structure. We then investigate the problem of left invertibility, fault observability and fault diagnosability of the interconnected system, forming a novel model based multilevel FD & RCA algorithm. 
This diagnostic algorithm enables individual component to monitor internal dynamics locally to improve plant efficiency and diagnose potential fault resources to locate malfunction when operation performance of global system degrades. Hence, a means of acombination of local intelligence with a more advanceddiagnostic capability (combining fault monitoring anddiagnosis at both local and global levels) to performFDDfunctions on different levels of the plantis provided. As a result, improved fault localization and better predictive maintenance aids can be expected. The new system structure, together with the fault diagnosis algorithm, is the first to emphasize the importance of fault RCA of field devices, as well as the influences of local internal dynamics on the global dynamics. The developed model based multi-level FD & RCA algorithm is then a first effort to combine the strength of the system level model based fault diagnosis with the component level model based fault diagnosis. The contributions of this thesis include the following: Firstly, we propose a left invertible interconnected nonlinear system structure which guarantees that fault occurred in field device subsystem will affect the measured output of the global system uniquely and distinguishably. A necessary and sufficient condition is developed to ensure invertibility of the interconnected system which requires invertibility of individual subsystems. Second, a two level interconnected observer is developed which consists of two state estimators, aims at providing accurately estimates of states of each subsystem, as well as the unknown interconnection. In addition, it will also provide initial condition for the input reconstructor and local fault filter once FD & RCA procedure is triggered by any fault. 
Two underlying issues are worth highlighting: on the one hand, the measurement used in the estimator of the former subsystem is assumed to be inaccessible; the solution is to replace it by the estimate provided by the estimator of the latter subsystem. In fact, this unknown output is the unknown interconnection of the interconnected system, and also the input of the latter subsystem.
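To illustrate the model-based fault detection idea this abstract builds on (a minimal generic sketch, not the thesis's multi-level FD & RCA algorithm): an observer tracks a subsystem model, and the residual between the measured and estimated output flags an additive actuator fault once it exceeds a threshold. The scalar plant, observer gain and fault scenario below are all hypothetical.

```python
# Residual-based fault detection sketch. Scalar discrete-time plant:
#   x[k+1] = a*x[k] + b*(u[k] + f[k]),   y[k] = x[k]
# where f[k] is an additive actuator fault. A Luenberger-style observer
# driven by (u, y) produces a residual r = y - y_hat that grows once f != 0.

def simulate(n_steps=60, fault_at=30, fault_size=0.5):
    a, b, L = 0.9, 1.0, 0.5          # plant and observer gain (hypothetical)
    x, x_hat = 0.0, 0.0
    residuals = []
    for k in range(n_steps):
        u = 1.0                       # known control input
        f = fault_size if k >= fault_at else 0.0
        y = x                         # full-state measurement for simplicity
        r = y - x_hat                 # residual: measurement minus estimate
        residuals.append(r)
        x_hat = a * x_hat + b * u + L * r   # observer update
        x = a * x + b * (u + f)             # true plant with actuator fault
    return residuals

residuals = simulate()
threshold = 0.1
alarm = next(k for k, r in enumerate(residuals) if abs(r) > threshold)
print(alarm)  # first alarm one step after the fault is injected at k = 30
```

Before the fault, plant and observer follow identical recursions, so the residual stays at zero; one step after the fault enters, the residual jumps by b times the fault size.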
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Sagha, Hossein. "Development of innovative robust stability enhancement algorithms for distribution systems containing distributed generators". Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/91052/1/Hossein_Sagha_Thesis.pdf.

Texto completo da fonte
Resumo:
This project was a step forward in improving the voltage profile of traditional low-voltage distribution networks with high photovoltaic generation or high peak demand. As a practical and economical solution, the developed methods use a Dynamic Voltage Restorer (DVR), a series voltage compensator, for continuous and communication-less power quality enhancement. The placement of the DVR in the network is optimised in order to minimise its power rating and cost. In addition, new approaches were developed for grid synchronisation and control of the DVR, which are integrated with the voltage quality improvement algorithm for stable operation.
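The series-compensation principle behind a DVR can be sketched in a few lines (a generic illustration, not the placement or control method developed in the thesis): the DVR injects the phasor difference between the desired load voltage and the sagged grid voltage, so the load always sees its reference voltage; the injected magnitude is what drives the device rating. The sag numbers are hypothetical.

```python
import cmath

def dvr_injection(v_grid, v_ref):
    """Series voltage a DVR must inject so that v_grid + v_inj = v_ref.
    Voltages are complex phasors (magnitudes in per-unit)."""
    return v_ref - v_grid

# Hypothetical 30 % voltage sag with a small phase jump.
v_ref = cmath.rect(1.00, 0.0)             # desired load voltage, 1.0 pu
v_sag = cmath.rect(0.70, cmath.pi / 18)   # sagged grid voltage, 0.7 pu, 10 deg
v_inj = dvr_injection(v_sag, v_ref)

v_load = v_sag + v_inj
print(abs(v_load))   # 1.0 pu: the load voltage is restored
print(abs(v_inj))    # injected magnitude, which sizes the DVR rating
```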
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Uguen, Yohann. "High-level synthesis and arithmetic optimizations". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI099.

Texto completo da fonte
Resumo:
À cause de la nature relativement jeune des outils de synthèse de haut-niveau (HLS), de nombreuses optimisations arithmétiques n'y sont pas encore implémentées. Cette thèse propose des optimisations arithmétiques se servant du contexte spécifique dans lequel les opérateurs sont instanciés. Certaines optimisations sont de simples spécialisations d'opérateurs, respectant la sémantique du C. D'autres nécessitent de s'éloigner de cette sémantique pour améliorer le compromis précision/coût/performance. Cette proposition est démontrée sur des sommes de produits de nombres flottants. La somme est réalisée dans un format en virgule-fixe défini par son contexte. Quand trop peu d’informations sont disponibles pour définir ce format en virgule-fixe, une stratégie est de générer un accumulateur couvrant l'intégralité du format flottant. Cette thèse explore plusieurs implémentations d'un tel accumulateur. L'utilisation d'une représentation en complément à deux permet de réduire le chemin critique de la boucle d'accumulation, ainsi que la quantité de ressources utilisées. Un format alternatif aux nombres flottants, appelé posit, propose d'utiliser un encodage à précision variable. De plus, ce format est augmenté par un accumulateur exact. Pour évaluer précisément le coût matériel de ce format, cette thèse présente des architectures d'opérateurs posits, implémentés avec le même degré d'optimisation que celui de l'état de l'art des opérateurs flottants. Une analyse détaillée montre que le coût des opérateurs posits est malgré tout bien plus élevé que celui de leurs équivalents flottants. Enfin, cette thèse présente une couche de compatibilité entre outils de HLS, permettant de viser plusieurs outils avec un seul code. Cette bibliothèque implémente un type d'entiers de taille variable, avec de plus une sémantique strictement typée, ainsi qu'un ensemble d'opérateurs ad-hoc optimisés
High-level synthesis (HLS) tools offer increased productivity for FPGA programming. However, due to their relatively young nature, they still lack many arithmetic optimizations. This thesis proposes arithmetic optimizations that exploit the specific context in which operators are instantiated. Some are safe operator specializations that follow the C semantics and should always be applied. Others require lifting the semantics embedded in high-level input languages, inherited from software programming, in exchange for an improved accuracy/cost/performance ratio. To demonstrate this claim, the sum of products of floating-point numbers is used as a case study. The sum is performed in a fixed-point format tailored to the application, according to the context in which the operator is instantiated. In some cases, there is not enough information about the input data to tailor the fixed-point accumulator. The fall-back strategy used in this thesis is to generate an accumulator covering the entire floating-point range. This thesis explores different strategies for implementing such a large accumulator, including new ones. The use of a 2's complement representation instead of sign+magnitude is shown to save resources and to reduce the accumulation loop delay. Based on a tapered precision scheme and an exact accumulator, the posit number system claims to be a candidate to replace the IEEE floating-point format. A thorough analysis of posit operators is performed, using the same level of hardware optimization as state-of-the-art floating-point operators. Their cost remains much higher than that of their floating-point counterparts in terms of resource usage and performance. Finally, this thesis presents a compatibility layer for HLS tools that allows one code base to be deployed on multiple tools. This library implements a strongly typed custom-size integer type alongside a set of optimized custom operators.
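The "accumulator covering the entire floating-point range" can be mimicked in software (a conceptual sketch of a Kulisch-style exact accumulator, not the thesis's hardware architecture): every finite IEEE-754 double is an integer multiple of 2^-1074, so scaling by 2^1074 turns each addend into an exact signed integer, and the running sum commits only one rounding, at the very end. Python's arbitrary-precision signed integers stand in here for the wide 2's-complement register used in hardware.

```python
from fractions import Fraction

SCALE = 2 ** 1074   # every finite IEEE-754 double is a multiple of 2**-1074

def exact_sum(values):
    """Sum doubles exactly in a wide fixed-point accumulator."""
    acc = 0
    for x in values:
        acc += int(Fraction(x) * SCALE)   # exact: Fraction(x) is dyadic
    return float(Fraction(acc, SCALE))    # single rounding at the end

data = [1e16, 1e-4, -1e16]
print(sum(data))        # 0.0    -- the 1e-4 is lost to rounding
print(exact_sum(data))  # 0.0001 -- recovered by the exact accumulator
```

The naive float sum loses the small addend because it is far below half an ulp of 1e16; the wide accumulator keeps it, illustrating why the order-independent exact sum is attractive for sum-of-products kernels.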
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Vestin, Albin, e Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms". Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Texto completo da fonte
Resumo:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle.
For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
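The filtering-versus-smoothing distinction in this abstract can be shown with a scalar toy model (a generic sketch, not the thesis's tracker): a Kalman filter runs forward over the measurements, and a Rauch-Tung-Striebel (RTS) backward pass then also uses future measurements, so the smoothed posterior variance is never larger than the filtered one. The model parameters and data are hypothetical.

```python
# Scalar random-walk model: x[k+1] = x[k] + w,  y[k] = x[k] + v.
Q, R = 0.1, 1.0          # process / measurement noise variances (hypothetical)
ys = [0.9, 1.2, 0.8, 1.5, 1.1]

# Forward Kalman filter.
m, P = 0.0, 10.0         # prior mean and variance
means, variances, preds = [], [], []
for y in ys:
    m_pred, P_pred = m, P + Q            # predict (random walk)
    preds.append((m_pred, P_pred))
    K = P_pred / (P_pred + R)            # Kalman gain
    m = m_pred + K * (y - m_pred)        # update with current measurement
    P = (1 - K) * P_pred
    means.append(m)
    variances.append(P)

# Backward RTS smoothing pass (non-causal: uses future information).
sm_means, sm_vars = means[:], variances[:]
for k in range(len(ys) - 2, -1, -1):
    m_pred, P_pred = preds[k + 1]
    G = variances[k] / P_pred            # smoother gain
    sm_means[k] = means[k] + G * (sm_means[k + 1] - m_pred)
    sm_vars[k] = variances[k] + G * G * (sm_vars[k + 1] - P_pred)

print(all(s <= f + 1e-12 for s, f in zip(sm_vars, variances)))  # True
```

At the final time step the two estimates coincide (no future data exists there), which matches the observation in the abstract that smoothing helps most away from the most certain states.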
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Gomes, Vanessa Limeira Azevedo. "Modelagem e previsão da perda de injetividade em poços canhoneados". Universidade Federal do Rio Grande do Norte, 2010. http://repositorio.ufrn.br:8080/jspui/handle/123456789/12928.

Texto completo da fonte
Resumo:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Waterflooding is a technique widely applied in the oil industry. The injected water displaces oil toward the producer wells and avoids reservoir pressure decline. However, suspended particles in the injected water may plug pore throats, causing formation damage (permeability reduction) and injectivity decline during waterflooding. When injectivity decline occurs, it is necessary to increase the injection pressure in order to maintain the water injection rate. Therefore, a reliable prediction of injectivity decline is essential in waterflooding projects. In this dissertation, a simulator based on the traditional porous-medium filtration model (including deep bed filtration and external filter cake formation) was developed and applied to predict injectivity decline in perforated wells (the prediction was made from history data). The experimental determination of the model coefficients and injectivity decline in open-hole wells are also discussed. The injectivity modeling showed good agreement with field data and can be used to support the planning of injection-well stimulation.
A injeção de água é uma técnica amplamente utilizada para deslocar o óleo em direção aos poços produtores e manter a pressão em reservatórios de petróleo. Entretanto, partículas suspensas na água injetada podem ser retidas no meio poroso, causando dano à formação (redução de permeabilidade) e perda de injetividade. Quando ocorre essa redução de injetividade é necessário aumentar a pressão de injeção para manter a vazão de água injetada. Desse modo, a correta previsão da perda de injetividade é essencial em projetos de injeção de água. Neste trabalho, um simulador, baseado no modelo tradicional da filtração em meios porosos (incluindo filtração profunda e formação do reboco externo), foi desenvolvido e aplicado para prever a perda de injetividade em poços canhoneados (tal previsão foi feita a partir de dados de histórico). Além disso, também foi discutida a determinação experimental dos coeficientes do modelo e a perda de injetividade em poços abertos. A modelagem da injetividade apresentou bom ajuste aos dados de campo, podendo ser utilizada para auxiliar no planejamento de estimulações de poços injetores
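The deep-bed filtration model referred to in this abstract is often summarised through an impedance curve (a generic sketch of the standard linear-impedance approximation, not the simulator developed in the dissertation): during deep bed filtration the impedance J (the reciprocal of the normalized injectivity index) grows roughly linearly with injected pore volumes, so injectivity declines monotonically. The lumped coefficient below is hypothetical.

```python
def injectivity_index(t_pv, m=0.8):
    """Normalized injectivity index II(t)/II(0) = 1 / J(t) under the
    linear-impedance approximation J(t) = 1 + m * t_pv, where t_pv is
    the number of injected pore volumes and m lumps the filtration and
    formation-damage coefficients (hypothetical value here)."""
    impedance = 1.0 + m * t_pv
    return 1.0 / impedance

history = [injectivity_index(t) for t in (0.0, 0.5, 1.0, 2.0)]
print(history[0])   # 1.0 at the start of injection
# Injectivity declines monotonically as particles are retained:
print(all(a > b for a, b in zip(history, history[1:])))  # True
```

In a history-matching workflow, m would be fitted to field injection-pressure data rather than assumed.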
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Hándlová, Barbora. "Obchodní centrum". Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2019. http://www.nusl.cz/ntk/nusl-391915.

Texto completo da fonte
Resumo:
The diploma thesis deals with the design of the steel structure of a shopping centre located in Zlín. The overall ground-plan dimensions of the structure are 153 x 49 m. The maximum height of the building is 23.1 m. The structure of the shopping centre is formed by columns and primary and secondary beams. The floor structure is designed as a composite steel-concrete structure. The roof structure above the central part of the building is designed in two alternative versions. A static calculation of the main load-bearing parts of the structure, including joints and details, is provided.
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Лавриненко, Олександр Юрійович, Александр Юрьевич Лавриненко e Oleksandr Lavrynenko. "Методи підвищення ефективності семантичного кодування мовних сигналів". Thesis, Національний авіаційний університет, 2021. https://er.nau.edu.ua/handle/NAU/52212.

Texto completo da fonte
Resumo:
Дисертаційна робота присвячена вирішенню актуальної науково-практичної проблеми в телекомунікаційних системах, а саме підвищення пропускної здатності каналу передачі семантичних мовних даних за рахунок ефективного їх кодування, тобто формулюється питання підвищення ефективності семантичного кодування, а саме – з якою мінімальною швидкістю можливо кодувати семантичні ознаки мовних сигналів із заданою ймовірністю безпомилкового їх розпізнавання? Саме на це питання буде дана відповідь у даному науковому дослідженні, що є актуальною науково-технічною задачею враховуючи зростаючу тенденцію дистанційної взаємодії людей і роботизованої техніки за допомогою мови, де безпомилковість функціонування даного типу систем безпосередньо залежить від ефективності семантичного кодування мовних сигналів. У роботі досліджено відомий метод підвищення ефективності семантичного кодування мовних сигналів на основі мел-частотних кепстральних коефіцієнтів, який полягає в знаходженні середніх значень коефіцієнтів дискретного косинусного перетворення прологарифмованої енергії спектра дискретного перетворення Фур'є обробленого трикутним фільтром в мел-шкалі. Проблема полягає в тому, що представлений метод семантичного кодування мовних сигналів на основі мел-частотних кепстральних коефіцієнтів не дотримується умови адаптивності, тому було сформульовано основну наукову гіпотезу дослідження, яка полягає в тому що підвищити ефективність семантичного кодування мовних сигналів можливо за рахунок використання адаптивного емпіричного вейвлет-перетворення з подальшим застосуванням спектрального аналізу Гільберта. Під ефективністю кодування розуміється зниження швидкості передачі інформації із заданою ймовірністю безпомилкового розпізнавання семантичних ознак мовних сигналів, що дозволить значно знизити необхідну смугу пропускання, тим самим підвищуючи пропускну здатність каналу зв'язку. 
У процесі доведення сформульованої наукової гіпотези дослідження були отримані наступні результати: 1) вперше розроблено метод семантичного кодування мовних сигналів на основі емпіричного вейвлетперетворення, який відрізняється від існуючих методів побудовою множини адаптивних смугових вейвлет-фільтрів Мейера з подальшим застосуванням спектрального аналізу Гільберта для знаходження миттєвих амплітуд і частот функцій внутрішніх емпіричних мод, що дозволить визначити семантичні ознаки мовних сигналів та підвищити ефективність їх кодування; 2) вперше запропоновано використовувати метод адаптивного емпіричного вейвлет-перетворення в задачах кратномасштабного аналізу та семантичного кодування мовних сигналів, що дозволить підвищити ефективність спектрального аналізу за рахунок розкладання високочастотного мовного коливання на його низькочастотні складові, а саме внутрішні емпіричні моди; 3) отримав подальший розвиток метод семантичного кодування мовних сигналів на основі мел-частотних кепстральних коефіцієнтів, але з використанням базових принципів адаптивного спектрального аналізу за допомогою емпіричного вейвлет-перетворення, що підвищує ефективність даного методу.
The thesis is devoted to solving a topical scientific and practical problem in telecommunication systems, namely increasing the bandwidth of the semantic speech data transmission channel through efficient coding. That is, the question of increasing the efficiency of semantic coding is formulated as follows: at what minimum rate is it possible to encode the semantic features of speech signals with a given probability of error-free recognition? This question is answered in this research, which is an urgent scientific and technical task given the growing trend of remote interaction between humans and robotic technology through speech, where the accuracy of this type of system depends directly on the effectiveness of the semantic coding of speech signals. The thesis investigates the well-known method of increasing the efficiency of semantic coding of speech signals based on mel-frequency cepstral coefficients, which consists in finding the average values of the coefficients of the discrete cosine transform of the logarithmic energy of the spectrum of the discrete Fourier transform processed by a triangular filter on the mel scale. The problem is that this method does not satisfy the condition of adaptivity, so the main scientific hypothesis of the study was formulated: the efficiency of semantic coding of speech signals can be increased by using an adaptive empirical wavelet transform followed by Hilbert spectral analysis. Coding efficiency here means a decrease in the information transmission rate with a given probability of error-free recognition of the semantic features of speech signals, which significantly reduces the required passband, thereby increasing the bandwidth of the communication channel.
In the process of proving the formulated scientific hypothesis, the following results were obtained: 1) for the first time, a method of semantic coding of speech signals based on the empirical wavelet transform was developed; it differs from existing methods by constructing a set of adaptive bandpass Meyer wavelet filters followed by Hilbert spectral analysis to find the instantaneous amplitudes and frequencies of the intrinsic empirical mode functions, which makes it possible to determine the semantic features of speech signals and increase the efficiency of their coding; 2) for the first time, it was proposed to use the adaptive empirical wavelet transform in problems of multiscale analysis and semantic coding of speech signals, which increases the efficiency of spectral analysis by decomposing the high-frequency speech oscillation into its low-frequency components, namely the intrinsic empirical modes; 3) the method of semantic coding of speech signals based on mel-frequency cepstral coefficients was further developed using the basic principles of adaptive spectral analysis via the empirical wavelet transform, which increases the efficiency of this method. Experimental research conducted in MATLAB R2020b showed that the developed method of semantic coding of speech signals based on the empirical wavelet transform reduces the encoding rate from 320 to 192 bit/s and the required passband from 40 to 24 Hz, with a probability of error-free recognition of about 0.96 (96%) at a signal-to-noise ratio of 48 dB; accordingly, its efficiency increases 1.6 times compared with the existing method.
The results obtained in the thesis can be used to build systems for remote interaction between humans and robotic equipment using speech technologies, such as speech recognition and synthesis, voice control of technical objects, low-bit-rate coding of speech information, voice translation from foreign languages, etc.
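The mel-frequency front end this abstract builds on follows a standard recipe (a generic sketch, not the thesis's adaptive wavelet method): map frequency to the mel scale, place triangular filters uniformly in mel, then take log energies and a DCT. The snippet shows just the mel mapping and the filter centre placement, using the common 2595·log10(1 + f/700) convention; the band edges and filter count are illustrative.

```python
import math

def hz_to_mel(f_hz):
    """Common mel-scale mapping used in MFCC front ends."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_centres(f_min, f_max, n_filters):
    """Centre frequencies of a triangular filterbank spaced uniformly
    on the mel scale (n_filters + 2 edge points, centres in between)."""
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    pts = [lo + (hi - lo) * i / (n_filters + 1) for i in range(n_filters + 2)]
    return [mel_to_hz(m) for m in pts[1:-1]]

print(round(hz_to_mel(700.0), 2))        # ~781.17 mel
centres = mel_filter_centres(0.0, 8000.0, 26)
print(len(centres))                      # 26 filter centres
```

Because the mapping is logarithmic, the centres bunch up at low frequencies and spread out at high frequencies, which is the behaviour the triangular mel filterbank is designed to produce.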
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Tunková, Martina. "Městské lázně". Master's thesis, Vysoké učení technické v Brně. Fakulta architektury, 2010. http://www.nusl.cz/ntk/nusl-215713.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Shen, Kuan-Wei, e 沈冠緯. "Using a Hybrid of Interval Type-2 RFCMAC and Bilateral Filter for Satellite Image Dehazing". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/764ug2.

Texto completo da fonte
Resumo:
Master's thesis
National Chin-Yi University of Technology (國立勤益科技大學)
Department of Computer Science and Information Engineering (資訊工程系)
105 (ROC academic year)
With advances in technology, remote sensing satellite imagery has made it possible to monitor the environment of the Earth's surface accurately and in real time, and to give earlier warning of otherwise unavoidable disasters. However, changeable weather, such as clouds or haze formed by atmospheric particles, lowers the contrast of satellite images and hides much of the information about the Earth's surface. Therefore, in this paper we address the dehazing of single satellite images: enhancing the contrast of the image and filtering out the haze covering the scene so that the lost information can be recovered. First, we use an interval type-2 RFCMAC model to estimate the initial transmission map of the image. To deal with halo artifacts and colour over-saturation, we refine the initial transmission map step by step with a bilateral filter and a quadratic nonlinear transformation. For the atmospheric light estimation, we adopt the brightest 1% of the image area as the colour vector of the atmospheric light. Finally, we take the refined transmission map and the atmospheric light as the two parameters for reconstructing the image. The experimental results show that this satellite image dehazing method is effective in terms of visible detail and colour contrast in the reconstructed image. Furthermore, to confirm these results, we carry out both visual assessment and quantitative evaluation in comparison with other authors' methods, and obtain better results in both.
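The reconstruction step described above follows the standard atmospheric scattering model I = J·t + A·(1 − t) (a generic sketch of the model inversion, not the RFCMAC transmission estimator itself): given a hazy intensity I, an estimated transmission t and atmospheric light A, the haze-free radiance J is recovered per pixel, with t clamped from below to keep the division well-conditioned. The pixel values here are hypothetical.

```python
def dehaze_pixel(I, t, A, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t) for one channel value.
    I: observed hazy intensity, t: estimated transmission, A: atmospheric
    light. t is clamped to t_min to avoid amplifying noise where the
    haze is thickest."""
    t = max(t, t_min)
    return (I - A) / t + A

A = 0.95                      # estimated atmospheric light (hypothetical)
# A pixel with true radiance J = 0.4 seen through transmission t = 0.5:
J_true, t = 0.4, 0.5
I_hazy = J_true * t + A * (1 - t)     # forward haze model
print(dehaze_pixel(I_hazy, t, A))     # recovers ~0.4 (up to rounding)
```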
Estilos ABNT, Harvard, Vancouver, APA, etc.
