Dissertations / Theses on the topic 'Error Transformations'

To see the other types of publications on this topic, follow the link: Error Transformations.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 44 dissertations / theses for your research on the topic 'Error Transformations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Gul, Yusuf. "Entanglement Transformations And Quantum Error Correction." PhD thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610773/index.pdf.

Full text
Abstract:
The main subject of this thesis is the investigation of the transformations of pure multipartite entangled states having Schmidt rank 2, using only local operations assisted by classical communication (LOCC). A new parameterization is used for describing the entangled state of p particles distributed to p distant, spatially separated persons. Product, bipartite and truly multipartite states are identified in this new parameterization. Moreover, alternative parameterizations of local operations carried out by each party are provided. For the case of a deterministic transformation to a truly multipartite final state, one can find an analytic expression that determines whether such a transformation is possible. In this case, a chain of measurements by each party for carrying out the transformation is found. It can also be seen that, under deterministic LOCC transformations, there are some quantities that remain invariant. For the purpose of applying the results of this thesis in the context of quantum information and computation, brief reviews of entanglement purification, measurement-based quantum computation and quantum codes are given.
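The Schmidt rank referred to in this abstract can be computed numerically for any chosen bipartition by reshaping the state vector into a matrix and counting its nonzero singular values. A minimal numpy sketch under illustrative assumptions (the GHZ-like rank-2 state, the bipartition and the tolerance are not taken from the thesis):

```python
import numpy as np

def schmidt_rank(state, dim_a, dim_b, tol=1e-12):
    """Schmidt rank of a pure state across the bipartition (dim_a | dim_b)."""
    matrix = state.reshape(dim_a, dim_b)            # amplitudes as a dim_a x dim_b matrix
    singular_values = np.linalg.svd(matrix, compute_uv=False)
    return int(np.sum(singular_values > tol))

# Three-qubit GHZ-like state (|000> + |111>)/sqrt(2): Schmidt rank 2 for the
# bipartition "first qubit | remaining two qubits".
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(schmidt_rank(ghz, dim_a=2, dim_b=4))          # -> 2
```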
APA, Harvard, Vancouver, ISO, and other styles
2

Suh, Sangwook. "Low-power discrete Fourier transform and soft-decision Viterbi decoder for OFDM receivers." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42716.

Full text
Abstract:
The purpose of this research is to present a low-power wireless communication receiver with enhanced performance by relieving the system complexity and performance degradation imposed by a quantization process. With an overwhelming demand for more reliable communication systems, the complexity required for modern communication systems has increased accordingly. A byproduct of this increase in complexity is a commensurate increase in power consumption of the systems. Since Shannon's era, the mainstream methodology for ensuring the high reliability of communication systems has been based on the principle that the information signals flowing through the system are represented in digits. Consequently, the system itself has been heavily driven to be implemented with digital circuits, which is generally beneficial over analog implementations when digitally stored information is locally accessible, such as in memory systems. However, in communication systems, a receiver does not have direct access to the originally transmitted information. Since the received signals from a noisy channel are already continuous values with continuous probability distributions, we suggest a mixed-signal system in which the received continuous signals are directly fed into the analog demodulator and the subsequent soft-decision Viterbi decoder without any quantization involved. In this way, we claim that redundant system complexity caused by the quantization process is eliminated, thus giving better power efficiency in wireless communication systems, especially for battery-powered mobile devices. This is also beneficial from a performance perspective, as it takes full advantage of the soft information flowing through the system.
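The value of keeping soft information, which this abstract argues for, can be illustrated with the textbook BPSK log-likelihood ratio over an AWGN channel: a hard decision keeps only the sign and discards the reliability magnitude a soft-decision Viterbi decoder can exploit. A small numpy sketch under these standard assumptions (the noise level and bits are arbitrary; this is not the thesis's mixed-signal implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=8)
symbols = 1.0 - 2.0 * bits                  # BPSK mapping: 0 -> +1, 1 -> -1
sigma = 0.8
received = symbols + sigma * rng.normal(size=bits.size)

# Soft decision: LLR = 2*y/sigma^2 keeps a reliability magnitude for the decoder.
llr = 2.0 * received / sigma**2
# Hard decision: only the sign survives, the reliability information is lost.
hard = (received < 0).astype(int)

print(np.round(llr, 2))
print(hard, bits)
```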
APA, Harvard, Vancouver, ISO, and other styles
3

TAKEDA, Kazuya, Norihide KITAOKA, and Makoto SAKAI. "Acoustic Feature Transformation Combining Average and Maximum Classification Error Minimization Criteria." Institute of Electronics, Information and Communication Engineers, 2010. http://hdl.handle.net/2237/14970.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lau, Buon Kiong. "Applications of Adaptive Antennas in Third-Generation Mobile Communications Systems." Thesis, Curtin University, 2002. http://hdl.handle.net/20.500.11937/2019.

Full text
Abstract:
Adaptive antenna systems (AAS's) are traditionally of interest only in radar and sonar applications. However, since the onset of the explosive growth in demand for wireless communications during the 1990's, researchers are giving increasing attention to the use of AAS technology to overcome practical challenges in providing the service. The main benefit of the technology lies in its ability to exploit the spatial domain, on top of the temporal and frequency domains, to improve on transceiver performance. This thesis presents a unified study on two classes of preprocessing techniques for uniform circular arrays (UCA's). UCA's are of interest because of their natural ability to provide a full azimuth (i.e. 360°) coverage found in typical scenarios for sensor array applications, such as radar, sonar and wireless communications. The two classes of preprocessing techniques studied are the Davies transformation and the interpolated array transformations. These techniques yield a mathematically more convenient form - the Vandermonde form - for the array steering vector via a linear transformation. The Vandermonde form is useful for different applications such as direction-of-arrival (DOA) estimation and optimum or minimum variance distortionless response (MVDR) beamforming in correlated signal environments and beampattern synthesis. A novel interpolated array transformation is proposed to overcome limitations in the existing interpolated array transformations. A disadvantage of the two classes of preprocessing techniques for UCA's with omnidirectional elements is the lack of robustness in the transformed array steering vector to array imperfections under certain conditions. In order to mitigate the robustness problem, optimisation problems are formulated to modify the transformation matrices. Suitable optimisation techniques are then applied to obtain more robust transformations. The improved transformations are shown to improve robustness but at the cost of larger transformation errors. The benefits of the robustification procedure are most apparent in DOA estimation. In addition to the algorithm level studies, the thesis also investigates the use of AAS technology with respect to two different third generation (3G) mobile communications systems: Enhanced Data rates for Global Evolution (EDGE) and Wideband Code Division Multiple Access (WCDMA). EDGE, or more generally GSM/EDGE Radio Access Network (GERAN), is the evolution of the widely successful GSM system to provide 3G mobile services in the existing radio spectrum. It builds on the TDMA technology of GSM and relies on improved coding and higher order modulation schemes to provide packet-based services at high data rates. WCDMA, on the other hand, is based on CDMA technology and is specially designed and streamlined for 3G mobile services. For WCDMA, a single-user approach to DOA estimation which utilises the user spreading code and the pulse-shaped chip waveform is proposed. It is shown that the proposed approach produces promising performance improvements. The studies with EDGE are concerned with the evaluation of a simple AAS at the system and link levels. Results from the system and link level simulations are presented to demonstrate the effectiveness of AAS technology in the new mobile communications system.
Finally, it is noted that the WCDMA and EDGE link level simulations employ the newly developed COST259 directional channel model, which is capable of producing accurate channel realisations of macrocell environments for the evaluation of AAS's.
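The Davies (phase-mode) transformation described above can be reproduced numerically for an ideal UCA of omnidirectional elements: a spatial DFT across the ring followed by division by j^m J_m(kr) yields, up to small aliasing terms, a Vandermonde steering vector in exp(jmθ). A sketch under textbook assumptions (element count, radius and mode order are illustrative; this is the ideal transformation, not the robustified versions studied in the thesis):

```python
import numpy as np
from scipy.special import jv

N = 16                      # number of UCA elements (assumed)
radius = 0.5                # array radius in wavelengths (assumed)
kr = 2 * np.pi * radius     # k*r with the wavelength normalised to 1
M = 4                       # highest excited phase mode, roughly kr
phi = 2 * np.pi * np.arange(N) / N                   # element angular positions
modes = np.arange(-M, M + 1)

def steering_uca(theta):
    """Ideal omnidirectional-element UCA steering vector for azimuth theta."""
    return np.exp(1j * kr * np.cos(theta - phi))

# Davies-style transformation: spatial DFT across the ring, then removal of
# the j^m * J_m(kr) factor of each phase mode.
F = np.exp(1j * np.outer(modes, phi)) / N
J = np.exp(1j * np.pi / 2 * modes) * jv(modes, kr)   # j^m * J_m(kr)
T = np.diag(1.0 / J) @ F

theta = np.deg2rad(37.0)
transformed = T @ steering_uca(theta)
vandermonde = np.exp(1j * modes * theta)             # target Vandermonde form
print(np.max(np.abs(transformed - vandermonde)))     # small residual (aliasing) error
```

The residual error printed at the end stems from the aliased phase modes; it grows when J_m(kr) approaches a Bessel zero, which is exactly the robustness problem the abstract mentions.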
APA, Harvard, Vancouver, ISO, and other styles
5

Lehmann, Rüdiger. "Ein automatisches Verfahren für geodätische Berechnungen." Hochschule für Technik und Wirtschaft Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-188715.

Full text
Abstract:
This manuscript grew out of lectures on geodetic computations at the University of Applied Sciences (HTW) Dresden. Since this course takes place in the first or second semester, no methods of higher mathematics are used yet. The range of topics is therefore largely restricted to elementary computations in the plane. Only in Chapter 7 are some methods of vector calculus employed.
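A minimal Python sketch of the kind of elementary plane computation such lecture notes cover is the classic pair of fundamental surveying tasks (point from bearing and distance, and the inverse). The coordinate convention assumed here (x = north, y = east, bearings in gon clockwise from +x) and the sample values are assumptions, not taken from the manuscript:

```python
import math

def polar_to_point(x1, y1, bearing_gon, distance):
    """First fundamental task: point from station, grid bearing (gon) and distance."""
    t = bearing_gon * math.pi / 200.0            # gon -> radians
    return x1 + distance * math.cos(t), y1 + distance * math.sin(t)

def point_to_polar(x1, y1, x2, y2):
    """Second fundamental task: bearing (gon) and distance between two points."""
    t = math.atan2(y2 - y1, x2 - x1) % (2 * math.pi)
    return t * 200.0 / math.pi, math.hypot(x2 - x1, y2 - y1)

x2, y2 = polar_to_point(1000.0, 2000.0, bearing_gon=50.0, distance=141.421)
print(round(x2, 3), round(y2, 3))                # ~ (1100.0, 2100.0)
print(point_to_polar(1000.0, 2000.0, x2, y2))    # ~ (50.0, 141.421)
```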
APA, Harvard, Vancouver, ISO, and other styles
6

Mannem, Narender Reddy. "Adaptive Data Rate Multicarrier Direct Sequence Spread Spectrum in Rayleigh Fading Channel." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1125782227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lau, Buon Kiong. "Applications of Adaptive Antennas in Third-Generation Mobile Communications Systems." Curtin University of Technology, Australian Telecommunications Research Institute, 2002. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=12983.

Full text
Abstract:
Adaptive antenna systems (AAS's) are traditionally of interest only in radar and sonar applications. However, since the onset of the explosive growth in demand for wireless communications during the 1990's, researchers are giving increasing attention to the use of AAS technology to overcome practical challenges in providing the service. The main benefit of the technology lies in its ability to exploit the spatial domain, on top of the temporal and frequency domains, to improve on transceiver performance. This thesis presents a unified study on two classes of preprocessing techniques for uniform circular arrays (UCA's). UCA's are of interest because of their natural ability to provide a full azimuth (i.e. 360°) coverage found in typical scenarios for sensor array applications, such as radar, sonar and wireless communications. The two classes of preprocessing techniques studied are the Davies transformation and the interpolated array transformations. These techniques yield a mathematically more convenient form - the Vandermonde form - for the array steering vector via a linear transformation. The Vandermonde form is useful for different applications such as direction-of-arrival (DOA) estimation and optimum or minimum variance distortionless response (MVDR) beamforming in correlated signal environments and beampattern synthesis. A novel interpolated array transformation is proposed to overcome limitations in the existing interpolated array transformations. A disadvantage of the two classes of preprocessing techniques for UCA's with omnidirectional elements is the lack of robustness in the transformed array steering vector to array imperfections under certain conditions. In order to mitigate the robustness problem, optimisation problems are formulated to modify the transformation matrices.
Suitable optimisation techniques are then applied to obtain more robust transformations. The improved transformations are shown to improve robustness but at the cost of larger transformation errors. The benefits of the robustification procedure are most apparent in DOA estimation. In addition to the algorithm level studies, the thesis also investigates the use of AAS technology with respect to two different third generation (3G) mobile communications systems: Enhanced Data rates for Global Evolution (EDGE) and Wideband Code Division Multiple Access (WCDMA). EDGE, or more generally GSM/EDGE Radio Access Network (GERAN), is the evolution of the widely successful GSM system to provide 3G mobile services in the existing radio spectrum. It builds on the TDMA technology of GSM and relies on improved coding and higher order modulation schemes to provide packet-based services at high data rates. WCDMA, on the other hand, is based on CDMA technology and is specially designed and streamlined for 3G mobile services. For WCDMA, a single-user approach to DOA estimation which utilises the user spreading code and the pulse-shaped chip waveform is proposed. It is shown that the proposed approach produces promising performance improvements. The studies with EDGE are concerned with the evaluation of a simple AAS at the system and link levels.
Results from the system and link level simulations are presented to demonstrate the effectiveness of AAS technology in the new mobile communications system. Finally, it is noted that the WCDMA and EDGE link level simulations employ the newly developed COST259 directional channel model, which is capable of producing accurate channel realisations of macrocell environments for the evaluation of AAS's.
APA, Harvard, Vancouver, ISO, and other styles
8

Halbach, Till. "Error-robust coding and transformation of compressed hybered hybrid video streams for packet-switched wireless networks." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-136.

Full text
Abstract:

This dissertation considers packet-switched wireless networks for transmission of variable-rate layered hybrid video streams. Target applications are video streaming and broadcasting services. The work can be divided into two main parts.

In the first part, a novel quality-scalable scheme based on coefficient refinement and encoder quality constraints is developed as a possible extension to the video coding standard H.264. After a technical introduction to the coding tools of H.264 with the main focus on error resilience features, various quality scalability schemes in previous research are reviewed. Based on this discussion, an encoder decoder framework is designed for an arbitrary number of quality layers, hereby also enabling region-of-interest coding. After that, the performance of the new system is exhaustively tested, showing that the bit rate increase typically encountered with scalable hybrid coding schemes is, for certain coding parameters, only small to moderate. The double- and triple-layer constellations of the framework are shown to perform superior to other systems.

The second part considers layered code streams as generated by the scheme of the first part. Various error propagation issues in hybrid streams are discussed, which leads to the definition of a decoder quality constraint and a segmentation of the code stream to transmit. A packetization scheme based on successive source rate consumption is drafted, followed by the formulation of the channel code rate optimization problem for an optimum assignment of available codes to the channel packets. Proper MSE-based error metrics are derived, incorporating the properties of the source signal, a terminate-on-error decoding strategy, error concealment, inter-packet dependencies, and the channel conditions. The Viterbi algorithm is presented as a low-complexity solution to the optimization problem, showing a great adaptivity of the joint source channel coding scheme to the channel conditions. An almost constant image quality is achieved, also in mismatch situations, while the overall channel code rate decreases only as little as necessary as the channel quality deteriorates. It is further shown that the variance of code distributions is only small, and that the codes are assigned irregularly to all channel packets.

A double-layer constellation of the framework clearly outperforms other schemes with a substantial margin.

Keywords — Digital lossy video compression, visual communication, variable bit rate (VBR), SNR scalability, layered image processing, quality layer, hybrid code stream, predictive coding, progressive bit stream, joint source channel coding, fidelity constraint, channel error robustness, resilience, concealment, packet-switched, mobile and wireless ATM, noisy transmission, packet loss, binary symmetric channel, streaming, broadcasting, satellite and radio links, H.264, MPEG-4 AVC, Viterbi, trellis, unequal error protection

APA, Harvard, Vancouver, ISO, and other styles
9

Nestler, Franziska. "Automated Parameter Tuning based on RMS Errors for nonequispaced FFTs." Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-160989.

Full text
Abstract:
In this paper we study the error behavior of the well known fast Fourier transform for nonequispaced data (NFFT) with respect to the L2-norm. We compare the arising errors for different window functions and show that the accuracy of the algorithm can be significantly improved by modifying the shape of the window function. Based on the considered error estimates for different window functions we are able to state an easy and efficient method to tune the involved parameters automatically. The numerical examples show that the optimal parameters depend on the given Fourier coefficients, which are assumed not to be of a random structure or roughly of the same magnitude but rather subject to a certain decrease.
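For reference, the sums that the NFFT evaluates approximately can be written down directly; the O(NM) sketch below is the exact nonequispaced DFT against which an NFFT implementation and its window and parameter choices would be checked. The sign convention and the decaying test coefficients are assumptions for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32                                           # number of Fourier coefficients
M = 20                                           # number of nonequispaced nodes
k = np.arange(-N // 2, N // 2)
f_hat = rng.normal(size=N) / (1 + np.abs(k))     # decaying coefficients (assumed)
x = rng.uniform(-0.5, 0.5, size=M)               # nonequispaced nodes in [-1/2, 1/2)

# Direct nonequispaced discrete Fourier transform (NDFT), O(N*M):
f = np.exp(-2j * np.pi * np.outer(x, k)) @ f_hat
print(f[:3])
```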
APA, Harvard, Vancouver, ISO, and other styles
10

Dridi, Marwa. "Sur les méthodes rapides de résolution de systèmes de Toeplitz bandes." Thesis, Littoral, 2016. http://www.theses.fr/2016DUNK0402/document.

Full text
Abstract:
Cette thèse vise à la conception de nouveaux algorithmes rapides en calcul numérique via les matrices de Toeplitz. Tout d'abord, nous avons introduit un algorithme rapide sur le calcul de l'inverse d'une matrice triangulaire de Toeplitz en se basant sur des notions d'interpolation polynomiale. Cet algorithme nécessitant uniquement deux FFT(2n) est manifestement efficace par rapport à ses prédécesseurs. Ensuite, nous avons introduit un algorithme rapide pour la résolution d'un système linéaire de Toeplitz bande. Cette approche est basée sur l'extension de la matrice donnée par plusieurs lignes en dessus, de plusieurs colonnes à droite et d'attribuer des zéros et des constantes non nulles dans chacune de ces lignes et de ces colonnes de telle façon que la matrice augmentée ait la structure d'une matrice triangulaire inférieure de Toeplitz. La stabilité de l'algorithme a été discutée et son efficacité a été aussi justifiée. Finalement, nous avons abordé la résolution d'un système de Toeplitz bandes par blocs bandes de Toeplitz. Ceci étant primordial pour établir la connexion de nos algorithmes à des applications en restauration d'images, un domaine phare en mathématiques appliquées.
This thesis aims at designing new fast algorithms for numerical computation with Toeplitz matrices. First, we introduce a fast algorithm for computing the inverse of a triangular Toeplitz matrix with real and/or complex entries, based on polynomial interpolation techniques. This algorithm, which requires only two FFTs of length 2n, is clearly effective compared to its predecessors. A numerical accuracy and error analysis is also considered, and numerical examples are given to illustrate the effectiveness of our method. In addition, we introduce a fast algorithm for solving a banded Toeplitz linear system. This new approach is based on extending the given matrix with several rows on the top and several columns on the right, and on assigning zeros and some nonzero constants in each of these rows and columns in such a way that the augmented matrix has a lower triangular Toeplitz structure. The stability of the algorithm is discussed and its performance is shown by numerical experiments. This is essential for connecting our algorithms to applications such as image restoration, a key area in applied mathematics.
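The two-FFT(2n) interpolation algorithm itself is specific to the thesis; the sketch below only shows the underlying fact it exploits, namely that the inverse of a lower triangular Toeplitz matrix is again lower triangular Toeplitz and is determined by its first column, using a plain O(n²) recurrence as a reference (test values are arbitrary):

```python
import numpy as np
from scipy.linalg import toeplitz

def inv_first_column(t):
    """First column of the inverse of a lower triangular Toeplitz matrix with
    first column t (t[0] != 0). Plain O(n^2) recurrence, used here only as a
    reference against which fast FFT-based algorithms can be checked."""
    n = len(t)
    s = np.zeros(n)
    s[0] = 1.0 / t[0]
    for k in range(1, n):
        s[k] = -np.dot(t[1:k + 1], s[k - 1::-1]) / t[0]
    return s

t = np.array([2.0, -1.0, 0.5, 0.25, -0.3])
L = toeplitz(t, np.zeros_like(t))                 # lower triangular Toeplitz from first column
S = toeplitz(inv_first_column(t), np.zeros_like(t))
print(np.allclose(L @ S, np.eye(len(t))))         # True
```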
APA, Harvard, Vancouver, ISO, and other styles
11

Pospíšilík, Oldřich. "Standardy a kódování zdrojového kódu PHP." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-237471.

Full text
Abstract:
This master's thesis deals with methodologies for writing source code and their impact on the effectiveness of programming, and with the possibilities for detecting error patterns in PHP source code. Specifically, it addresses the integration of static analysis tools into the work of a development team. The team was chosen by the supervisor, Ing. Michael Jurosz, who is in charge of the development and expansion of the internet information system of the Brno University of Technology. The thesis surveys the best available tools for static analysis of the PHP language. After the evaluation and selection of tools, their analysis and informal specification follow, and then a detailed design and a description of the implementation and integration. The conclusion assesses the work as a whole, its added value for the development team, and the further development of the tool.
APA, Harvard, Vancouver, ISO, and other styles
12

Qureshi, Muhammad Ayyaz [Verfasser], Thomas [Akademischer Betreuer] Eibert, and Hendrik [Akademischer Betreuer] Rogier. "Near-Field Error Analysis and Efficient Sampling Techniques for the Fast Irregular Antenna Field Transformation Algorithm / Muhammad Ayyaz Qureshi. Gutachter: Thomas Eibert ; Hendrik Rogier. Betreuer: Thomas Eibert." München : Universitätsbibliothek der TU München, 2013. http://d-nb.info/1045345717/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Damouche, Nasrine. "Improving the Numerical Accuracy of Floating-Point Programs with Automatic Code Transformation Methods." Thesis, Perpignan, 2016. http://www.theses.fr/2016PERP0032/document.

Full text
Abstract:
Les systèmes critiques basés sur l'arithmétique flottante exigent un processus rigoureux de vérification et de validation pour augmenter notre confiance en leur sûreté et leur fiabilité. Malheureusement, les techniques existantes fournissent souvent une surestimation d'erreurs d'arrondi. Nous citons Ariane 5 et le missile Patriot comme fameux exemples de désastres causés par les erreurs de calculs. Ces dernières années, plusieurs techniques concernant la transformation d'expressions arithmétiques pour améliorer la précision numérique ont été proposées. Dans ce travail, nous allons une étape plus loin en transformant automatiquement non seulement des expressions arithmétiques mais des programmes complets contenant des affectations, des structures de contrôle et des fonctions. Nous définissons un ensemble de règles de transformation permettant la génération, sous certaines conditions et en un temps polynômial, des expressions plus larges en appliquant des calculs formels limités, au sein de plusieurs itérations d'une boucle. Par la suite, ces larges expressions sont re-parenthésées pour trouver la meilleure expression améliorant ainsi la précision numérique des calculs de programmes. Notre approche se base sur les techniques d'analyse statique par interprétation abstraite pour sur-approximer les erreurs d'arrondi dans les programmes et au moment de la transformation des expressions. Cette approche est implémentée dans notre outil et des résultats expérimentaux sur des algorithmes numériques classiques et des programmes venant du monde de l'embarqué sont présentés.
Critical software based on floating-point arithmetic requires a rigorous verification and validation process to improve our confidence in its reliability and safety. Unfortunately, the available techniques for this task often provide overestimates of the round-off errors; Ariane 5 and the Patriot missile are well-known examples of disasters caused by computation errors. In recent years, several techniques have been proposed for transforming arithmetic expressions in order to improve their numerical accuracy and, in this work, we go one step further by automatically transforming larger pieces of code containing assignments, control structures and functions. We define a set of transformation rules allowing the generation, under certain conditions and in polynomial time, of larger expressions by performing limited formal computations, possibly across several iterations of a loop. These larger expressions are better suited to improving, by re-parenthesization, the numerical accuracy of the program results. We use static analysis techniques based on abstract interpretation to over-approximate the round-off errors in programs and during the transformation of expressions. A tool has been implemented, and experimental results are presented for classical numerical algorithms and programs from the embedded systems domain.
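The phenomenon these transformation rules target, that the same mathematical expression evaluated in different orders yields different floating-point results, is easy to reproduce. A small sketch with arbitrary test values (not an example from the thesis):

```python
import math

# The same mathematical sum, evaluated in different orders, gives different
# floating-point results; picking a better evaluation order improves accuracy.
values = [1e16, 1.0, -1e16] * 1000 + [1.0]

naive_left_to_right = 0.0
for v in values:
    naive_left_to_right += v           # each 1.0 is absorbed next to 1e16

print(naive_left_to_right)             # far from the true value
print(math.fsum(values))               # correctly rounded reference: 1001.0
print(sum(sorted(values)))             # yet another ordering, yet another result
```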
APA, Harvard, Vancouver, ISO, and other styles
14

Vacher, André. "Calcul cablé d'une transformée de Fourier à très grand nombre d'échantillons, éventuellement multidimensionnelle." Grenoble INPG, 1997. http://www.theses.fr/1997INPG0020.

Full text
Abstract:
Hard-wired computation of a Fourier transform accelerates it very strongly. Military applications have seen solutions for small numbers of samples and with limited precision. Pushing back these limits requires reducing the implementation area. A large number of computation cells, the butterflies, using serial operators and working in parallel makes it possible to obtain better precision and high speed. The area overhead was verified in an implementation, which is presented together with its perspectives. A multi-chip solution imposes the choice of a two-level architecture, serial butterflies and parallel communication buses, one of which is favoured in terms of utilisation rate and operating frequency. The precision depends on that of the original data and on the number of stages, and hence of samples. Variable-size operators make it possible to trade precision against area or speed, depending on the number of butterfly banks implemented. The operator parameters optimise the architecture of a Fourier transform for a given decomposition of it. Radices 2 and 4 are the only ones actually used for the decomposition of the computation. The estimation of area and computation time shows a gain for hard-wired solutions for radices 8 and 12. Multidimensional transforms exhibit a smaller error, for an equal total number of samples, because of the larger number of simple exponential coefficients. They are the target of civil applications with large numbers of samples, such as imaging or spatial data. The crystallographic method is one of them, with, in addition, the presence of many zero-valued samples, which leads to studying the error in the case of sparse matrices, in order to use, in certain cases, existing circuits beyond their original applications. These different avenues make it possible to envisage the development of hard-wired architectures for Fourier transforms with large numbers of samples, particularly in the case of multidimensional transforms.
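The radix-2 butterfly that such hard-wired architectures replicate in silicon can be written down in a few lines; the floating-point recursion below is only a functional reference for the butterfly structure, not the fixed-point, serial-operator design discussed in the thesis:

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT (length must be a power of two)."""
    x = np.asarray(x, dtype=complex)
    n = x.size
    if n == 1:
        return x
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    twiddles = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    # The butterfly: each output pair combines one "even" term with one twiddled "odd" term.
    return np.concatenate([even + twiddles * odd, even - twiddles * odd])

x = np.random.default_rng(2).normal(size=16)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))   # True
```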
APA, Harvard, Vancouver, ISO, and other styles
15

Jacobson, Craig. "INTERNATIONAL SPACE STATION REMOTE SENSING POINTING ANALYSIS." Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3308.

Full text
Abstract:
This paper analyzes the geometric and disturbance aspects of utilizing the International Space Station for remote sensing of earth targets. The proposed instrument is SHORE (Station High-Sensitivity Ocean Research Experiment), a multi-band optical spectrometer with 15 m pixel resolution. The analysis investigates the contribution of the error effects to the quality of data collected by the instrument. The analysis begins with the discussion of the coordinate systems involved and then conversion from the target coordinate system to the instrument coordinate system. Next the geometry of remote observations from the Space Station is investigated including the effects of the instrument location in Space Station and the effects of the line of sight to the target. The disturbance and error environment on Space Station is discussed covering factors contributing to drift and jitter, accuracy of pointing data and target and instrument accuracies. Finally, there is a brief discussion of image processing to address any post error correction options.
M.S.A.E.
Department of Mechanical, Materials and Aerospace Engineering;
Engineering and Computer Science
Aerospace Engineering
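One way to see why the drift and jitter discussed in the abstract above matter for a 15 m pixel instrument is the small-angle relation between a pointing error and the resulting ground displacement. The sketch below assumes a nadir-viewing geometry and an ISS-like altitude of roughly 400 km; these values are illustrative assumptions, not figures from the thesis:

```python
import math

altitude_m = 400e3                        # assumed ISS-like orbital altitude
pixel_m = 15.0                            # instrument pixel size from the abstract

def ground_shift(pointing_error_arcsec, range_m=altitude_m):
    """Small-angle ground displacement for a nadir-viewing line of sight."""
    error_rad = pointing_error_arcsec * math.pi / (180.0 * 3600.0)
    return range_m * error_rad

for err in (1.0, 5.0, 10.0):
    shift = ground_shift(err)
    print(f"{err:5.1f} arcsec -> {shift:6.1f} m ({shift / pixel_m:.2f} pixels)")
```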
APA, Harvard, Vancouver, ISO, and other styles
16

Куц, Юрій Вікторович, and Yurii Kuts. "Метод підвищення точності вимірювань радіосигналів при адаптивній фільтрації." Master's thesis, ТНТУ ім. І. Пулюя, Факультет прикладних інформаційних технологій та електроінженерії, кафедра біотехнічних систем, 2021. http://elartu.tntu.edu.ua/handle/lib/36525.

Full text
Abstract:
В кваліфікаційній роботі розроблено алгоритм адаптивного пошуку оптимальних параметрів фільтра, зокрема, проведений аналіз досліджуваної системи на основі трьох вхідних сигналів: синусоїдальний, функція Хевісайду, імпульсний, та визначено що найбільш якісно та ефективно фільтрує сигнал фільтр Чебишева. Динамічна похибка із застосуванням цього фільтру зменшилася до 2 разів.
The qualification work develops an algorithm for the adaptive search of optimal filter parameters. In particular, the studied system is analysed for three input signals: a sinusoid, the Heaviside function and a pulse, and it is determined that the Chebyshev filter filters the signal most effectively. With this filter, the dynamic error is reduced by up to a factor of two.
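A Chebyshev filter of the kind the abstract identifies as the most effective can be designed and applied in a few lines with scipy; the order, ripple, cut-off and test signal below are illustrative assumptions, not the optimal parameters found by the adaptive search in the thesis:

```python
import numpy as np
from scipy import signal

fs = 1000.0                                         # sampling frequency, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 5.0 * t)                 # useful low-frequency component
noisy = clean + 0.4 * np.random.default_rng(3).normal(size=t.size)

# Chebyshev type I low-pass filter: order 4, 1 dB passband ripple, 20 Hz cut-off.
b, a = signal.cheby1(N=4, rp=1.0, Wn=20.0, btype="low", fs=fs)
filtered = signal.filtfilt(b, a, noisy)             # zero-phase filtering

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(rms(noisy - clean), rms(filtered - clean))    # error before and after filtering
```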
APA, Harvard, Vancouver, ISO, and other styles
17

Nieuwoudt, Christoph. "Cross-language acoustic adaptation for automatic speech recognition." Thesis, Pretoria : [s.n.], 2000. http://upetd.up.ac.za/thesis/available/etd-01062005-071829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Pippig, Michael. "Massively Parallel, Fast Fourier Transforms and Particle-Mesh Methods: Massiv parallele schnelle Fourier-Transformationen und Teilchen-Gitter-Methoden." Doctoral thesis, Universitätsverlag der Technischen Universität Chemnitz, 2015. https://monarch.qucosa.de/id/qucosa%3A20398.

Full text
Abstract:
The present thesis provides a modularized view on the structure of fast numerical methods for computing Coulomb interactions between charged particles in three-dimensional space. Thereby, the common structure is given in terms of three self-contained algorithmic frameworks that are built on top of each other, namely fast Fourier transform (FFT), nonequispaced fast Fourier transform (NFFT) and NFFT based particle-mesh methods (P²NFFT). For each of these frameworks algorithmic enhancement and parallel implementations are presented with special emphasis on scalability up to hundreds of thousands of parallel processes. In the context of FFT massively parallel algorithms are composed from hardware adaptive low level modules provided by the FFTW software library. The new algorithmic NFFT concepts include pruned NFFT, interlacing, analytic differentiation, and optimized deconvolution in Fourier space with respect to a mean square aliasing error. Enabled by these generalized concepts it is shown that NFFT provides a unified access to particle-mesh methods. Especially, mixed-periodic boundary conditions are handled in a consistent way and interlacing can be incorporated more efficiently. Heuristic approaches for parameter tuning are presented on the basis of thorough error estimates.
Die vorliegende Dissertation beschreibt einen modularisierten Blick auf die Struktur schneller numerischer Methoden für die Berechnung der Coulomb-Wechselwirkungen zwischen Ladungen im dreidimensionalen Raum. Die gemeinsame Struktur ist geprägt durch drei selbstständige und auf einander aufbauenden Algorithmen, nämlich der schnellen Fourier-Transformation (FFT), der nicht äquidistanten schnellen Fourier-Transformation (NFFT) und der NFFT-basierten Teilchen-Gitter-Methode (P²NFFT). Für jeden dieser Algorithmen werden Verbesserungen und parallele Implementierungen vorgestellt mit besonderem Augenmerk auf massiv paralleler Skalierbarkeit. Im Kontext der FFT werden parallele Algorithmen aus den Hardware adaptiven Modulen der FFTW Softwarebibliothek zusammengesetzt. Die neuen NFFT-Konzepte beinhalten abgeschnittene NFFT, Versatz, analytische Differentiation und optimierte Entfaltung im Fourier-Raum bezüglich des mittleren quadratischen Aliasfehlers. Mit Hilfe dieser Verallgemeinerungen bietet die NFFT einen vereinheitlichten Zugang zu Teilchen-Gitter-Methoden. Insbesondere gemischt periodische Randbedingungen werden einheitlich behandelt und Versatz wird effizienter umgesetzt. Heuristiken für die Parameterwahl werden auf Basis sorgfältiger Fehlerabschätzungen angegeben.
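The quantity these FFT-based particle-mesh frameworks evaluate quickly is, in its naive form, a pairwise Coulomb sum. The O(N²) reference below (open boundaries, unit-free charges, no Ewald splitting — all assumptions for illustration) is the kind of baseline against which a fast method of the P²NFFT type would be validated:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
positions = rng.uniform(0.0, 10.0, size=(n, 3))
charges = rng.choice([-1.0, 1.0], size=n)

def coulomb_energy_direct(pos, q):
    """Direct O(N^2) Coulomb energy sum, open boundary conditions, unit-free charges."""
    energy = 0.0
    for i in range(len(q)):
        r = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        energy += np.sum(q[i] * q[i + 1:] / r)
    return energy

print(coulomb_energy_direct(positions, charges))
```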
APA, Harvard, Vancouver, ISO, and other styles
19

Sundström, David. "On specification and inference in the econometrics of public procurement." Doctoral thesis, Umeå universitet, Nationalekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-121681.

Full text
Abstract:
In Paper [I] we use data on Swedish public procurement auctions for internal regular cleaning service contracts to provide novel empirical evidence regarding green public procurement (GPP) and its effect on the potential suppliers' decision to submit a bid and their probability of being qualified for supplier selection. We find only a weak effect on supplier behavior, which suggests that GPP does not live up to its political expectations. However, several environmental criteria appear to be associated with increased complexity, as indicated by the reduced probability of a bid being qualified in the post-qualification process. As such, GPP appears to have limited or no potential to function as an environmental policy instrument. In Paper [II] the observation is made that empirical evaluations of the effect of policies transmitted through public procurements on bid sizes are made using linear regressions or by more involved non-linear structural models. The aspiration is typically to determine a marginal effect. Here, I compare marginal effects generated under both types of specifications. I study how a political initiative to make firms less environmentally damaging, implemented through public procurement, influences Swedish firms' behavior. The collected evidence brings about a statistically as well as economically significant effect on firms' bids and costs. Paper [III] embarks by noting that auction theory suggests that as the number of bidders (competition) increases, the sizes of the participants' bids decrease. An issue in the empirical literature on auctions is which measurement(s) of competition to use. Utilizing a dataset on public procurements containing measurements on both the actual and potential number of bidders, I find that a workhorse model of public procurements is best fitted to data using only actual bidders as the measurement of competition. Acknowledging that all measurements of competition may be erroneous, I propose an instrumental variable estimator that (given my data) brings about a competition effect bounded by those generated by specifications using the actual and potential number of bidders, respectively. Also, some asymptotic results are provided for non-linear least squares estimators obtained from a dependent variable transformation model. Paper [IV] introduces a novel method to measure bidders' costs (valuations) in descending (ascending) auctions. Based on two bounded rationality constraints, bidders' costs (valuations) are given an imperfect measurements interpretation robust to behavioral deviations from traditional rationality assumptions. Theory provides no guidance as to the shape of the cost (valuation) distributions, while empirical evidence suggests them to be positively skewed. Consequently, a flexible distribution is employed in an imperfect measurements framework. An illustration of the proposed method on Swedish public procurement data is provided along with a comparison to a traditional Bayesian Nash Equilibrium approach.
APA, Harvard, Vancouver, ISO, and other styles
20

Raillon, Loic. "Experimental identification of physical thermal models for demand response and performance evaluation." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI039.

Full text
Abstract:
La stratégie de l’Union Européenne pour atteindre les objectifs climatiques, est d’augmenter progressivement la part d’énergies renouvelables dans le mix énergétique et d’utiliser l’énergie plus efficacement de la production à la consommation finale. Cela implique de mesurer les performances énergétiques du bâtiment et des systèmes associés, indépendamment des conditions climatiques et de l’usage, pour fournir des solutions efficaces et adaptées de rénovation. Cela implique également de connaître la demande énergétique pour anticiper la production et le stockage d’énergie (mécanismes de demande et réponse). L’estimation des besoins énergétiques et des performances énergétiques des bâtiments ont un verrou scientifique commun : l’identification expérimentale d’un modèle physique du comportement intrinsèque du bâtiment. Les modèles boîte grise, déterminés d’après des lois physiques et les modèles boîte noire, déterminés heuristiquement, peuvent représenter un même système physique. Des relations entre les paramètres physiques et heuristiques existent si la structure de la boîte noire est choisie de sorte qu’elle corresponde à la structure physique. Pour trouver la meilleure représentation, nous proposons d’utiliser, des simulations de Monte Carlo pour analyser la propagation des erreurs dans les différentes transformations de modèle et, une méthode de priorisation pour classer l’influence des paramètres. Les résultats obtenus indiquent qu’il est préférable d’identifier les paramètres physiques. Néanmoins, les informations physiques, déterminées depuis l’estimation des paramètres, sont fiables si la structure est inversible et si la quantité d’information dans les données est suffisante. Nous montrons comment une structure de modèle identifiable peut être choisie, notamment grâce au profil de vraisemblance. L’identification expérimentale comporte trois phases : la sélection, la calibration et la validation du modèle. Ces trois phases sont détaillées dans le cas d’une expérimentation d’une maison réelle en utilisant une approche fréquentiste et Bayésienne. Plus précisément, nous proposons une méthode efficace de calibration Bayésienne pour estimer la distribution postérieure des paramètres et ainsi réaliser des simulations en tenant compte de toute les incertitudes, ce qui représente un atout pour le contrôle prédictif. Nous avons également étudié les capacités des méthodes séquentielles de Monte Carlo pour estimer simultanément les états et les paramètres d’un système. Une adaptation de la méthode de prédiction d’erreur récursive, dans une stratégie séquentielle de Monte Carlo, est proposée et comparée à une méthode de la littérature. Les méthodes séquentielles peuvent être utilisées pour identifier un premier modèle et fournir des informations sur la structure du modèle sélectionnée pendant que les données sont collectées. Par la suite, le modèle peut être amélioré si besoin, en utilisant le jeu de données et une méthode itérative
The European Union's strategy for achieving the climate targets is to progressively increase the share of renewable energy in the energy mix and to use energy more efficiently from production to final consumption. This requires measuring the energy performance of buildings and associated systems, independently of weather conditions and user behavior, in order to provide efficient and adapted retrofitting solutions. It also requires knowing the energy demand in order to anticipate energy production and storage (demand response). The estimation of building energy demand and the estimation of the energy performance of buildings share a common scientific challenge: the experimental identification of a physical model of the building's intrinsic behavior. Grey-box models, determined from first principles, and black-box models, determined heuristically, can describe the same physical process. Relations between the physical and mathematical parameters exist if the black-box structure is chosen such that it matches the physical one. To find the best model representation, we propose to use Monte Carlo simulations for analyzing the propagation of errors in the different model transformations, and factor prioritization for ranking the parameters according to their influence. The obtained results show that identifying the parameters on the state-space representation is the better choice. Nonetheless, the physical information determined from the estimated parameters is reliable only if the model structure is invertible and the data are informative enough. We show how an identifiable model structure can be chosen, in particular with the help of the profile likelihood. Experimental identification consists of three phases: model selection, identification and validation. These three phases are detailed on a real house experiment using both frequentist and Bayesian frameworks. More specifically, we propose an efficient Bayesian calibration to estimate the parameter posterior distributions, which allows simulations that take all the uncertainties into account and is therefore suitable for model predictive control. We have also studied the capability of sequential Monte Carlo methods to estimate the states and parameters simultaneously. An adaptation of the recursive prediction error method into a sequential Monte Carlo framework is proposed and compared to a method from the literature. Sequential methods can be used to provide a first model fit and insights into the selected model structure while the data are being collected. Afterwards, the first model fit can be refined if necessary by using iterative methods on the full batch of data.
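A minimal example of the grey-box models discussed above is a single-resistance, single-capacitance (1R1C) building model; the explicit-Euler simulation below uses invented parameter values purely to illustrate the state-space structure whose parameters such identification methods estimate:

```python
import numpy as np

# 1R1C grey-box model: C * dT/dt = (T_out - T) / R + Q_heating
R = 0.02          # thermal resistance [K/W]   (assumed value)
C = 5e6           # thermal capacitance [J/K]  (assumed value)
dt = 600.0        # time step [s]

hours = np.arange(0, 48, dt / 3600.0)
T_out = 5.0 + 5.0 * np.sin(2 * np.pi * hours / 24.0)                 # outdoor temperature [degC]
Q = np.where((hours % 24 > 6) & (hours % 24 < 22), 500.0, 0.0)       # heating power [W]

T = np.empty_like(hours)
T[0] = 18.0
for k in range(len(hours) - 1):
    T[k + 1] = T[k] + dt / C * ((T_out[k] - T[k]) / R + Q[k])        # explicit Euler step

print(T.min(), T.max())
```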
APA, Harvard, Vancouver, ISO, and other styles
21

Pippig, Michael. "Massively Parallel, Fast Fourier Transforms and Particle-Mesh Methods." Doctoral thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-197359.

Full text
Abstract:
The present thesis provides a modularized view on the structure of fast numerical methods for computing Coulomb interactions between charged particles in three-dimensional space. Thereby, the common structure is given in terms of three self-contained algorithmic frameworks that are built on top of each other, namely fast Fourier transform (FFT), nonequispaced fast Fourier transform (NFFT) and NFFT based particle-mesh methods (P²NFFT). For each of these frameworks algorithmic enhancement and parallel implementations are presented with special emphasis on scalability up to hundreds of thousands of parallel processes. In the context of FFT massively parallel algorithms are composed from hardware adaptive low level modules provided by the FFTW software library. The new algorithmic NFFT concepts include pruned NFFT, interlacing, analytic differentiation, and optimized deconvolution in Fourier space with respect to a mean square aliasing error. Enabled by these generalized concepts it is shown that NFFT provides a unified access to particle-mesh methods. Especially, mixed-periodic boundary conditions are handled in a consistent way and interlacing can be incorporated more efficiently. Heuristic approaches for parameter tuning are presented on the basis of thorough error estimates
Die vorliegende Dissertation beschreibt einen modularisierten Blick auf die Struktur schneller numerischer Methoden für die Berechnung der Coulomb-Wechselwirkungen zwischen Ladungen im dreidimensionalen Raum. Die gemeinsame Struktur ist geprägt durch drei selbstständige und auf einander aufbauenden Algorithmen, nämlich der schnellen Fourier-Transformation (FFT), der nicht äquidistanten schnellen Fourier-Transformation (NFFT) und der NFFT-basierten Teilchen-Gitter-Methode (P²NFFT). Für jeden dieser Algorithmen werden Verbesserungen und parallele Implementierungen vorgestellt mit besonderem Augenmerk auf massiv paralleler Skalierbarkeit. Im Kontext der FFT werden parallele Algorithmen aus den Hardware adaptiven Modulen der FFTW Softwarebibliothek zusammengesetzt. Die neuen NFFT-Konzepte beinhalten abgeschnittene NFFT, Versatz, analytische Differentiation und optimierte Entfaltung im Fourier-Raum bezüglich des mittleren quadratischen Aliasfehlers. Mit Hilfe dieser Verallgemeinerungen bietet die NFFT einen vereinheitlichten Zugang zu Teilchen-Gitter-Methoden. Insbesondere gemischt periodische Randbedingungen werden einheitlich behandelt und Versatz wird effizienter umgesetzt. Heuristiken für die Parameterwahl werden auf Basis sorgfältiger Fehlerabschätzungen angegeben
APA, Harvard, Vancouver, ISO, and other styles
22

Kelly, Jodie. "Topics in the statistical analysis of positive and survival data." Thesis, Queensland University of Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
23

Nestler, Franziska. "Efficient Computation of Electrostatic Interactions in Particle Systems Based on Nonequispaced Fast Fourier Transforms." Universitätsverlag der Technischen Universität Chemnitz, 2017. https://monarch.qucosa.de/id/qucosa%3A23376.

Full text
Abstract:
The present thesis is dedicated to the efficient computation of electrostatic interactions in particle systems, which is of great importance in the field of molecular dynamics simulations. In order to compute the therefor required physical quantities with only O(N log N) arithmetic operations, so called particle-mesh methods make use of the well-known Ewald summation approach and the fast Fourier transform (FFT). Typically, such methods are able to handle systems of point charges subject to periodic boundary conditions in all spatial directions. However, periodicity is not always desired in all three dimensions and, moreover, also interactions to dipoles play an important role in many applications. Within the scope of the present work, we consider the particle-particle NFFT method (P²NFFT), a particle-mesh approach based on the fast Fourier transform for nonequispaced data (NFFT). An extension of this method for mixed periodic as well as open boundary conditions is presented. Furthermore, the method is appropriately modified in order to treat particle systems containing both charges and dipoles. Consequently, an efficient algorithm for mixed charge-dipole systems, that additionally allows a unified handling of various types of periodic boundary conditions, is presented for the first time. Appropriate error estimates as well as parameter tuning strategies are developed and verified by numerical examples.
Die vorliegende Arbeit widmet sich der Berechnung elektrostatischer Wechselwirkungen in Partikelsystemen, was beispielsweise im Bereich der molekulardynamischen Simulationen eine zentrale Rolle spielt. Um die dafür benötigten physikalischen Größen mit lediglich O(N log N) arithmetischen Operationen zu berechnen, nutzen sogenannte Teilchen-Gitter-Methoden die Ewald-Summation sowie die schnelle Fourier-Transformation (FFT). Typischerweise können derartige Verfahren Systeme von Punktladungen unter periodischen Randbedingungen in allen Raumrichtungen handhaben. Periodizität ist jedoch nicht immer bezüglich aller drei Dimensionen erwünscht. Des Weiteren spielen auch Wechselwirkungen zu Dipolen in vielen Anwendungen eine wichtige Rolle. Zentraler Gegenstand dieser Arbeit ist die Partikel-Partikel-NFFT Methode (P²NFFT), ein Teilchen-Gitter-Verfahren, welches auf der schnellen Fouriertransformation für nichtäquidistante Daten (NFFT) basiert. Eine Erweiterung dieses Verfahrens auf gemischt periodische sowie offene Randbedingungen wird vorgestellt. Außerdem wird die Methode für die Behandlung von Partikelsystemen, in denen sowohl Ladungen als auch Dipole vorliegen, angepasst. Somit wird erstmalig ein effizienter Algorithmus für gemischte Ladungs-Dipol-Systeme präsentiert, der zusätzlich die Behandlung sämtlicher Arten von Randbedingungen mit einem einheitlichen Zugang erlaubt. Entsprechende Fehlerabschätzungen sowie Strategien für die Parameterwahl werden entwickelt und anhand numerischer Beispiele verifiziert.
APA, Harvard, Vancouver, ISO, and other styles
24

Промович, Юрій Бориславович, Юрий Бориславович Промович, and Y. B. Promovych. "Математичне моделювання струму в об’єктах з неоднорідностями та методи їх біполярної електроімпедансної томоґрафії з підвищеною точністю." Thesis, Тернопільський національний технічний університет ім. Івана Пулюя, 2013. http://elartu.tntu.edu.ua/handle/123456789/2393.

Full text
Abstract:
Роботу виконано в Тернопільському національному технічному університеті імені Івана Пулюя, Міністерства освіти і науки України. Захист відбувся в 2013 р. в на засіданні спеціалізованої вченої ради К 58.052.01 в Тернопільському національному технічному університеті імені Івана Пулюя (46001, м. Тернопіль, вул. Руська, 56, ауд. 79). З дисертацією можна ознайомитися у науково-технічній бібліотеці Тернопільського національного технічного університету імені Івана Пулюя (46001, м. Тернопіль, вул. Руська, 56).
В дисертації розв’язано наукову задачу удосконалення математичної моделі траєкторій струму в м’яких тканинах з новоутвореннями для отримання достатньої точності реконструкції розподілу електричної провідності за даними біполярної ЕІТ. Для цього використано апріорні відомості про параметри тканин, а також введено поправку систематичної похибки вимірювання напруг. Встановлено, що відомі методи реконструкції розподілу провідності, які використовують зворотне проектування, не враховують взаємодії електричного струму з неоднорідним за провідністю середовищем. Для біполярної електроімпедансної томоґрафії побудовано метод реконструкції зображення, який полягає у зворотному проектуванні проекційних даних уздовж ліній максимальної густини електричного струму. Також побудовано модель систематичної похибки вимірювання електричного імпедансу томоґрафом для формування поправки, ефективність застосування якої підтверджена на реальних даних ТЕ. Метод реконструкції та модель систематичної похибки верифіковано з використанням імітаційної моделі та експериментального макета системи для електроімпедансної томоґрафії, побудованого на кафедрі «Біотехнічні системи» ТНТУ. Математичні моделі застосовано при побудові алґоритмів реконструкції, натурного та імітаційного моделювання ЕІТ.
В диссертации решено научную задачу усовершенствования математической модели траекторий тока в мягких тканях с новообразованиями с целью получения достаточной точности реконструкции распределения электрической плотности за данными биполярной электроимпедансной томографии (ЭИТ). Для этого использовано априорные данные о параметрах тканей, а также введено поправку систематической ошибки измерения напряжений. Установлено, что известные методы реконструкции, которые используют интегральные преобразования, не учитывают взаимодействия электрического тока с неоднородной за проводимостью средой. Для биполярной ЭИТ построено метод реконструкции изображения, в котором обратное проецирование осуществляется вдоль линий максимальной плотности электрического тока. Также построено математическую модель поправки систематической ошибки измерения электрического импеданса томографом для формирования поправки, эффективность использования которой подтверждена на реальных данных ТЭ. Для метода реконструкции и модели систематической ошибки провели верификацию с использованием компьютерной имитационной модели и экспериментального макета системы для ЭИТ, разработанного на кафедре «Биотехнические системы» ТНТУ. Математические модели использовано при построении алгоритмов реконструкции и имитационного моделирования ЭИТ.
The dissertation is devoted to improving the methods and means of mathematical and computer modelling of image reconstruction in bipolar electrical impedance tomography (EIT). For bipolar EIT an improved image reconstruction method is proposed, based on back projection along the lines of maximal electric current density. The reconstruction method can be divided into three stages. The first stage is the construction of the electric potential field for an empirical conductivity distribution; the potential for each electrode pair is found from the corresponding boundary-value problem for the governing differential equation. In the second stage, the line of maximal electric current density is built for every electrode pair; a variational method is used to find it. Along this line the dissipated power is maximal and, it is assumed, determines the potential difference between the electrode pair. The third stage consists of filtering the measured data and back projecting them onto the region of interest. A mathematical model of the systematic error of the tomograph's electrical impedance measurement is also developed. The measurement error in EIT contains random and systematic components: the random component is caused by imperfect contact between the electrodes and the surface of the conducting body, while the systematic component stems from the hardware arrangement of the tomograph's measuring transducer. As a rule, every electrode is connected to the measuring transducer of the impedance tomograph through one key of a multiplexer. When the resistance of the conducting body is approximately equal to the resistance of an open multiplexer channel, this channel resistance becomes a substantial source of systematic error. A single realisation of the tomographic experiment in calibration mode is treated as a bounded stochastic sequence of observed resistance values for each pair of multiplexer keys; an adequate model of the signals from synchronous multiplexer systems is a stochastic sequence of the class used in the energy theory of stochastic signals. The estimate of the mathematical expectation of its stationary component serves as the correction functional for reducing the systematic error in the tomographic experiment. Using the energy theory of stochastic signals for the in-phase analysis of an ensemble of tomographic experiment realisations, an error signal is built and used as a negative-feedback element for the input circuit of the impedance tomograph. The efficiency of the mathematical model is confirmed on real tomographic experiment (TE) data. The reconstruction method and the systematic error model are verified using a simulation model and an experimental prototype of an EIT system. The simulation uses data generated from a test conductivity distribution of a flat section of a conducting body; its result is a sequence of voltage drops for each formally defined pair of measuring electrodes. The experimental prototype was designed at the Department of Biotechnical Systems of the Ternopil Ivan Puluj National Technical University. The constructed mathematical models are used in the reconstruction algorithms and in the EIT simulation.
APA, Harvard, Vancouver, ISO, and other styles
25

Helwig, Wolfram Hugo. "Multipartite Entanglement: Transformations, Quantum Secret Sharing, Quantum Error Correction." Thesis, 2014. http://hdl.handle.net/1807/44114.

Full text
Abstract:
Most applications in quantum information processing make either explicit or implicit use of entanglement. It is thus important to have a good understanding of entanglement and the role it plays in these protocols. However, especially when it comes to multipartite entanglement, there still remain a lot of mysteries. This thesis is devoted to getting a better understanding of multipartite entanglement, and its role in various quantum information protocols. First, we investigate transformations between multipartite entangled states that only use local operations and classical communication (LOCC). We mostly focus on three qubit states in the GHZ class, and derive upper and lower bounds for the successful transformation probability between two states. We then focus on absolutely maximally entangled (AME) states, which are highly entangled multipartite states that have the property that they are maximally entangled for any bipartition. With them as a resource, we develop new parallel teleportation protocols, which can then be used to implement quantum secret sharing (QSS) schemes. We further prove the existence of AME states for any number of parties, if the dimension of the involved quantum systems is chosen appropriately. An equivalence between threshold QSS schemes and AME states shared between an even number of parties is established, and further protocols are designed, such as constructing ramp QSS schemes and open-destination teleportation protocols with AME states as a resource. As a framework to work with AME states, graph states are explored. They allow for efficient bipartite entanglement verification, which makes them a promising candidate for the description of AME states. We show that for all currently known AME states, absolutely maximally entangled graph states can be found, and we were even able to use graph states to find a new AME state for seven three-dimensional systems (qutrits). In addition, the implementation of QSS schemes from AME states can be conveniently described within the graph state formalism. Finally, we use the insight gained from entanglement in QSS schemes to derive necessary and sufficient conditions for quantum erasure channel and quantum error correction codes that satisfy the quantum Singleton bound, as these codes are closely related to ramp QSS schemes. This provides us with a very intuitive approach to codes for the quantum erasure channel, purely based on the entanglement required to protect information against losses by use of the parallel teleportation protocol.
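For orientation, the quantum Singleton bound mentioned at the end of the abstract is the standard bound below; codes that saturate it (quantum MDS codes) are the ones connected to ramp QSS schemes.

```latex
% Quantum Singleton bound for an ((n, K = q^k, d)) code over q-dimensional carriers:
% the distance d is limited by how many of the n carriers encode the k logical systems.
\[
  n - k \;\ge\; 2\,(d - 1)
\]
```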
APA, Harvard, Vancouver, ISO, and other styles
26

"On general error cancellation based logic transformations: the theory and techniques." Thesis, 2011. http://library.cuhk.edu.hk/record=b6075487.

Full text
Abstract:
Yang, Xiaoqing.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (leaves 113-120).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
APA, Harvard, Vancouver, ISO, and other styles
27

Liang, Zhi-Hong, and 梁志鴻. "Design of Timing-Error-Tolerant Digital Filters for Various Filter Transformations." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/4c2ycw.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Electrical Engineering
102
In modern VLSI design, especially system-on-chip design, the number of transistors in a single chip keeps increasing thanks to advances in chip manufacturing technology. However, as the feature size of modern chips shrinks, circuits become more and more susceptible to noise, wire delay, and soft errors. Among the main problems are timing errors, which are caused by process variation, device aging, and similar effects, and which can lead to system failures. Hence, it is an important issue to solve the timing error problem while maintaining the performance of a chip. This thesis proposes various transformation designs for VLSI digital filters that tolerate multiple timing errors. We have developed a design methodology for VLSI digital filters which can detect and tolerate multiple timing errors on-line. In order to achieve high filter performance, different transformations for various digital filter designs are applied. According to the design requirements, we choose the appropriate transformation for the filter to improve performance while still tolerating multiple timing errors. We have applied our techniques to two example digital filter designs, an FIR filter and an IIR filter. Four examples for each circuit are studied and evaluated. We have implemented them using a cell-based design flow on a TSMC manufacturing technology. The implementation results show that our designs achieve high performance and tolerance of multiple timing errors for digital filters at reasonable cost.
APA, Harvard, Vancouver, ISO, and other styles
28

Sachan, Kapil. "Constrained Adaptive Control of Nonlinear Systems with Application to Hypersonic Vehicles." Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4415.

Full text
Abstract:
Constraints on inputs, outputs, and states are evident in most practical systems. Explicitly incorporating these constraints into the control design process generally leads to superior performance. Therefore, considering different types of constraints, several robust constrained adaptive nonlinear control designs are proposed in this thesis for different classes of uncertain nonlinear systems. In the first part of this thesis, a barrier Lyapunov function (BLF) based state-constrained adaptive control design is presented for two different classes of uncertain nonlinear systems, namely nonlinear systems with relative degree one and Euler-Lagrange systems. In the adaptive control synthesis, a neural network based approximation of the system dynamics is constructed to capture the model uncertainties, and a tracking controller is then designed to achieve the desired tracking response. The weights of the neural network are updated using a Lyapunov-stable weight update rule. It is shown that the closed-loop states of the system remain bounded within the imposed constraints and asymptotically converge to a predefined domain. In the second part of this thesis, an error transformation based state-constrained adaptive control design is proposed for generic second-order nonlinear systems with state and input constraints, model uncertainties, and external disturbances. A new error transformation is proposed to enforce state constraints; a Nussbaum gain is used to impose the desired input constraints, and radial basis function neural networks (RBFNNs) are utilized to approximate modeling uncertainties. In this control design philosophy, the imposed constraints are first converted into error constraints and then, using the proposed error transformation, the constrained system is transformed into an equivalent unconstrained system. Next, a stable adaptive controller is designed for the unconstrained system, which indirectly establishes the stability of the constrained system without violation of the imposed constraints. The closed-loop stability of the system is proven using Lyapunov stability theory. In the third part of this thesis, an adaptive controller is derived for a feedback-linearizable MIMO nonlinear system subject to time-varying output constraints, input constraints, unknown control directions, modeling uncertainties, and external disturbances. In this control design, another novel error transformation is used to enforce the time-varying output constraints, and a Nussbaum gain is used to handle input constraints and unknown control directions. One feature of the proposed adaptive controller is that only a single variable is required to approximate the uncertainties of the whole system, which minimizes the computational requirement. Another feature is that zero tracking error is achieved in the presence of unstructured uncertainties and external disturbances. The aforementioned control designs are scalable and can be reduced to output-constrained control and unconstrained control by changing the nature of the error-dependent controller gain matrices. The controllers can also be used to constrain the closed-loop error of the system directly, thereby minimizing error transients. Furthermore, the proposed controllers give the flexibility to impose independent constraints on the desired components of the system states and lead to easily implementable on-board closed-form control solutions.
A two-link robot manipulator problem and other benchmark problems are used to demonstrate the effectiveness of the proposed control designs through extensive simulation studies. In the last part of this thesis, a real-life application problem is selected, where the objective is to effectively control a hypersonic flight vehicle during its cruise. The problem is quite challenging as it demands narrow bounds on both input and state in the presence of large modeling uncertainties. The control objective is achieved by using the proposed BLF-based constrained adaptive controller. A three-loop architecture is proposed to synthesize this adaptive flight controller, which ensures that the vehicle velocity, attitude, and angular body rates remain within the prescribed bounds. The proposed adaptive control leads to quick learning of the unknown function in the system dynamics with much smaller transients. It also ensures that the imposed state constraints are not violated at any point in time. The effectiveness of the control design is illustrated by carrying out a large number of Monte-Carlo-like randomized high-fidelity six-degree-of-freedom (Six-DOF) simulation studies for a winged-cone hypersonic vehicle. The Six-DOF model was constructed by collecting the necessary aerodynamic and inertial data of the vehicle, found scattered across the literature, and integrating them into the airframe equations of motion. Simulation results show that the proposed controller is quite robust and effectively controls the vehicle in the presence of significant modeling uncertainties.
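A minimal sketch, assuming a scalar tracking error e under a symmetric constraint |e| < k_b, of the two ingredients the abstract builds on: a log-type barrier Lyapunov function and a tan-type error transformation that maps the constrained error to an unconstrained variable. The specific functional forms and names here are illustrative; the thesis uses more general constructions together with neural-network adaptation.

```python
import numpy as np

def barrier_lyapunov(e, k_b):
    """Log-type barrier Lyapunov function V(e) = 0.5 * ln(k_b^2 / (k_b^2 - e^2)).

    V grows without bound as |e| approaches k_b, so keeping V bounded keeps the
    tracking error strictly inside the constraint |e| < k_b.
    """
    assert abs(e) < k_b, "error must start inside the constraint set"
    return 0.5 * np.log(k_b ** 2 / (k_b ** 2 - e ** 2))

def constrained_to_unconstrained(e, k_b):
    """One common error transformation: map the constrained error e in (-k_b, k_b)
    to an unconstrained variable s; a controller that keeps s bounded then
    indirectly enforces the original constraint."""
    return np.tan(np.pi * e / (2.0 * k_b))

# usage: both quantities blow up as the error approaches the bound k_b = 1
for e in [0.0, 0.5, 0.9, 0.99]:
    print(e, barrier_lyapunov(e, k_b=1.0), constrained_to_unconstrained(e, k_b=1.0))
```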
APA, Harvard, Vancouver, ISO, and other styles
29

Wu, Jyh Horng, and 吳志鴻. "An Error-Free Euclidean Distance Transformation." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/77575564889671198189.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Institute of Electrical Engineering
83
Distance transformation is a fundamental technique for the application fields of image understanding and computer vision. Several important characteristics in image analysis, such as shape factors, skeletons, and the medial axis, are based on the distance transformation computation. Euclidean distance is by far the most natural and realistic distance and is thus the most demanded distance for the above applications. Discrete Euclidean distance raises a different geometric issue from the city-block or chessboard distance, since the Euclidean distance does not agree with the length measured along the grid points as those two distances do. As a result, an absolutely accurate Euclidean distance transformation has previously been obtainable only from a global approach, which is time-consuming and memory-costly. In contrast, the city-block and chessboard distances can be computed by a local approach, which usually takes two scans: one top-down and the other bottom-up. In this research, an algorithm for computing the Euclidean distance is developed within this computation structure for efficiency. The success of our local approach to Euclidean distance transformation is based on the design of a candidate look-up table, which combines Euclidean geometry with a local candidate table implementation. The geometric analysis specifies the global candidates that need to be considered for the shortest distance, and these global candidates are transformed into a local array by a vertical projection. In addition to the fast computation of our local approach, the memory space for the look-up table is also economical, thanks to a hashing-function concept that reduces a four-variable look-up table to a two-variable one. Finally, a proof by mathematical induction is presented to guarantee the absolute accuracy of our Euclidean distance computation.
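The two-scan structure the abstract refers to can be illustrated with the exact city-block distance transform below (Python); the thesis's contribution, the candidate look-up table that makes the same forward/backward scan pattern yield exact Euclidean distances, is not reproduced here.

```python
import numpy as np

def cityblock_distance_transform(binary):
    """Two-scan city-block (L1) distance transform.

    binary: 2D array, nonzero = feature (object) pixels.
    Returns, for every pixel, the L1 distance to the nearest feature pixel.
    The thesis extends this forward/backward scan structure to exact Euclidean
    distances by adding a candidate look-up table; that extension is not shown here.
    """
    h, w = binary.shape
    INF = h + w  # larger than any possible L1 distance inside the image
    d = np.where(binary != 0, 0, INF).astype(int)

    # forward scan: propagate distances from the top-left neighbours
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 1)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 1)

    # backward scan: propagate distances from the bottom-right neighbours
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 1)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 1)
    return d

# usage: distance to a single feature pixel in a 5x5 image
img = np.zeros((5, 5), dtype=int)
img[2, 2] = 1
print(cityblock_distance_transform(img))
```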
APA, Harvard, Vancouver, ISO, and other styles
30

Chang, Hon-Hang, and 張閎翰. "Irregular Image Transformation and Balanced Error Distribution for Video Seam Carving." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/66233893384606489158.

Full text
Abstract:
Doctoral dissertation
National Central University
Department of Computer Science and Information Engineering
105
In recent years, more and more image retargeting techniques have been proposed to facilitate our daily life, in particular those based on seam carving, warping, or a combination of the two. Although these techniques are rather sophisticated, they only work for a specific pattern during retargeting. For example, they can only retarget the source picture into the same rectangular shape, and the result cannot easily be reshaped into a circle, a polygon, or other shapes. This work focuses on creating a graphics editing system, named CMAIR (Content and Mask-Aware Image Retargeting), for image retargeting, which can retarget source images into differently shaped targets so as to highlight the salient objects of primary interest. CMAIR effectively supports the removal of unimportant pixels and frames as many surrounding objects inside the provided mask as possible. We propose a unique irregular interpolation method to produce four possible target images, together with an evaluation mechanism that selects the best candidate image as the final output based on image saliency. The results show that not only can the source image be placed into different target mask shapes, but the salient objects are also retained and highlighted as much as possible, so that they become clearer to the eye. In addition, a video retargeting method named BED is proposed in this study. The proposed algorithm focuses on maintaining the structure of straight lines and irregularly shaped objects without deforming complex image contents, which may be altered by traditional seam carving of complex images or videos. The proposed mechanism also maintains visual continuity so that the resulting video will not look shaky due to sudden changes in the background. Practical applicability of the proposed method was tested using both regular videos and special videos that contain vanishing lines (i.e., perspective effects). Experimental results demonstrate that the proposed CMAIR can convert a rectangular image into another, irregularly shaped image while preserving all the salient objects of the targeted images. In our video results using BED, the approach not only resizes the video while retaining important information, but also maintains the structural properties of objects in various kinds of videos. The demonstration websites at http://video.minelab.tw/DETS/EDSeamCarving/ and http://video.minelab.tw/DETS/VRDSF/ provide comparison results that illustrate the contribution of our method.
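For context, the seam carving primitive on which CMAIR and BED build can be sketched as a dynamic-programming search for the minimum-energy vertical seam. The standard formulation below, with an illustrative energy map, does not include the irregular-mask interpolation or the balanced error distribution introduced in the thesis.

```python
import numpy as np

def find_vertical_seam(energy):
    """Dynamic-programming search for the minimum-energy vertical seam.

    energy: 2D array of per-pixel energy (e.g. gradient magnitude).
    Returns one column index per row; removing those pixels shrinks the
    image width by one while avoiding high-energy (salient) content.
    """
    h, w = energy.shape
    cost = energy.astype(float).copy()
    back = np.zeros((h, w), dtype=int)

    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            best = lo + np.argmin(cost[y - 1, lo:hi])   # cheapest of the (up to) 3 parents
            back[y, x] = best
            cost[y, x] += cost[y - 1, best]

    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):                      # trace the seam back up
        seam[y] = back[y + 1, seam[y + 1]]
    return seam

# usage: a synthetic energy map with a cheap column near x = 1
energy = np.ones((4, 5))
energy[:, 1] = 0.1
print(find_vertical_seam(energy))   # expected: a seam staying around column 1
```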
APA, Harvard, Vancouver, ISO, and other styles
31

Amirian, Ghasem [Verfasser]. "Transformation of tracking error in parallel kinematic machining / von Ghasem Amirian." 2008. http://d-nb.info/991997352/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Chung, Leng Hung, and 鐘年宏. "THE COORDINATE TRANSFORMATION AND ERROR ANALYSIS COMPENSATION FOR FIVE FACE MACHINING CENTER." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/39264222687526367409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Lo, Chien-Tai, and 羅建台. "Utilizing Adaptive Time-Variant Transformation Model for Mobile Platform Positioning Error Adjustment." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/52609724022850632394.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Civil Engineering
102
A mobile mapping system (MMS) utilizes global navigation satellite system (GNSS) and inertial navigation system (INS) techniques and thus makes a direct georeferencing solution possible everywhere along its surveyed path. It is capable of acquiring a vast amount of spatial information in an efficient manner and is adopted in a wide variety of applications. Nevertheless, when the GNSS signal is obstructed, the positioning solution can only rely on the INS observables, which accumulate significant errors over time. In order to compensate for the position error in a GNSS-denied area, a time-variant adjustment model is developed in this study. Moreover, an adaptive algorithm is proposed to improve the efficiency and reliability of the error adjustment analysis. Based on the results of a case study, it is demonstrated that the positioning error of a mobile mapping platform in an urban area can reach a level of several meters due to GNSS signal obstructions. However, when the proposed approach is applied, the error can be significantly reduced to the centimeter level. As a result, the applicability of the mobile mapping technique can be extended to GNSS-hostile areas where current methods are limited.
APA, Harvard, Vancouver, ISO, and other styles
34

Ming-HsunFu and 符明勛. "Non-iterative Method of Seven-Parameter Similarity Transformation and Gross Error Detection." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/jtak2w.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, Hung-Shin, and 李鴻欣. "Classification Error-based Linear Discriminative Feature Transformation for Large Vocabulary Continuous Speech Recognition." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/6w7r9s.

Full text
Abstract:
Master's thesis
National Taiwan Normal University
Graduate Institute of Computer Science and Information Engineering
97
The goal of linear discriminant analysis (LDA) is to seek a linear transformation that projects an original data set into a lower-dimensional feature subspace while simultaneously retaining geometrical class separability. However, LDA cannot always guarantee better classification accuracy. One possible reason is that its criterion is not directly associated with the classification error rate, so it does not necessarily accommodate itself to the allocation rule governed by a given classifier, such as that employed in automatic speech recognition (ASR). In this thesis, we extend classical LDA by leveraging the relationship between the empirical phone classification error rate and the Mahalanobis distance for each phone class pair. To this end, we modify the original between-class scatter from a measure of the Euclidean distance to the pairwise empirical classification accuracy for each class pair, while preserving the lightweight solvability and making no distributional assumption, just as LDA does. Furthermore, we also present a novel discriminative linear feature transformation, named generalized likelihood ratio discriminant analysis (GLRDA), based on the likelihood ratio test (LRT). It seeks a lower-dimensional feature subspace by making the most confusable situation, described by the null hypothesis, as unlikely to happen as possible, without the homoscedasticity assumption on class distributions. We also show that classical linear discriminant analysis (LDA) and its well-known extension, heteroscedastic linear discriminant analysis (HLDA), are just two special cases of our proposed method. The empirical class confusion information can be further incorporated into GLRDA for better recognition performance. Experimental results demonstrate that our approaches yield moderate improvements over LDA and other existing methods, such as HLDA, on a Chinese large vocabulary continuous speech recognition (LVCSR) task.
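As a baseline for the extension described above, a minimal sketch of classical LDA is given below; the thesis replaces the Euclidean between-class term with pairwise empirical classification accuracies, which is not shown here, and the toy data are illustrative.

```python
import numpy as np

def lda_projection(X, y, n_components):
    """Classical LDA: maximize between-class over within-class scatter.

    X: (N, D) features, y: (N,) integer class labels.
    Returns a (D, n_components) projection matrix.  The thesis modifies the
    between-class scatter using pairwise empirical classification accuracies;
    that modification is not reproduced here.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    D = X.shape[1]
    Sw = np.zeros((D, D))   # within-class scatter
    Sb = np.zeros((D, D))   # between-class scatter

    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)

    # generalized eigenproblem Sb v = lambda Sw v, solved via the pseudo-inverse
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real

# usage: project 3-D toy data from two classes onto one discriminant axis
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
W = lda_projection(X, y, n_components=1)
print((X @ W).shape)   # (100, 1)
```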
APA, Harvard, Vancouver, ISO, and other styles
36

"General transformation model with censoring, time-varying covariates and covariates with measurement errors." Thesis, 2008. http://library.cuhk.edu.hk/record=b6074722.

Full text
Abstract:
Because of the measuring instrument or the biological variability, many studies with survival data involve covariates which are subject to measurement error. In such cases, the naive estimates are usually biased. In this thesis, we propose a bias corrected estimate of the regression parameter for the multinomial probit regression model with covariate measurement error. Our method handles the case when the response variable is subject to interval censoring, a frequent occurrence in many medical and health studies where patients are followed periodically. A sandwich estimator for the variance is also proposed. Our procedure can be generalized to general measurement error distribution as long as the first four moments of the measurement error are known. The results of extensive simulations show that our approach is very effective in eliminating the bias when the measurement error is not too large relative to the error term of the regression model.
Censoring is an intrinsic part in survival analysis. In this thesis, we establish the asymptotic properties of MMLE to general transformation models when data is subject to right or left censoring. We show that MMLE is not only consistent and asymptotically normal, but also asymptotically efficient. Thus our asymptotic results give a definite answer to a long-term argument on the efficiency of the maximum marginal likelihood estimator. The difficulty in establishing these results comes from the fact that the score function derived from the marginal likelihood does not have ordinary independence or martingale structure. We will develop a discretization method in establishing our results. As a special case, our results imply the consistency, asymptotic normality and efficiency for the multinomial probit regression, a popular alternative to the Cox regression model.
General transformation model is an important family of semiparametric models in survival analysis which generalizes the linear transformation model. It not only includes typical Cox regression model, proportional odds model and multinomial probit regression model, but also includes heteroscedastic hazard regression model, general heteroscedastic rank regression model and frailty model. By maximizing the marginal likelihood, a parameter estimation (MMLE) can be obtained with the property that it avoids estimating the baseline survival function and censoring distribution, and such property is enjoyed by the Cox regression model. In this thesis, we study three areas of generalization of general transformation models: main response variable is subject to censoring, covariates are time-varying and covariates are subject to measurement error.
In medical studies, the covariates are not always the same during the whole period of study; covariates may change at certain time points. For example, at the beginning, n patients receive drug A as treatment. After a certain percentage of patients have died, the investigator might add a new drug B for the remaining patients. This corresponds to the case of time-varying covariates. In this thesis, we propose an estimation procedure for the parameters in the general transformation model with this type of time-varying covariates. The results of extensive simulations show that our approach works well.
Wu, Yueqin.
Adviser: Ming Gao Gu.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3589.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 74-78).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
APA, Harvard, Vancouver, ISO, and other styles
37

Genga, Yuval Odhiambo. "Modifications to the symbol wise soft input parity check transformation decoding algorithm." Thesis, 2016. http://hdl.handle.net/10539/20590.

Full text
Abstract:
A dissertation submitted in fulfilment of the requirements for the degree of Master of Science in the Centre for Telecommunication Access and Services, School of Electrical and Information Engineering, University of the Witwatersrand, Johannesburg, 2016
Reed-Solomon codes are very popular codes used in the field of forward error correction due to their correcting capabilities. Thus, a lot of research has been done dedicated to the development of decoding algorithms for this class of code. [Abbreviated Abstract. Open document to view full version]
APA, Harvard, Vancouver, ISO, and other styles
38

Chang, Chia Lin, and 張嘉麟. "Transformation of Data Points Taken from Different Coordinate Measuring Systems under the Influence of Measuring Error." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/33369304536057600406.

Full text
Abstract:
Master's thesis
Da-Yeh University
Department of Industrial Engineering and Technology Management
95
In many 3D design tasks, computer models cannot be completed in one pass because of the large artifact volume and the measuring limitations of the digitizing machine. To overcome this, RHINO and Microscribe are combined in this research to digitize artifacts of a human head and hand in section-by-section measurements; afterwards, a coordinate transformation is applied to bring the measurements from all sections into one common coordinate system. This is followed by taking the characteristic points on the artifact and their counterparts in the computer model. A distance matrix is formed among the characteristic points for each of the two (the original artifact and the computer model), the difference between the matrices is extracted, and a signal-to-noise (SN) ratio is used as the criterion to evaluate the match between them. The relationship between the SN ratios and the number of characteristic points is also investigated.
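A minimal sketch of the distance-matrix comparison: pairwise distances among characteristic points are computed for the artifact and for the computer model, and the difference is summarized by an SN ratio. The particular smaller-the-better SN formula used here is an assumption, since the abstract does not specify which definition is adopted.

```python
import numpy as np

def pairwise_distance_matrix(points):
    """Distance matrix among characteristic points (rows are 3-D points)."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def match_sn_ratio(artifact_pts, model_pts):
    """Evaluate the artifact/model match from the difference of their distance
    matrices using SN = -10 * log10(mean(e^2)); this smaller-the-better form
    is an assumed choice, not necessarily the one used in the thesis."""
    e = pairwise_distance_matrix(artifact_pts) - pairwise_distance_matrix(model_pts)
    return -10.0 * np.log10(np.mean(e ** 2))

# usage: a near-perfect match gives a large SN ratio (higher is better)
pts = np.random.default_rng(1).normal(size=(6, 3))
noisy = pts + 0.001 * np.random.default_rng(2).normal(size=pts.shape)
print(match_sn_ratio(pts, noisy))
```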
APA, Harvard, Vancouver, ISO, and other styles
39

Chang, Yi Ching, and 張依靜. "Strengthen Geography Transformation and Forgery Resistance of Fragile Watermarking with 3D Model by Combining Error-Correcting Code and Cryptography." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/54956488676099949340.

Full text
Abstract:
Doctoral dissertation
National Chung Hsing University
Department of Computer Science and Engineering
105
Multimedia collectively refers to text, audio, two-dimensional and three-dimensional images, two-dimensional and three-dimensional video, and three-dimensional meshes. As computer technology has advanced, there are more and more cases in which people take existing multimedia from the Internet and claim it as their own work after making a small alteration. Innovation and creativity are important for products; therefore, copyright has become an important issue that needs to be protected, and some literature exists on the subject. The algorithms in this thesis are designed around blind fragile watermarking, utilizing error-correcting codes and cryptography to enhance the attack verification ability. The objective is to protect the copyright of three-dimensional models. In this research, the meshes are taken from a public database. These meshes describe virtual three-dimensional objects in a computer by points, lines, surfaces, triangles, curves, and other geometric primitives combined with color design into polygonal models. The watermark can be used for detecting malicious modification and marking out the attacked area of a stego medium. Therefore, the watermark should be embedded throughout the cover medium as much as possible. However, embedding information makes some alterations to the medium; this is tolerable if it cannot be noticed by human senses. The distortion ratio of each model is shown in the experimental results. This research can be divided into three parts, all designed for protecting polygonal meshes on a computer, and the experimental results show that they are efficient in realistic applications. (a) Data bits are extracted from a cover model and encoded by an error-correcting code. The output of the coding procedure is treated as a watermark, which is embedded into the cover model itself. The advantages are a lower time complexity than previous work, a distortion ratio kept below 10^-6, 100% embedding capacity, and efficient verification. (b) Cryptography in feedback mode is used to encrypt a cover model. A vertex is extracted as the plaintext, and the resulting ciphertext is treated as a watermark, which is embedded in the least significant bits (LSBs) at the sixth decimal place or below after conversion to binary. The advantage is that replacing the LSBs controls the distortion rate. The experimental results show that this algorithm achieves a high watermark embedding rate, high detection efficiency, a low distortion rate, and, moreover, higher security. (c) A three-dimensional model has three coordinates (X, Y, Z). A spherical coordinate system (r, φ, θ) combines vertex, distance, and angle information, and the interaction between these three pieces of information increases the robustness of the algorithm, so that users can moderately modify the appearance of a model in non-malicious ways, such as scanning, zooming, and shifting, which are called affine or geographic transformations. A geographic transformation is a mathematical operation that converts the coordinates of a point in one geographic coordinate system to the coordinates of the same point in another geographic coordinate system. In a computer, a model is recorded in a Cartesian coordinate system, so an input model is converted from the Cartesian coordinate system into the spherical coordinate system at the very beginning.
The encoding algorithms of the first and second parts are then applied, respectively, using a reference vertex. The experimental results show that the spherical coordinate system can resist affine attacks efficiently and still achieve a high malicious-attack detection rate.
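A minimal sketch of the Cartesian-to-spherical conversion used in part (c); the angle conventions and the choice of reference vertex are assumptions, since the abstract does not fix them.

```python
import numpy as np

def cartesian_to_spherical(x, y, z):
    """Convert a vertex (x, y, z) to spherical coordinates (r, phi, theta).

    r     : distance from the origin (or from a chosen reference vertex)
    phi   : azimuth angle in the x-y plane
    theta : polar angle measured from the z axis
    Conventions (angle ranges, reference vertex) vary; the thesis's exact
    convention is not specified here.
    """
    r = np.sqrt(x * x + y * y + z * z)
    phi = np.arctan2(y, x)
    theta = np.arccos(z / r) if r > 0 else 0.0
    return r, phi, theta

def spherical_to_cartesian(r, phi, theta):
    """Inverse conversion, used when writing the watermarked model back out."""
    return (r * np.sin(theta) * np.cos(phi),
            r * np.sin(theta) * np.sin(phi),
            r * np.cos(theta))

# usage: round-trip a vertex
v = (1.0, 2.0, 3.0)
print(spherical_to_cartesian(*cartesian_to_spherical(*v)))   # ~ (1.0, 2.0, 3.0)
```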
APA, Harvard, Vancouver, ISO, and other styles
40

Chang, Neng-Hsuan, and 張能軒. "A Study for Influence of Height Error on 3-D & 2-D Coordinate Transformation Precision in Large Region." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/62017731462052171205.

Full text
Abstract:
Master's thesis
Chung Hsing University
Department of Civil Engineering
99
To explore the effect of height error on the accuracy of three-dimensional and two-dimensional coordinate transformations between TWD97 and TWD67, based on the Taiwan geodetic datum TWD97, this study performs the conversion between TWD97 and TWD67 on second-order satellite control points in central Taiwan according to the Molodensky-Badekas and Bursa-Wolf seven-parameter transformation models. Because TWD67 lacks ellipsoid height information, common points are chosen so that the ellipsoid heights of TWD67 and TWD97 are the same before the conversion. Two methods are applied to compare the ellipsoid heights of the common points. The first method sets the ellipsoid heights of both TWD67 and TWD97 at the common points to zero and then changes the ellipsoid height of TWD67 by a constant amount (±10, ±20, ±25 m). The second method changes the ellipsoid height of TWD67 by random amounts within given ranges (±1-3, ±3-5, ±5-10 m), in order to compare the effect of height variation on the conversion accuracy of the three-dimensional and two-dimensional coordinates between TWD97 and TWD67. The results show that the Molodensky-Badekas and Bursa-Wolf seven-parameter transformation models give the same conversion results. The conversions with equal TWD67 and TWD97 heights, with method one (changing the ellipsoid height of TWD67), and with method two (changing the ellipsoid height within 3-5 m) hardly affect the plane coordinates: the root-mean-square error of (dN, dE) is only about ±1 cm, while the root-mean-square error of dh is approximately the average of the height variation at the common points.
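For reference, the Bursa-Wolf seven-parameter similarity transformation mentioned above can be sketched as follows; the rotation sign convention differs between references, and all parameter values in the usage example are placeholders, not the published TWD97/TWD67 parameters.

```python
import numpy as np

def bursa_wolf(xyz, tx, ty, tz, rx, ry, rz, s_ppm):
    """Seven-parameter (Bursa-Wolf) similarity transformation, small-angle form.

    xyz        : (N, 3) geocentric Cartesian coordinates in the source datum
    tx, ty, tz : translations in metres
    rx, ry, rz : small rotation angles in radians (sign convention varies)
    s_ppm      : scale change in parts per million
    The Molodensky-Badekas variant applies the rotation and scale about a
    centroid instead of the origin; that variant is not shown here.
    """
    t = np.array([tx, ty, tz])
    scale = 1.0 + s_ppm * 1e-6
    R = np.array([[1.0,  rz, -ry],
                  [-rz, 1.0,  rx],
                  [ ry, -rx, 1.0]])
    return t + scale * (xyz @ R.T)

# usage: hypothetical parameters applied to one geocentric point
pts = np.array([[-2956786.0, 5075744.0, 2476176.0]])
print(bursa_wolf(pts, tx=750.0, ty=-350.0, tz=-170.0,
                 rx=1e-6, ry=2e-6, rz=-1e-6, s_ppm=2.0))
```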
APA, Harvard, Vancouver, ISO, and other styles
41

Lai, Yi-Chun, and 賴怡君. "Examining the Effects of Safety-specific Transformational Leadership and Psychological Safety on Safety Voice and Safety Compliance: A Moderating Role of Error Climate." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/s96xf5.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Management Science
106
This study investigated the antecedents of safety voice and clarified how to improve safety compliance. Building on social exchange theory, I proposed that safety-specific transformational leadership increases safety voice, leading to better safety compliance. Psychological safety, as a group-level factor, can be related to enhanced safety voice. Furthermore, drawing on signaling theory, error climate can moderate the relationship between safety voice and safety compliance. To test these assumptions, data were collected from supervisors (N = 23) together with their subordinates (N = 196) within group units of a semiconductor technology company in Hsinchu Science Park (Taiwan). The results were consistent with my hypotheses: (a) safety-specific transformational leadership was significantly and positively related to safety voice; (b) safety voice was positively related to safety compliance; (c) the effect of safety-specific transformational leadership on safety compliance was mediated by safety voice; (d) psychological safety was positively related to safety voice; and (e) the influence of safety voice on safety compliance was moderated by error climate, such that the direct effect of safety voice on safety compliance was more positive when error climate was high than when it was low. Therefore, organizations can work on developing a safe workplace through safety leadership and safety-related climates to achieve the expected safety outcomes.
APA, Harvard, Vancouver, ISO, and other styles
42

Jee, Kangkook. "On Efficiency and Accuracy of Data Flow Tracking Systems." Thesis, 2015. https://doi.org/10.7916/D8MG7P9D.

Full text
Abstract:
Data Flow Tracking (DFT) is a technique broadly used in a variety of security applications such as attack detection, privacy leak detection, and policy enforcement. Although effective, DFT inherits the high overhead common to in-line monitors, which subsequently hinders its adoption in production systems. Typically, the runtime overhead of DFT systems ranges from 3× to 100× when applied to pure binaries, and from 1.5× to 3× when inserted during compilation. Many performance optimization approaches have been introduced to mitigate this problem by relaxing propagation policies under certain conditions, but these typically introduce inaccurate taint tracking that leads to over-tainting or under-tainting. Despite acknowledgement of these performance/accuracy trade-offs, the DFT literature consistently fails to provide insights about their implications. A core reason, we believe, is the lack of established methodologies for understanding accuracy. In this dissertation, we attempt to address both efficiency and accuracy issues. To this end, we begin with libdft, a DFT framework for COTS binaries running atop commodity OSes, and we then introduce two major optimization approaches based on statically and dynamically analyzing program binaries. The first optimization approach extracts the DFT tracking logic and abstracts it using TFA. We then apply classic compiler optimizations to eliminate redundant tracking logic and minimize interference with the target program. As a result, the optimization achieves a 2× speed-up over the baseline performance measured for libdft. The second optimization approach decouples the tracking logic from execution and runs them in parallel, leveraging modern multi-core innovations. We again apply this approach to libdft, which then runs four times as fast while consuming fewer CPU cycles. We then present a generic methodology and tool for measuring the accuracy of arbitrary DFT systems in the context of real applications. With a prototype implementation for the Android framework, TaintMark, we have discovered that TaintDroid's various performance optimizations lead to serious accuracy issues, and that certain optimizations should be removed to vastly improve accuracy at little performance cost. The TaintMark approach is inspired by black-box differential testing principles for detecting inaccuracies in DFTs, but it also addresses numerous practical challenges that arise when applying those principles to real, complex applications. We introduce the TaintMark methodology by using it to understand taint tracking accuracy trade-offs in TaintDroid, a well-known DFT system for Android. While the aforementioned works focus on the efficiency and accuracy issues of DFT systems that dynamically track data flow, we also explore another design choice that statically tracks information flow by analyzing and instrumenting the application source code. We apply this approach to the different problem of integer error detection in order to reduce the number of false alarms.
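To illustrate what a data flow tracking system propagates, here is a toy shadow-map sketch in Python; real DFT systems such as libdft operate at the binary-instruction level, so the class and method names here are purely conceptual and not part of any real API.

```python
class TaintTracker:
    """Toy byte-level data flow tracker: a shadow map records, for each
    address, the set of input sources its current value depends on.
    This is a conceptual illustration only, not how libdft or TaintDroid
    are implemented."""

    def __init__(self):
        self.shadow = {}                      # address -> set of taint labels

    def taint_source(self, addr, label):
        self.shadow[addr] = {label}           # mark data read from an input source

    def propagate_copy(self, dst, src):
        self.shadow[dst] = set(self.shadow.get(src, set()))

    def propagate_binop(self, dst, src1, src2):
        # the result of e.g. an add/xor depends on both operands (may over-taint)
        self.shadow[dst] = self.shadow.get(src1, set()) | self.shadow.get(src2, set())

    def check_sink(self, addr):
        return self.shadow.get(addr, set())   # labels reaching a sensitive sink

# usage: data from "network" flows through a copy and a binary op to a sink
t = TaintTracker()
t.taint_source(0x1000, "network")
t.propagate_copy(0x2000, 0x1000)
t.propagate_binop(0x3000, 0x2000, 0x4000)
print(t.check_sink(0x3000))                  # {'network'}
```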
APA, Harvard, Vancouver, ISO, and other styles
43

Jiao, Yibo. "Compensation-oriented quality control in multistage manufacturing processes." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-08-5961.

Full text
Abstract:
Significant research has recently been initiated to devise control strategies that can predict and compensate for manufacturing errors using so-called explicit Stream-of-Variation (SoV) models that relate process parameters in a Multistage Manufacturing Process (MMP) to product quality. This doctoral dissertation addresses several important scientific and engineering problems that significantly advance the model-based, active control of quality in MMPs. First, we formally introduce and study the new concept of compensability in MMPs, analogous to the concept of controllability in traditional control theory. Compensability in an MMP is introduced as the property denoting one's ability to compensate for errors in the quality characteristics of the workpiece, given the allocation and character of measurements and controllable tooling. The notions of "within-station" and "between-station" compensability are also introduced to describe the ability to compensate for upstream product errors within a given operation or between arbitrarily selected operations, respectively. Previous research has also failed to concurrently utilize the historical and on-line measurements of product key characteristics for active model-based quality control. This dissertation explores the possibilities of merging the well-known Run-to-Run (RtR) quality control methods with model-based feed-forward process control methods. The novel method is applied to the problem of controlling multi-layer overlay errors in lithography processes in semiconductor manufacturing. In this work, we first devise a multi-layer overlay model to describe the introduction and flow of overlay errors from one layer to the next, which is then used to pursue a unified approach to RtR and feedforward compensation of overlay errors in the wafer. Finally, we extend the existing methodologies by considering inaccurately identified noise characteristics in the underlying error flow model. This is a very common situation, since noise characteristics are rarely known with absolute accuracy. We formulate the uncertainty in process noise characteristics using a Linear Fractional Transformation (LFT) representation and solve the problem by deriving a robust control law that guarantees product quality even under the worst-case scenario of parametric uncertainties. The theoretical results have been evaluated and demonstrated using a linear state-space model of an actual industrial process for automotive cylinder head machining.
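A minimal sketch of the linear Stream-of-Variation recursion that model-based compensation works on; the matrices and numbers in the usage example are hypothetical and are not taken from the cylinder-head process studied in the dissertation.

```python
import numpy as np

def propagate_sov(A_list, B_list, x0, u_list, w_list):
    """Linear Stream-of-Variation propagation across N stations:

        x_k = A_k x_{k-1} + B_k u_k + w_k

    x_k : product deviation state after station k
    u_k : controllable tooling adjustments at station k (the compensation input)
    w_k : unmodeled process noise at station k
    A generic linear SoV form; the matrices for a real production line come
    from process and fixture geometry and are not reproduced here.
    """
    x = x0
    history = [x0]
    for A, B, u, w in zip(A_list, B_list, u_list, w_list):
        x = A @ x + B @ u + w
        history.append(x)
    return history

# usage: two stations, 2-D deviation state, hypothetical matrices
A = [np.eye(2), np.array([[1.0, 0.2], [0.0, 1.0]])]
B = [np.eye(2), np.eye(2)]
x0 = np.array([0.5, -0.3])
u = [np.array([-0.5, 0.3]), np.zeros(2)]      # compensate the initial error at station 1
w = [np.zeros(2), np.array([0.01, -0.02])]
print(propagate_sov(A, B, x0, u, w))
```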
APA, Harvard, Vancouver, ISO, and other styles
44

Vercueil, Megan. "A select bouquet of leadership theories advancing good governance and business ethics: a conceptual framework." Thesis, 2020. http://hdl.handle.net/10500/27355.

Full text
Abstract:
How authors and scholars have approached leadership studies, in terms of their thinking, defining and studying, has changed remarkably over time. According to the literature, this is predominantly due to greater optimism about the field and greater methodological diversity being employed to better understand complex, embedded phenomena. As a result, there has been a significant rise in the use of qualitative research approaches to the study of leadership. Numerous definitions, classifications, explanations and theories about leadership exist in the contemporary literature. However, despite the vast array of literature, the challenge of failing leadership persists. Challenges such as the speed of technological advancement and social and economic change are ever-present, while the impact of COVID-19 is, as yet, uncertain. Despite these challenges, can companies compete successfully in the marketplaces they operate in while also remaining ethical and engaged with the challenges of the broader business and social environment? To answer this question, this study has undertaken qualitative research on the bouquet of trait, situational and value-based leadership theory, in order to re-assess both established and developing theories. The predominant aim is to describe, explain and analyse the available literature in an attempt to obtain academic guidance on how leaders and society might be enabled to mitigate leadership challenges, by proposing a conceptual framework that could support leadership theory and, in so doing, take an academic stance in providing better answers or guidance to the failures currently being experienced. Several authors have noted that leadership makes a difference, with resulting impacts on many; this implies that, in making the world a better place, leadership has two contradictory elements: good and bad. These elements are reflected in today's connected world, where the media either shower praise on leaders or write articles deriding their incompetence and abuse of their roles at all levels. The proposed conceptual framework of this study endeavours to enable society and leaders, practically and at an individual level, to evaluate leadership issues and link leadership frameworks to their everyday lives and, in so doing, aid in mitigating the challenges being faced.
Business Management
D.B.L.
APA, Harvard, Vancouver, ISO, and other styles
