Doctoral dissertations on the topic "Error Transformations"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the 44 best doctoral dissertations for your research on the topic "Error Transformations".
An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are available in the work's metadata.
Browse doctoral dissertations from many different fields and put together a well-formed bibliography.
Gul, Yusuf. "Entanglement Transformations And Quantum Error Correction". PhD thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610773/index.pdf.
Full text source
Suh, Sangwook. "Low-power discrete Fourier transform and soft-decision Viterbi decoder for OFDM receivers". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42716.
Full text source
TAKEDA, Kazuya, Norihide KITAOKA and Makoto SAKAI. "Acoustic Feature Transformation Combining Average and Maximum Classification Error Minimization Criteria". Institute of Electronics, Information and Communication Engineers, 2010. http://hdl.handle.net/2237/14970.
Full text source
Lau, Buon Kiong. "Applications of Adaptive Antennas in Third-Generation Mobile Communications Systems". Thesis, Curtin University, 2002. http://hdl.handle.net/20.500.11937/2019.
Full text source
Lehmann, Rüdiger. "Ein automatisches Verfahren für geodätische Berechnungen". Hochschule für Technik und Wirtschaft Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-188715.
Full text source
Mannem, Narender Reddy. "Adaptive Data Rate Multicarrier Direct Sequence Spread Spectrum in Rayleigh Fading Channel". Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1125782227.
Full text source
Lau, Buon Kiong. "Applications of Adaptive Antennas in Third-Generation Mobile Communications Systems". Curtin University of Technology, Australian Telecommunications Research Institute, 2002. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=12983.
Full text source
Suitable optimisation techniques are then applied to obtain more robust transformations. The improved transformations are shown to be more robust, but at the cost of larger transformation errors. The benefits of the robustification procedure are most apparent in DOA estimation. In addition to the algorithm level studies, the thesis also investigates the use of AAS technology with respect to two different third generation (3G) mobile communications systems: Enhanced Data rates for Global Evolution (EDGE) and Wideband Code Division Multiple Access (WCDMA). EDGE, or more generally the GSM/EDGE Radio Access Network (GERAN), is the evolution of the widely successful GSM system to provide 3G mobile services in the existing radio spectrum. It builds on the TDMA technology of GSM and relies on improved coding and higher order modulation schemes to provide packet-based services at high data rates. WCDMA, on the other hand, is based on CDMA technology and is specially designed and streamlined for 3G mobile services. For WCDMA, a single-user approach to DOA estimation which utilises the user spreading code and the pulse-shaped chip waveform is proposed. It is shown that the proposed approach produces promising performance improvements. The studies with EDGE are concerned with the evaluation of a simple AAS at the system and link levels.
Results from the system and link level simulations are presented to demonstrate the effectiveness of AAS technology in the new mobile communications system. Finally, it is noted that the WCDMA and EDGE link level simulations employ the newly developed COST259 directional channel model, which is capable of producing accurate channel realisations of macrocell environments for the evaluation of AASs.
Halbach, Till. "Error-robust coding and transformation of compressed hybered hybrid video streams for packet-switched wireless networks". Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-136.
Full text source
This dissertation considers packet-switched wireless networks for transmission of variable-rate layered hybrid video streams. Target applications are video streaming and broadcasting services. The work can be divided into two main parts.
In the first part, a novel quality-scalable scheme based on coefficient refinement and encoder quality constraints is developed as a possible extension to the video coding standard H.264. After a technical introduction to the coding tools of H.264, with the main focus on error resilience features, various quality scalability schemes in previous research are reviewed. Based on this discussion, an encoder-decoder framework is designed for an arbitrary number of quality layers, hereby also enabling region-of-interest coding. After that, the performance of the new system is exhaustively tested, showing that the bit rate increase typically encountered with scalable hybrid coding schemes is, for certain coding parameters, only small to moderate. The double- and triple-layer constellations of the framework are shown to outperform other systems.
The second part considers layered code streams as generated by the scheme of the first part. Various error propagation issues in hybrid streams are discussed, which leads to the definition of a decoder quality constraint and a segmentation of the code stream to transmit. A packetization scheme based on successive source rate consumption is drafted, followed by the formulation of the channel code rate optimization problem for an optimum assignment of available codes to the channel packets. Proper MSE-based error metrics are derived, incorporating the properties of the source signal, a terminate-on-error decoding strategy, error concealment, inter-packet dependencies, and the channel conditions. The Viterbi algorithm is presented as a low-complexity solution to the optimization problem, showing a great adaptivity of the joint source channel coding scheme to the channel conditions. An almost constant image quality is achieved, also in mismatch situations, while the overall channel code rate decreases only as little as necessary as the channel quality deteriorates. It is further shown that the variance of code distributions is only small, and that the codes are assigned irregularly to all channel packets.
A double-layer constellation of the framework clearly outperforms other schemes by a substantial margin.
Keywords — Digital lossy video compression, visual communication, variable bit rate (VBR), SNR scalability, layered image processing, quality layer, hybrid code stream, predictive coding, progressive bit stream, joint source channel coding, fidelity constraint, channel error robustness, resilience, concealment, packet-switched, mobile and wireless ATM, noisy transmission, packet loss, binary symmetric channel, streaming, broadcasting, satellite and radio links, H.264, MPEG-4 AVC, Viterbi, trellis, unequal error protection
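The channel code rate optimization above is solved with the Viterbi algorithm. As a generic, self-contained sketch of that dynamic program (a textbook trellis search, not the thesis's actual rate-assignment code; the cost tables in the usage below are invented for illustration):

```python
def viterbi(trans_cost, emit_cost):
    """Minimal Viterbi dynamic program over a trellis.

    emit_cost[t][s]: cost of being in state s at stage t.
    trans_cost[p][s]: cost of moving from state p to state s.
    Returns the cheapest state sequence and its total cost.
    """
    T, S = len(emit_cost), len(emit_cost[0])
    best = list(emit_cost[0])              # cheapest cost to reach each state
    back = []                              # backpointers, one list per stage
    for t in range(1, T):
        ptrs, new = [], []
        for s in range(S):
            costs = [best[p] + trans_cost[p][s] for p in range(S)]
            p = min(range(S), key=costs.__getitem__)
            ptrs.append(p)
            new.append(costs[p] + emit_cost[t][s])
        back.append(ptrs)
        best = new
    end = min(range(S), key=best.__getitem__)
    cost = best[end]
    path = [end]                           # backtrack the optimal path
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return path[::-1], cost
```

For example, `viterbi([[0, 2], [2, 0]], [[0, 1], [1, 0], [0, 1]])` explores a two-state, three-stage trellis; the same recursion scales to the per-packet code-assignment trellis described in the abstract.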
Nestler, Franziska. "Automated Parameter Tuning based on RMS Errors for nonequispaced FFTs". Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-160989.
Full text source
Dridi, Marwa. "Sur les méthodes rapides de résolution de systèmes de Toeplitz bandes". Thesis, Littoral, 2016. http://www.theses.fr/2016DUNK0402/document.
Full text source
This thesis aims to design new fast algorithms for numerical computation via Toeplitz matrices. First, we introduce a fast algorithm to compute the inverse of a triangular Toeplitz matrix with real and/or complex entries, based on polynomial interpolation techniques. This algorithm requires only two FFTs of size 2n and is clearly more effective than its predecessors. A numerical accuracy and error analysis is also given, and numerical examples illustrate the effectiveness of our method. In addition, we introduce a fast algorithm for solving a linear banded Toeplitz system. This new approach is based on extending the given matrix with several rows on the top and several columns on the right, and assigning zeros and some nonzero constants in each of these rows and columns in such a way that the augmented matrix has a lower triangular Toeplitz structure. Stability of the algorithm is discussed and its performance is shown by numerical experiments. This is essential for connecting our algorithms to applications such as image restoration, a key area in applied mathematics.
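The interpolation-based algorithm itself is not spelled out in the abstract. As a hedged illustration of the underlying fact it exploits (inverting a triangular Toeplitz matrix is a power-series reciprocal, computable with FFT-sized convolutions), here is a standard Newton-iteration sketch; the function name is ours, not the thesis's:

```python
import numpy as np

def tril_toeplitz_inverse_column(c, n):
    """First column of the inverse of the lower-triangular Toeplitz matrix
    whose first column is c, i.e. the reciprocal series 1/c(x) mod x^n.

    Newton's iteration doubles the number of correct coefficients per step;
    each step is a pair of polynomial products, which can be done with FFTs
    to reach the O(n log n) cost the thesis targets."""
    inv = np.zeros(n)
    inv[0] = 1.0 / c[0]
    k = 1
    while k < n:
        k = min(2 * k, n)
        t = np.convolve(c[:k], inv[:k])[:k]            # c * inv  (mod x^k)
        inv[:k] = 2 * inv[:k] - np.convolve(inv[:k], t)[:k]
    return inv
```

`np.convolve` is used for clarity; replacing it with an FFT-based product gives the quasi-linear complexity without changing the result.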
Pospíšilík, Oldřich. "Standardy a kódování zdrojového kódu PHP". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-237471.
Full text source
Qureshi, Muhammad Ayyaz [Verfasser], Thomas [Akademischer Betreuer] Eibert and Hendrik [Akademischer Betreuer] Rogier. "Near-Field Error Analysis and Efficient Sampling Techniques for the Fast Irregular Antenna Field Transformation Algorithm / Muhammad Ayyaz Qureshi. Gutachter: Thomas Eibert ; Hendrik Rogier. Betreuer: Thomas Eibert". München : Universitätsbibliothek der TU München, 2013. http://d-nb.info/1045345717/34.
Full text source
Damouche, Nasrine. "Improving the Numerical Accuracy of Floating-Point Programs with Automatic Code Transformation Methods". Thesis, Perpignan, 2016. http://www.theses.fr/2016PERP0032/document.
Full text source
Critical software based on floating-point arithmetic requires a rigorous verification and validation process to improve our confidence in its reliability and safety. Unfortunately, the available techniques for this task often provide overestimates of the round-off errors; Ariane 5 and the Patriot missile are well-known examples of the disasters at stake. In recent years, several techniques have been proposed for transforming arithmetic expressions in order to improve their numerical accuracy and, in this work, we go one step further by automatically transforming larger pieces of code containing assignments, control structures and functions. We define a set of transformation rules allowing the generation, under certain conditions and in polynomial time, of larger expressions by performing limited formal computations, possibly among several iterations of a loop. These larger expressions are better suited to improve, by re-parsing, the numerical accuracy of the program results. We use static analysis techniques based on abstract interpretation to over-approximate the round-off errors in programs and during the transformation of expressions. A tool has been implemented, and experimental results are presented concerning classical numerical algorithms and algorithms for embedded systems.
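The thesis's transformation rules live in a dedicated tool; as a hand-written illustration of the kind of rewriting at stake (not Damouche's actual rules), compare a naive float32 summation with its compensated (Kahan) re-association, which carries the rounded-off low-order bits in a correction term:

```python
import numpy as np

def naive_sum32(xs):
    # Left-to-right float32 summation: a unit addend vanishes next to 1e8,
    # because the float32 spacing at that magnitude is 8.
    s = np.float32(0.0)
    for x in xs:
        s = np.float32(s + x)
    return s

def kahan_sum32(xs):
    # Compensated summation: c accumulates the error lost at each rounding,
    # re-associating the same mathematical expression into an accurate one.
    s = np.float32(0.0)
    c = np.float32(0.0)
    for x in xs:
        y = np.float32(x - c)
        t = np.float32(s + y)
        c = np.float32(np.float32(t - s) - y)
        s = t
    return s

data = [np.float32(1e8)] + [np.float32(1.0)] * 1000
# exact sum is 100001000; the naive loop loses every unit addend
```

Automating exactly this sort of error-reducing re-association, over whole programs rather than single expressions, is what the thesis describes.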
Vacher, André. "Calcul cablé d'une transformée de Fourier à très grand nombre d'échantillons, éventuellement multidimensionnelle". Grenoble INPG, 1997. http://www.theses.fr/1997INPG0020.
Full text source
Jacobson, Craig. "INTERNATIONAL SPACE STATION REMOTE SENSING POINTING ANALYSIS". Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3308.
Full text source
M.S.A.E.
Department of Mechanical, Materials and Aerospace Engineering;
Engineering and Computer Science
Aerospace Engineering
Куц, Юрій Вікторович, and Yurii Kuts. "Метод підвищення точності вимірювань радіосигналів при адаптивній фільтрації". Master's thesis, Ternopil Ivan Puluj National Technical University, Faculty of Applied Information Technologies and Electrical Engineering, Department of Biotechnical Systems, 2021. http://elartu.tntu.edu.ua/handle/lib/36525.
Full text source
The qualification work developed an algorithm for the adaptive search of optimal filter parameters. In particular, the analysis of the studied system was based on three input signals, a sinusoid, the Heaviside step function and a pulse, and it was determined that the most efficient and effective filtering of the signal is achieved with a Chebyshev filter. The dynamic error with this filter is reduced by up to a factor of two.
Contents: Introduction. Chapter 1: general description and definition of the dynamic error; spectral methods of filtering measurement signals (polynomial orthogonalization, Fourier series expansion); the extremal filtering method; the method of introducing correcting elements into the structure; conclusions to Chapter 1. Chapter 2: general notions of filters; the need for discrete filters; accuracy limitations of discrete filters; FIR filters with a linear phase-frequency response; FIR filter design; types of discrete filters; comparison of FIR and IIR filters; comparison of analog and discrete filters; conclusions to Chapter 2. Chapter 3 (research part): problem statement; use of the MATLAB environment; the notion of an adaptive filter; the dynamic error correction algorithm; calculation of the RMS deviation; the minimum-RMS-deviation criterion; the continuous model; signal filtering; mathematical modelling of the system; the rectangular window (testing the measuring system with a Heaviside step and with a pulse signal at the input); the triangular window (the same two tests); conclusions to Chapter 3. Chapter 4: occupational safety; safety in emergency situations; conclusions to Chapter 4. Conclusions. References. Appendix A: listing of the rectangular window program. Appendix B: listing of the triangular window program. Appendix C: listing of the rectangular window program. Appendix D: copy of the conference abstract.
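For the adaptive-filtering theme of this record, a minimal LMS (least-mean-squares) system identifier is sketched below. This is a generic textbook algorithm, not the thesis's MATLAB procedure, and the plant coefficients are invented for the demonstration:

```python
import numpy as np

def lms_identify(x, d, taps=4, mu=0.01):
    # Least-mean-squares adaptive FIR filter: the weight vector w descends
    # the gradient of the instantaneous squared error e^2, e = d[n] - w.x[n].
    w = np.zeros(taps)
    for n in range(taps, len(x)):
        xn = x[n - taps:n][::-1]       # most recent input sample first
        e = d[n] - w @ xn
        w += 2 * mu * e * xn
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
# hypothetical unknown plant: d[n] = 0.5*x[n-1] - 0.3*x[n-2]
d = np.zeros_like(x)
d[2:] = 0.5 * x[1:-1] - 0.3 * x[:-2]
w = lms_identify(x, d)                 # w converges toward [0.5, -0.3, 0, 0]
```

With noise-free data and a stable step size, the weights converge geometrically to the plant coefficients; the thesis instead searches over fixed filter families (Chebyshev among them), but the adaptation principle is the same.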
Nieuwoudt, Christoph. "Cross-language acoustic adaptation for automatic speech recognition". Thesis, Pretoria : [s.n.], 2000. http://upetd.up.ac.za/thesis/available/etd-01062005-071829.
Full text source
Pippig, Michael. "Massively Parallel, Fast Fourier Transforms and Particle-Mesh Methods: Massiv parallele schnelle Fourier-Transformationen und Teilchen-Gitter-Methoden". Doctoral thesis, Universitätsverlag der Technischen Universität Chemnitz, 2015. https://monarch.qucosa.de/id/qucosa%3A20398.
Full text source
This dissertation presents a modularized view of the structure of fast numerical methods for computing the Coulomb interactions between charges in three-dimensional space. The common structure consists of three independent algorithms that build on one another: the fast Fourier transform (FFT), the nonequispaced fast Fourier transform (NFFT), and the NFFT-based particle-mesh method (P²NFFT). For each of these algorithms, improvements and parallel implementations are presented, with particular attention to massively parallel scalability. In the context of the FFT, parallel algorithms are assembled from the hardware-adaptive modules of the FFTW software library. The new NFFT concepts include truncated NFFTs, grid shifting (interlacing), analytic differentiation, and a deconvolution in Fourier space optimized with respect to the mean square aliasing error. With these generalizations, the NFFT offers a unified approach to particle-mesh methods. In particular, mixed periodic boundary conditions are treated uniformly and interlacing is implemented more efficiently. Heuristics for parameter selection are given on the basis of careful error estimates.
Sundström, David. "On specification and inference in the econometrics of public procurement". Doctoral thesis, Umeå universitet, Nationalekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-121681.
Full text source
Raillon, Loic. "Experimental identification of physical thermal models for demand response and performance evaluation". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI039/document.
Full text source
The European Union strategy for achieving the climate targets is to progressively increase the share of renewable energy in the energy mix and to use energy more efficiently from production to final consumption. This requires measuring the energy performance of buildings and associated systems, independently of weather conditions and user behavior, to provide efficient and well-adapted retrofitting solutions. It also requires knowing the energy demand in order to anticipate energy production and storage (demand response). The estimation of building energy demand and the estimation of the energy performance of buildings share a common scientific core: the experimental identification of a physical model of the building's intrinsic behavior. Grey box models, determined from first principles, and black box models, determined heuristically, can describe the same physical process. Relations between the physical and mathematical parameters exist if the black box structure is chosen such that it matches the physical one. To find the best model representation, we propose to use Monte Carlo simulations for analyzing the propagation of errors in the different model transformations, and factor prioritization for ranking the parameters according to their influence. The results obtained show that identifying the parameters on the state-space representation is the better choice. Nonetheless, the physical information determined from the estimated parameters is reliable only if the model structure is invertible and the data are informative enough. We show how an identifiable model structure can be chosen, in particular thanks to the profile likelihood. Experimental identification consists of three phases: model selection, identification and validation. These three phases are detailed on a real house experiment, using both a frequentist and a Bayesian framework.
More specifically, we propose an efficient Bayesian calibration to estimate the parameter posterior distributions, which allows simulations that take all the uncertainties into account and is therefore suitable for model predictive control. We have also studied the capabilities of sequential Monte Carlo methods for estimating the states and the parameters simultaneously. An adaptation of the recursive prediction error method into a sequential Monte Carlo framework is proposed and compared to a method from the literature. Sequential methods can be used to provide a first model fit and insights on the selected model structure while the data are being collected. Afterwards, the first model fit can be refined if necessary by using iterative methods on the batch of data.
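As a toy illustration of the point about identifying parameters on the state-space representation (our own minimal example, not the thesis's RC-network models; the parameter values are invented), a first-order discrete thermal model T[k+1] = a T[k] + b q[k] can be recovered from noise-free data by linear least squares:

```python
import numpy as np

# Simulate a first-order discrete thermal model with assumed "true" parameters.
a_true, b_true = 0.9, 0.05
rng = np.random.default_rng(1)
q = rng.standard_normal(200)          # heat input sequence (excitation)
T = np.zeros(201)                     # state trajectory
for k in range(200):
    T[k + 1] = a_true * T[k] + b_true * q[k]

# Identify (a, b): regress T[k+1] on the state-space regressors [T[k], q[k]].
X = np.column_stack([T[:-1], q])
a_hat, b_hat = np.linalg.lstsq(X, T[1:], rcond=None)[0]
```

With informative (persistently exciting) input and no noise, the estimate is exact; the thesis's contribution concerns exactly what happens when those idealizations fail, via uncertainty propagation and Bayesian calibration.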
Pippig, Michael. "Massively Parallel, Fast Fourier Transforms and Particle-Mesh Methods". Doctoral thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-197359.
Full text source
Kelly, Jodie. "Topics in the statistical analysis of positive and survival data". Thesis, Queensland University of Technology, 1998.
Find full text source
Nestler, Franziska. "Efficient Computation of Electrostatic Interactions in Particle Systems Based on Nonequispaced Fast Fourier Transforms". Universitätsverlag der Technischen Universität Chemnitz, 2017. https://monarch.qucosa.de/id/qucosa%3A23376.
Full text source
This work is devoted to the computation of electrostatic interactions in particle systems, which plays a central role, for example, in molecular dynamics simulations. To compute the required physical quantities with only O(N log N) arithmetic operations, so-called particle-mesh methods make use of the Ewald summation and the fast Fourier transform (FFT). Typically, such methods can handle systems of point charges under periodic boundary conditions in all spatial directions. However, periodicity is not always desired with respect to all three dimensions. Furthermore, interactions with dipoles also play an important role in many applications. The central subject of this work is the particle-particle NFFT method (P²NFFT), a particle-mesh method based on the fast Fourier transform for nonequispaced data (NFFT). An extension of this method to mixed periodic as well as open boundary conditions is presented. In addition, the method is adapted to handle particle systems containing both charges and dipoles. This yields, for the first time, an efficient algorithm for mixed charge-dipole systems, which moreover allows all types of boundary conditions to be treated within a unified framework. Corresponding error estimates and strategies for parameter selection are developed and verified with numerical examples.
Промович, Юрій Бориславович, Юрий Бориславович Промович i Y. B. Promovych. "Математичне моделювання струму в об’єктах з неоднорідностями та методи їх біполярної електроімпедансної томоґрафії з підвищеною точністю". Thesis, Тернопільський національний технічний університет ім. Івана Пулюя, 2013. http://elartu.tntu.edu.ua/handle/123456789/2393.
Full text source
The dissertation solves the scientific problem of improving the mathematical model of current trajectories in soft tissues with neoplasms, in order to reach sufficient accuracy in reconstructing the electrical conductivity distribution from bipolar EIT data. For this purpose, a priori information about the tissue parameters is used, and a correction for the systematic voltage measurement error is introduced. It is established that the known conductivity reconstruction methods based on back projection do not take into account the interaction of the electric current with a medium of inhomogeneous conductivity. For bipolar electrical impedance tomography, an image reconstruction method is constructed that back-projects the projection data along the lines of maximal electric current density. A model of the systematic error of the tomograph's electrical impedance measurement is also constructed in order to form the correction, whose effectiveness is confirmed on real tomographic experiment data. The reconstruction method and the systematic error model were verified using a simulation model and an experimental prototype of an electrical impedance tomography system built at the Biotechnical Systems Department of TNTU. The mathematical models are applied in the construction of reconstruction algorithms and in the physical and simulation modelling of EIT.
The dissertation is focused on the improvement of methods and means of mathematical and computer modelling of image reconstruction in bipolar electrical impedance tomography (EIT). For bipolar electrical impedance tomography, the image reconstruction method is improved: back projection is carried out along the lines of maximal electric current density. The reconstruction method can be divided into three stages. The first stage is the construction of the electric potential field of the medium. The electric potential for each pair of electrodes is found from a differential equation with boundary conditions involving n, the normal vector to the boundary, and the locations at which the electrodes are connected. In the second stage, the line of maximal electric current density is built for every electrode pair; this line is found by a variational method. Along the line of maximal current density the power dissipation is maximal and, by assumption, it determines the potential difference between the electrode pair. The third stage consists of filtering the measured data and back-projecting them onto the region of interest. A mathematical model of the systematic error of the tomograph's electrical impedance measurement is also worked out. The measurement error in EIT contains random and systematic components. The random component is caused by slight losses of electrode contact with the surface of the conducting body. The systematic error is due to the hardware arrangement of the tomograph's measurement transducer. As a rule, every electrode is connected to the measuring transducer of the impedance tomograph through one key of a multiplexer. When the resistance of the conducting body is approximately equal to the resistance of an open multiplexer channel, a substantial source of error appears: the resistance of the open multiplexer channel becomes the source of the systematic error.
A single realization of the tomographic experiment in calibration mode is treated as a bounded stochastic sequence of observed resistance values for each pair of multiplexer keys. An adequate model of the signals from synchronous multiplexer systems is a stochastic sequence of the class studied in the energy theory of stochastic signals. The estimate of the mathematical expectation of the stationary component serves as the functional used to decrease the systematic error in the tomographic experiment. The energy theory of stochastic signals is applied to the in-phase analysis of the ensemble of tomographic experiment realizations, in order to build an error signal that acts as negative feedback for the input circuit of the impedance tomograph. The efficiency of the mathematical model is confirmed on real tomographic experiment (TE) data. The reconstruction method and the systematic error model were verified using a simulation model and an experimental prototype system for electrical impedance tomography. The simulation uses data from a test conductivity distribution image of a flat section of a conducting body; its result is a sequence of voltage-drop values for each formally defined pair of measuring electrodes. The experimental prototype system was designed at the Biotechnical Systems Department of the Ternopil Ivan Puluj National Technical University. The constructed mathematical models are used to implement the reconstruction algorithms and the EIT simulation.
Helwig, Wolfram Hugo. "Multipartite Entanglement: Transformations, Quantum Secret Sharing, Quantum Error Correction". Thesis, 2014. http://hdl.handle.net/1807/44114.
Full text source
"On general error cancellation based logic transformations: the theory and techniques". Thesis, 2011. http://library.cuhk.edu.hk/record=b6075487.
Full text source
Thesis (Ph.D.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (leaves 113-120).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
Liang, Zhi-Hong, and 梁志鴻. "Design of Timing-Error-Tolerant Digital Filters for Various Filter Transformations". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/4c2ycw.
Full text source
National Dong Hwa University (國立東華大學)
Department of Electrical Engineering (電機工程學系)
102
In modern VLSI design, especially system-on-chip design, the number of transistors in a single chip keeps increasing thanks to advances in chip manufacturing technology. However, as the feature size of modern chips shrinks, circuits become more and more susceptible to noise, wire delay, and soft errors. One of the main problems is timing errors, which are caused by process variation, device aging, etc. Such timing errors can cause system failures, so it is an important issue to solve the timing error problem while maintaining the performance of a chip. This thesis proposes various transformation designs for VLSI digital filters for tolerating multiple timing errors. We have developed a design methodology for VLSI digital filters which can detect and tolerate multiple timing errors on-line. In order to achieve high performance, different transformations for various digital filter designs are applied. According to the design requirements, we choose the appropriate transformation for the filter to improve performance while it can still tolerate multiple timing errors. We have applied our techniques to two example digital filter designs, an FIR filter and an IIR filter. Four examples for each circuit are studied and evaluated. We have implemented them using a cell-based design flow on TSMC manufacturing technology. The implementation results show that our designs achieve high performance and tolerance of multiple timing errors for digital filters at reasonable cost.
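The filter transformations referred to above are structural: they change the register arrangement, and hence the timing paths, without changing the transfer function. As a small, hedged illustration with generic textbook forms (not the thesis's timing-error-tolerant designs), direct-form and transposed-form FIR filters compute identical outputs through different structures:

```python
def fir_direct(h, x):
    # Direct form: y[n] = sum_k h[k] * x[n-k]; registers hold delayed inputs.
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(h)):
            if n - k >= 0:
                acc += h[k] * x[n - k]
        y.append(acc)
    return y

def fir_transposed(h, x):
    # Transposed form: registers hold partial sums instead of delayed inputs.
    # Same transfer function, different critical path (assumes len(h) >= 2).
    if len(h) == 1:
        return [h[0] * xn for xn in x]
    s = [0.0] * (len(h) - 1)
    y = []
    for xn in x:
        y.append(h[0] * xn + s[0])
        for i in range(len(s) - 1):
            s[i] = h[i + 1] * xn + s[i + 1]
        s[-1] = h[-1] * xn
    return y
```

Because both forms realize the same difference equation, their outputs agree to rounding; the thesis exploits the freedom to pick the structure whose timing slack best tolerates errors.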
Sachan, Kapil. "Constrained Adaptive Control of Nonlinear Systems with Application to Hypersonic Vehicles". Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4415.
Full text source
Wu, Jyh Horng, and 吳志鴻. "An Error-Free Euclidean Distance Transformation". Thesis, 1995. http://ndltd.ncl.edu.tw/handle/77575564889671198189.
Full text source
National Sun Yat-sen University (國立中山大學)
Institute of Electrical Engineering (電機工程研究所)
83
Distance transformation is a fundamental technique for the application fields of image understanding and computer vision. Important characteristics in image analysis such as shape factors, skeletons and medial axes are based on the distance transformation computation. Euclidean distance is by all means the most natural and realistic distance, and is thus the most demanded distance for the above applications. The discrete Euclidean distance raises a different geometric issue from the city-block or chessboard distance, since the Euclidean distance does not follow the length along the grid points as the city-block and chessboard distances do. As a result, an absolutely accurate Euclidean distance transformation has previously been obtained only by a global approach, which is time-consuming and memory-costly. In contrast, the city-block and chessboard distances can be computed by a local approach which usually takes two scans: one top-down and the other bottom-up. In this research, an algorithm for computing the Euclidean distance is developed on top of this computation structure for efficiency. The success of our local approach to the Euclidean distance transformation rests on the design of a candidates look-up table, which combines Euclidean geometry with a local candidates-table implementation. The geometric analysis specifies the global candidates which need to be considered for the shortest distance, and the global candidates are transformed to a local array by a vertical projection. In addition to the fast computation afforded by our local approach, the memory required for the look-up table is kept economical by exploiting a hashing-function concept, which reduces a four-variable look-up table to a two-variable one. Finally, a proof by mathematical induction is presented to guarantee the absolute accuracy of our computation of the Euclidean distance.
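The two-scan structure the abstract builds on is the classic chamfer-style local distance transform. The sketch below shows that structure for the city-block metric only; the thesis's contribution, not reproduced here, is the candidates look-up table that makes the same scan pattern yield exact Euclidean distances:

```python
import numpy as np

def two_pass_cityblock(binary):
    """City-block distance to the nearest nonzero pixel, by two local scans:
    a top-down pass propagating from the upper/left neighbours, then a
    bottom-up pass propagating from the lower/right neighbours."""
    INF = 10**6
    d = np.where(binary > 0, 0, INF).astype(np.int64)
    rows, cols = d.shape
    for i in range(rows):                       # top-down scan
        for j in range(cols):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    for i in range(rows - 1, -1, -1):           # bottom-up scan
        for j in range(cols - 1, -1, -1):
            if i < rows - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < cols - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d
```

Two linear scans suffice for this metric because the minimum over a path decomposes into local minima; the Euclidean case breaks that property, which is exactly why the thesis needs its candidate-table machinery.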
Chang, Hon-Hang, and 張閎翰. "Irregular Image Transformation and Balanced Error Distribution for Video Seam Carving". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/66233893384606489158.
Full text source
國立中央大學 (National Central University)
資訊工程學系 (Department of Computer Science and Information Engineering)
105 (ROC year)
In recent years, more and more image retargeting techniques have been proposed to facilitate our daily life, in particular those based on seam carving, warping, or a combination of the two. Although these techniques are rather sophisticated, they only work for a specific pattern during retargeting. For example, they can only retarget the source picture into the same square shape; the result cannot easily be reshaped into a circle, a polygon, or other shapes. This thesis focuses on creating a graphics editing system, named CMAIR (Content and Mask-Aware Image Retargeting), which can retarget source images into differently shaped images to highlight the salient objects of primary interest. CMAIR effectively supports the removal of unimportant pixels and frames as many surrounding objects inside the provided mask as possible. We propose a unique irregular interpolation method to produce four possible target images, and an evaluation mechanism that, taking image saliency into consideration, selects the best candidate image as the final output. The results show that not only can the source image be placed into differently shaped target masks, but the salient objects are also retained and highlighted as much as possible, so that they become clearer to the eye. In addition, a video retargeting method named BED is proposed in this study; the proposed algorithm focuses on maintaining the structure of straight lines and irregularly shaped objects without deforming complex image contents, which may be altered by traditional seam carving of complex images or video. The proposed mechanism also maintains visual continuity, so that the resulting video will not look shaky due to sudden changes in the background. The practical applicability of the proposed method was tested using both regular videos and special videos that contain vanishing lines (i.e., perspective effects).
Experimental results demonstrate that the proposed CMAIR can convert a rectangular image into an irregularly shaped image while preserving all the salient objects of the targeted images. In our video results, the BED approach not only resizes the video while retaining important information, but also maintains the structural properties of objects in various kinds of videos. The demonstration websites at http://video.minelab.tw/DETS/EDSeamCarving/ and http://video.minelab.tw/DETS/VRDSF/ provide comparison results that illustrate the contribution of our method.
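The thesis's CMAIR and BED systems are not specified in enough detail to reproduce here; as background, this is a hedged sketch of the dynamic-programming core of classic seam carving (the standard formulation the thesis extends, not its irregular-shape variant):

```python
import numpy as np

def min_vertical_seam(energy):
    """Find the minimum-energy vertical seam by dynamic programming.

    energy: 2-D array of per-pixel energy (e.g. gradient magnitude).
    Returns one column index per row tracing the cheapest 8-connected seam.
    """
    h, w = energy.shape
    cost = energy.astype(float).copy()
    # accumulate: each pixel adds the cheapest of its three upper neighbors
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # backtrack from the cheapest bottom-row pixel
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]
```

Removing the returned column from each row shrinks the image by one pixel in width while avoiding high-energy (salient) content.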
Amirian, Ghasem [author]. "Transformation of tracking error in parallel kinematic machining / by Ghasem Amirian". 2008. http://d-nb.info/991997352/34.
Full text source
Chung, Leng Hung, and 鐘年宏. "THE COORDINATE TRANSFORMATION AND ERROR ANALYSIS COMPENSATION FOR FIVE FACE MACHINING CENTER". Thesis, 1995. http://ndltd.ncl.edu.tw/handle/39264222687526367409.
Full text source
Lo, Chien-Tai, and 羅建台. "Utilizing Adaptive Time-Variant Transformation Model for Mobile Platform Positioning Error Adjustment". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/52609724022850632394.
Full text source
國立臺灣大學 (National Taiwan University)
土木工程學研究所 (Graduate Institute of Civil Engineering)
102 (ROC year)
A mobile mapping system (MMS) utilizes global navigation satellite system (GNSS) and inertial navigation system (INS) techniques, making a direct georeferencing solution possible everywhere along its surveyed path. It is capable of acquiring a vast amount of spatial information efficiently and is adopted in a wide variety of applications. Nevertheless, when the GNSS signal is obstructed, the positioning solution can only rely on the INS observables, which exhibit significant and cumulative errors over time. In order to compensate for the position error in a GNSS-denied area, a time-variant adjustment model is developed in this study. Moreover, an adaptive algorithm is proposed to improve the efficiency and reliability of the error adjustment analysis. Based on the results of a case study, it is demonstrated that the positioning error of a mobile mapping platform in an urban area can reach a level of several meters due to GNSS signal obstructions. When the proposed approach is applied, however, the error can be significantly reduced to the centimeter level. As a result, the applicability of the mobile mapping technique can be further extended to GNSS-hostile areas where current methods are limited.
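The abstract does not spell out the time-variant adjustment model. One hedged reading of the underlying idea, interpolating a drift correction between epochs where the position error is known, might look like the following sketch (the function name and the linear-interpolation choice are illustrative assumptions, not the thesis's method):

```python
import numpy as np

def interpolate_correction(t, t_ctrl, err_ctrl):
    """Linearly interpolate 3-D position corrections over time.

    t: query times along the GNSS-denied segment.
    t_ctrl: increasing times with a known position error (e.g. epochs
    where an accurate reference solution is available).
    err_ctrl: (n, 3) observed errors at those times.
    Returns corrections to subtract from the INS-derived trajectory.
    """
    t = np.asarray(t, dtype=float)
    err_ctrl = np.asarray(err_ctrl, dtype=float)
    return np.stack(
        [np.interp(t, t_ctrl, err_ctrl[:, k]) for k in range(3)], axis=1
    )
```

An adaptive variant, as the abstract suggests, would additionally choose the model order or control epochs from the data rather than fixing them in advance.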
Ming-Hsun Fu, and 符明勛. "Non-iterative Method of Seven-Parameter Similarity Transformation and Gross Error Detection". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/jtak2w.
Full text source
Lee, Hung-Shin, and 李鴻欣. "Classification Error-based Linear Discriminative Feature Transformation for Large Vocabulary Continuous Speech Recognition". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/6w7r9s.
Full text source
國立臺灣師範大學 (National Taiwan Normal University)
資訊工程研究所 (Graduate Institute of Computer Science and Information Engineering)
97 (ROC year)
The goal of linear discriminant analysis (LDA) is to seek a linear transformation that projects an original data set into a lower-dimensional feature subspace while retaining geometrical class separability. However, LDA cannot always guarantee better classification accuracy. One possible reason is that its criterion is not directly associated with the classification error rate, so it does not necessarily accommodate the allocation rule governed by a given classifier, such as that employed in automatic speech recognition (ASR). In this thesis, we extend classical LDA by leveraging the relationship between the empirical phone classification error rate and the Mahalanobis distance for each respective phone class pair. To this end, we modify the original between-class scatter from a measure of Euclidean distance to the pairwise empirical classification accuracy for each class pair, while preserving the lightweight solvability and making no distributional assumption, just as LDA does. Furthermore, we present a novel discriminative linear feature transformation, named generalized likelihood ratio discriminant analysis (GLRDA), based on the likelihood ratio test (LRT). It seeks a lower-dimensional feature subspace by making the most confusing situation, described by the null hypothesis, as unlikely to happen as possible, without the homoscedasticity assumption on class distributions. We also show that classical LDA and its well-known extension, heteroscedastic linear discriminant analysis (HLDA), are just two special cases of our proposed method. The empirical class confusion information can be further incorporated into GLRDA for better recognition performance.
Experimental results demonstrate that our approaches yield moderate improvements over LDA and other existing methods, such as HLDA, on the Chinese large vocabulary continuous speech recognition (LVCSR) task.
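The classical LDA baseline that the thesis extends can be sketched as follows (a textbook formulation, not the thesis's modified criterion):

```python
import numpy as np

def lda_transform(X, y, n_components):
    """Classical LDA: maximize between-class over within-class scatter.

    X: (n, d) feature matrix; y: (n,) integer class labels.
    Returns a (d, n_components) projection matrix whose columns solve
    the generalized eigenproblem Sb v = lambda Sw v.
    """
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]]
```

The thesis replaces the Euclidean between-class term Sb with pairwise empirical classification accuracies; the solver structure above stays the same.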
"General transformation model with censoring, time-varying covariates and covariates with measurement errors". Thesis, 2008. http://library.cuhk.edu.hk/record=b6074722.
Full text source
Censoring is an intrinsic part of survival analysis. In this thesis, we establish the asymptotic properties of the MMLE for general transformation models when the data are subject to right or left censoring. We show that the MMLE is not only consistent and asymptotically normal, but also asymptotically efficient. Our asymptotic results thus give a definite answer to a long-standing argument over the efficiency of the maximum marginal likelihood estimator. The difficulty in establishing these results comes from the fact that the score function derived from the marginal likelihood has no ordinary independence or martingale structure. We develop a discretization method to establish our results. As a special case, our results imply the consistency, asymptotic normality, and efficiency of multinomial probit regression, a popular alternative to the Cox regression model.
The general transformation model is an important family of semiparametric models in survival analysis that generalizes the linear transformation model. It includes not only the typical Cox regression model, the proportional odds model, and the multinomial probit regression model, but also the heteroscedastic hazard regression model, the general heteroscedastic rank regression model, and the frailty model. By maximizing the marginal likelihood, a parameter estimate (the MMLE) can be obtained that avoids estimating the baseline survival function and the censoring distribution, a property also enjoyed by the Cox regression model. In this thesis, we study three generalizations of general transformation models: the main response variable is subject to censoring, the covariates are time-varying, and the covariates are subject to measurement error.
In medical studies, the covariates are not always the same during the whole period of study; they may change at certain time points. For example, at the beginning, n patients receive drug A as treatment. After a certain percentage of patients have died, the investigator might add a new drug B for the remaining patients. This corresponds to the case of time-varying covariates. In this thesis, we propose an estimation procedure for the parameters of the general transformation model with this type of time-varying covariate. The results of extensive simulations show that our approach works well.
Wu, Yueqin.
Adviser: Ming Gao Gu.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3589.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 74-78).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
Genga, Yuval Odhiambo. "Modifications to the symbol wise soft input parity check transformation decoding algorithm". Thesis, 2016. http://hdl.handle.net/10539/20590.
Full text source
Reed-Solomon codes are very popular in the field of forward error correction due to their error-correction capabilities, and much research has therefore been dedicated to the development of decoding algorithms for this class of code. [Abbreviated abstract; open the document to view the full version]
Chang, Chia Lin, and 張嘉麟. "Transformation of Data Points Taken from Different Coordinate Measuring Systems under the Influence of Measuring Error". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/33369304536057600406.
Full text source
大葉大學 (Da-Yeh University)
工業工程與科技管理學系 (Department of Industrial Engineering and Technology Management)
95 (ROC year)
In many 3D design tasks, computer models cannot be completed in a single pass because of the large artifact volume and the measuring limitations of the digitizing machine. To overcome this limitation, this research combines RHINO and Microscribe to digitize artifacts of a human head and hand in section-by-section measurements; a coordinate measurement transformation is then applied to bring the measurements from all sections into one common coordinate system. Characteristic points are then taken from the artifacts together with their counterparts in the computer model. A distance matrix is formed among the characteristic points for both the original artifact and the computer model, the difference between the matrices is extracted, and a signal-to-noise (SN) ratio is used as a criterion to evaluate the match between them. The relationship between the SN ratios and the number of characteristic points is also investigated.
Chang, Yi Ching, and 張依靜. "Strengthen Geography Transformation and Forgery Resistance of Fragile Watermarking with 3D Model by Combining Error-Correcting Code and Cryptography". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/54956488676099949340.
Full text source
國立中興大學 (National Chung Hsing University)
資訊科學與工程學系 (Department of Computer Science and Engineering)
105 (ROC year)
Multimedia collectively refers to text, audio, two- and three-dimensional images, two- and three-dimensional video, and three-dimensional meshes. As computer technology has advanced, there are more and more cases in which people take existing multimedia from the Internet and, after minor alteration, claim it as their own work. Innovation and creativity are important for products, so copyright has become an important issue that needs protection, and some literature exists on the subject. The algorithms in this thesis are designed for blind fragile watermarking, utilizing error-correcting codes and cryptography to enhance attack-verification ability; the objective is to protect the copyright of three-dimensional models. The meshes used in this research come from a public database. These meshes are virtual three-dimensional polygon models built in the computer from points, lines, surfaces, triangles, curves, and other geometric primitives combined with color design. The watermark can be used for detecting malicious modification and marking out the affected area of an attacked stego medium; therefore, the watermark should be embedded throughout the cover medium as much as possible. Embedding information inevitably alters the medium, but the alteration is tolerable if it cannot be noticed by human senses; the distortion ratio of each model is shown in the experimental results. This research is divided into three parts, all designed to protect polygonal meshes, and experimental results show that they are efficient in realistic applications. (a) Data bits are extracted from a cover model and encoded with an error-correcting code; the output of the coding procedure is treated as a watermark, which is embedded into the cover model itself.
The advantage is that this achieves lower time complexity than previous work, keeps the distortion ratio below 10⁻⁶, provides 100% embedding capacity, and yields efficient verification. (b) Cryptography in feedback mode is used to encrypt the cover model. A vertex is extracted as plaintext, and the ciphertext is treated as a watermark, which is embedded in the least significant bit (LSB) at or below the sixth decimal place after conversion into binary. The advantage is that replacing the LSBs controls the distortion rate; the experimental results show that this algorithm achieves a high watermark embedding rate, high detection efficiency, a low distortion rate, and, moreover, higher security. (c) A three-dimensional model has three coordinates (X, Y, Z). A spherical coordinate system (r, φ, θ) combines vertex, distance, and angle information, and the interaction among these three kinds of information increases the tenacity of the algorithm, so that users can moderately modify the appearance of a model in non-malicious ways such as scaling, zooming, and shifting, which is called an affine or geographic transformation. A geographic transformation is a mathematical operation that converts the coordinates of a point in one geographic coordinate system to the coordinates of the same point in another geographic coordinate system. A three-dimensional model represents virtual three-dimensional space in a computer by points, lines, and surfaces, and is recorded in a Cartesian coordinate system. The input model is first converted from the Cartesian coordinate system into the spherical coordinate system, and then the encoding algorithms of the first and second parts are applied respectively, using a reference vertex. The experimental results show that the spherical coordinate system can resist affine attacks efficiently while still achieving a high malicious-attack detection rate.
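The LSB scheme in part (b) can be illustrated roughly as follows. The scaling-by-10⁶ reading of "the sixth decimal place" and the rounding rules here are assumptions for illustration, not the thesis's exact procedure:

```python
def embed_bit(coord, bit, digits=6):
    """Embed one watermark bit in the least significant bit of a vertex
    coordinate's `digits`-th decimal place.

    The coordinate is scaled to an integer, its lowest bit is replaced
    by the watermark bit, and it is scaled back.
    """
    scaled = int(round(coord * 10 ** digits))
    scaled = (scaled & ~1) | (bit & 1)
    return scaled / 10 ** digits

def extract_bit(coord, digits=6):
    """Recover the embedded bit from a watermarked coordinate."""
    return int(round(coord * 10 ** digits)) & 1
```

Because only the sixth decimal place can change, the per-coordinate distortion is bounded by 10⁻⁶, which matches the distortion-control property the abstract claims for LSB replacement.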
Chang, Neng-Hsuan, and 張能軒. "A Study for Influence of Height Error on 3-D & 2-D Coordinate Transformation Precision in Large Region". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/62017731462052171205.
Full text source
中興大學 (Chung Hsing University)
土木工程學系所 (Department of Civil Engineering)
99 (ROC year)
To explore the effect of height error on the accuracy of three- and two-dimensional coordinate transformation between TWD97 and TWD67, based on the Taiwan geodetic datum TWD97, this study performs the conversion between TWD97 and TWD67 on second-order satellite control points in central Taiwan using the seven-parameter Molodensky-Badekas and Bursa-Wolf transformation models. Since TWD67 lacks ellipsoid height information, the common points are chosen so that the TWD67 and TWD97 ellipsoid heights are the same before the conversion. Two methods are applied to compare the ellipsoid heights of the common points. The first method resets the ellipsoid heights of both TWD67 and TWD97 at the common points to zero and then changes the TWD67 ellipsoid height by a constant (±10, ±20, ±25 m). The second method changes the TWD67 ellipsoid height by random amounts within given ranges (±1-3, ±3-5, ±5-10 m) to compare the effect of height variation on the conversion accuracy of three- and two-dimensional coordinates between TWD97 and TWD67. The results show that the Molodensky-Badekas and Bursa-Wolf seven-parameter models give the same conversion result. The conversions with equal TWD67 and TWD97 heights, with method one (changing the TWD67 ellipsoid height), and with method two (changing the ellipsoid height within 3-5 m) hardly affect the plane coordinates, with a root-mean-square error of (dN, dE) of only about ±1 cm, while the root-mean-square error of dh is approximately the average height variation of the common points.
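The Bursa-Wolf seven-parameter similarity transformation used in the study has a standard linearized form; sign conventions for the rotations vary between references, and this sketch uses one common (position-vector) convention:

```python
import numpy as np

def bursa_wolf(xyz, tx, ty, tz, rx, ry, rz, s_ppm):
    """Bursa-Wolf seven-parameter similarity transformation.

    xyz: (n, 3) geocentric Cartesian coordinates in the source datum.
    tx, ty, tz: translations in metres; rx, ry, rz: small rotation
    angles in radians; s_ppm: scale change in parts per million.
    Uses the standard small-angle (linearized) rotation matrix.
    """
    R = np.array([[1.0,  rz, -ry],
                  [-rz, 1.0,  rx],
                  [ ry, -rx, 1.0]])
    scale = 1.0 + s_ppm * 1e-6
    return np.array([tx, ty, tz]) + scale * (np.asarray(xyz, float) @ R.T)
```

With all seven parameters zero the transformation is the identity; fitting the parameters from common points (as done for the TWD67/TWD97 control points) is a least-squares problem on this linear model.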
Lai, Yi-Chun, and 賴怡君. "Examining the Effects of Safety-specific Transformational Leadership and Psychological Safety on Safety Voice and Safety Compliance: A Moderating Role of Error Climate". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/s96xf5.
Full text source
國立交通大學 (National Chiao Tung University)
管理科學系所 (Department of Management Science)
106 (ROC year)
This study investigated the antecedents of safety voice and clarified how to improve safety compliance. Building on social exchange theory, I proposed that safety-specific transformational leadership increases safety voice, leading to positive safety compliance. Psychological safety, as a group-level factor, can be related to enhanced safety voice. Furthermore, drawing on signaling theory, error climate can moderate the relationship between safety voice and safety compliance. To test these assumptions, data were collected from supervisors (N = 23) together with their subordinates (N = 196) within group units of a semiconductor technology company in Hsinchu Science Park (Taiwan). Results were consistent with my hypotheses: (a) safety-specific transformational leadership was significantly and positively related to safety voice; (b) safety voice was positively related to safety compliance; (c) the effect of safety-specific transformational leadership on safety compliance was mediated by safety voice; (d) psychological safety was positively related to safety voice; and (e) the influence of safety voice on safety compliance was moderated by error climate, such that the direct effect of safety voice on safety compliance was more positive when error climate was high than when it was low. Organizations can therefore work on developing a safe workplace through safety leadership and safety-related climates to achieve the expected safety outcomes.
Jee, Kangkook. "On Efficiency and Accuracy of Data Flow Tracking Systems". Thesis, 2015. https://doi.org/10.7916/D8MG7P9D.
Full text source
Jiao, Yibo. "Compensation-oriented quality control in multistage manufacturing processes". Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-08-5961.
Full text source
Vercueil, Megan. "A select bouquet of leadership theories advancing good governance and business ethics: a conceptual framework". Thesis, 2020. http://hdl.handle.net/10500/27355.
Full text source
Business Management
D.B.L.