To view other types of publications on this topic, follow this link: Estimation of parameters tool.

Dissertations on the topic "Estimation of parameters tool"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Estimation of parameters tool."

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the publication as a PDF and read its online abstract whenever these are available in the metadata.

Browse dissertations from a wide range of disciplines and compile an accurate bibliography.

1

Chen, Xiaoming. „The development of a parameter estimation tool towards fault diagnosis“. The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1399563299.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Valkonen, Laura Elina. „The Sunyaev-Zel'dovich effect in galaxy clusters as a tool for estimating cosmological parameters“. Thesis, University of Sussex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487558.

Abstract:
Clusters of galaxies provide us with a sensitive probe with which to study the Universe. Their mass function is strongly dependent on the cosmological parameters, which govern the dynamical evolution of the Universe, and they also provide a representative sample of the Universal matter distribution. The Sunyaev-Zel'dovich effect (SZE) is a promising method for detecting clusters out to their formation redshift and has also been shown to be a good estimator for cluster masses. A combination of X-ray and SZE data can also be used to measure the distance to the cluster, independently of the cosmic distance ladder, allowing a measurement of the Hubble Constant. However, the success of SZE methods is highly dependent on a detailed understanding of the physics of galaxy clusters. We have undertaken a multi-wavelength survey of 8 galaxy clusters, the Viper Sunyaev-Zel'dovich Survey (VSZS), in order to assess and highlight the issues which may be encountered by upcoming large-scale SZE surveys. Such surveys will not be able to study individual clusters in great detail and will be reliant on the accuracy of scaling relations and assumed cluster models. We have therefore imaged each cluster in our sample simultaneously at three frequencies (150 GHz, 220 GHz and 280 GHz) with the Arcminute Cosmology Bolometer Array Receiver (ACBAR), and have followed up with X-ray observations (Chandra and XMM-Newton) and some optical observations (Gemini), in order to carry out a detailed analysis of the cluster ICM structure. We have made some of the highest-significance detections of the SZE to date. Several clusters were detected at two frequencies, as a temperature increment at 280 GHz and a decrement at 150 GHz, and some of these clusters were also resolved by the observations. Most of the VSZS sample were detected as SZE signals for the first time. Although Abell 3667 and 1E0657-56 had been detected previously, these were now detected at two frequencies for the first time.
We have added the results of the four fully analyzed VSZS clusters to the Y-T relation of Bonamente et al. (2007) and have found our points to lie well within the scatter of the relation, except for cluster A3112, which has possible radio-source contamination. We have also found that cluster temperatures estimated from the Y-T relation are better overall at tracing the X-ray spectral temperature than the Lx-T derived temperatures.
3

Sokrut, Nikolay. „The Integrated Distributed Hydrological Model, ECOFLOW- a Tool for Catchment Management“. Doctoral thesis, Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237.

4

Verbeek, Benjamin. „Maximum Likelihood Estimation of Hyperon Parameters in Python : Facilitating Novel Studies of Fundamental Symmetries with Modern Software Tools“. Thesis, Uppsala universitet, Institutionen för materialvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446041.

Abstract:
In this project, an algorithm has been implemented in Python to estimate the parameters describing the production and decay of a spin-1/2 baryon-antibaryon pair. This decay can give clues about a fundamental asymmetry between matter and antimatter. A model-independent formalism, developed by the Uppsala hadron physics group and previously implemented in C++, has been shown to be a promising tool in the search for physics beyond the Standard Model (SM) of particle physics. The program developed in this work provides a more user-friendly alternative and is intended to motivate further use of the formalism through a more maintainable, customizable and readable implementation. The hope is that this will expedite future research in the area of charge-parity (CP) violation and eventually lead to answers to questions such as why the universe consists of matter. A Monte Carlo integrator is used for normalization and a Python library for function minimization. The program returns an estimate of the physics parameters, including error estimates. Tests of statistical properties of the estimator, such as consistency and bias, have been performed. To speed up the implementation, the just-in-time compiler Numba has been employed, which resulted in a speed increase of a factor of 400 compared to plain Python code.
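The estimation approach described here, minimizing a negative log-likelihood with a Python optimizer, can be sketched on a toy one-parameter angular distribution. The distribution f(x) = (1 + αx)/2 and all names below are illustrative assumptions, not the Uppsala formalism itself:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def sample_decay(alpha, n):
    # Rejection sampling from the toy pdf f(x) = (1 + alpha*x)/2 on [-1, 1]
    out = []
    while len(out) < n:
        x = rng.uniform(-1, 1, n)
        u = rng.uniform(0, (1 + abs(alpha)) / 2, n)
        out.extend(x[u < (1 + alpha * x) / 2])
    return np.array(out[:n])

def fit_alpha(x):
    # Maximum likelihood: minimize the negative log-likelihood over alpha
    nll = lambda a: -np.sum(np.log((1 + a * x) / 2))
    return minimize_scalar(nll, bounds=(-0.99, 0.99), method="bounded").x

data = sample_decay(0.4, 20000)   # "true" asymmetry parameter 0.4
alpha_hat = fit_alpha(data)
print(alpha_hat)
```

The real implementation adds Monte Carlo normalization of a multi-parameter distribution and Numba JIT compilation, but the structure is the same: a likelihood built from the data and a numerical minimizer.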
5

Miró, Roig Antoni. „DYNAMIC MATHEMATICAL TOOLS FOR THE IDENTIFICATION OF REGULATORY STRUCTURES AND KINETIC PARAMETERS IN“. Doctoral thesis, Universitat Rovira i Virgili, 2014. http://hdl.handle.net/10803/284043.

Abstract:
In this thesis we present a systematic methodology to characterize dynamic biological systems from time series data. The work gave rise to three publications. In the first, we developed a deterministic global optimization method based on outer approximation for parameter estimation in dynamic biological systems. Our method reformulates the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming (MILP) problem that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. In the second and third publications we developed a method that is able to identify the regulatory structure and its corresponding kinetic parameters from time series data. In the second publication we defined a mixed-integer dynamic optimization (MIDO) problem that minimizes the Akaike information criterion. In the third publication, we adopted a multi-criteria MIDO approach that minimizes complexity and fit simultaneously using the epsilon-constraint method, in which one objective is treated as the objective function while the rest are converted to auxiliary constraints. In both publications the MIDO problems were reformulated as mixed-integer nonlinear programming (MINLP) problems through the use of orthogonal collocation on finite elements, where binary variables are used to model the existence of regulatory interactions.
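The underlying parameter-estimation problem can be illustrated with a much simpler local approach: fitting the rate constant of an ODE to time-series data by least squares. This is only a sketch of the problem setting (the thesis's contribution is a deterministic global MILP/NLP decomposition, not shown here); the model dy/dt = -k·y and all values are assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Synthetic noisy observations of y(t) = exp(-k*t) with k = 1.5
t_obs = np.linspace(0.0, 2.0, 21)
k_true = 1.5
y_obs = np.exp(-k_true * t_obs) + rng.normal(0, 0.01, t_obs.size)

def residuals(params):
    # Simulate the ODE dy/dt = -k*y and compare against the data
    k = params[0]
    sol = solve_ivp(lambda t, y: -k * y, (0.0, 2.0), [1.0],
                    t_eval=t_obs, rtol=1e-8, atol=1e-10)
    return sol.y[0] - y_obs

fit = least_squares(residuals, x0=[0.5], bounds=(0.0, 10.0))
k_hat = fit.x[0]
print(k_hat)
```

A local solver like this can be trapped in local minima for harder kinetic models, which is precisely why the thesis pursues a deterministic global method with rigorous bounds.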
6

Murray, Paul. „Extensions of the hit-or-miss transform for feature detection in noisy images and a novel design tool for estimating its parameters“. Thesis, University of Strathclyde, 2012. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=17198.

Abstract:
The work presented in this thesis focuses on extending a transform from Mathematical Morphology, known as the Hit-or-Miss transform (HMT), in order to make it more robust for detecting features of interest in the presence of noise in digital images. The extension that is described here requires that a single parameter is determined for correct functionality. A novel design tool which allows this parameter to be accurately estimated is proposed as part of this work. An efficient method for computing the extended transform is also presented. The HMT is a well known morphological transform that is capable of identifying features in digital images. When image features contain noise, texture or some other distortion, the HMT may fail. Various researchers have extended the HMT in different ways to make it more robust to noise. The most successful, and most recent extensions of the HMT for noise robustness, use rank order operators in place of standard morphological erosions and dilations. A major issue with most of these methods is that no technique is provided for calculating the parameters that are introduced to generalise the HMT, and, in most cases, these parameters are determined empirically. In this thesis, a new conceptual interpretation of the HMT is presented which uses percentage occupancy (PO) functions to implement the erosion and dilation operators of the HMT. When implemented in this way, the strictness of these PO functions can easily be relaxed in order to allow slacker fitting of the structuring elements. Relaxing the strict conditions of the transform is shown to improve the performance of the routine when processing noisy data. This thesis also introduces a novel design tool which is derived directly from the operators that are used to implement the aforementioned PO functions. This design tool can be used to determine a suitable value for the only parameter in the proposed extension of the HMT. 
Further, it can be used to estimate parameters for other generalisations of the HMT that have been described in the literature in order to improve their noise robustness. The power of the proposed technique is demonstrated and tested using sets of very noisy images. Further, a number of comparisons are performed in order to validate the method that is introduced in this work when compared with the most recent extensions of the HMT. One drawback with this method is that a direct implementation of the technique is computationally expensive. However, it is possible to implement the proposed method using rank-order filters in place of the percentage occupancy functions. Rank order filters are used in a multitude of image processing tasks. Their application can range from simple pre-processing tasks which aim to reduce/remove noise, to more complex problems where such filters can be used in combination to detect and segment image features. There is, therefore, a need to develop fast algorithms to compute the output of this class of filter in general. A number of methods for efficiently computing the output of specific rank order filters have been presented over the years. For example, numerous fast algorithms exist that can be used for calculating the output of the median filter. Fast algorithms for calculating morphological erosions and dilations - which, like the median filter, are a special case of the more general rank order filter - have also been proposed. In this thesis, these techniques are extended and combined such that it is possible to efficiently compute any rank, using any arbitrarily shaped window, making it possible to quickly compute the output of any rank order filter. The fast algorithm which is described is compared to an optimised technique for computing the output of this class of filter, and significant gains in speed are demonstrated when using the proposed technique. 
Further, it is shown that this efficient filtering algorithm can be used to produce an extremely fast implementation of the generalised HMT that is described in this work. The fast generalised HMT is compared with a number of other extensions and generalisations of the HMT that have been proposed in the literature over the years.
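The special cases mentioned above, morphological erosion and the median as particular ranks of a general rank-order filter, can be shown directly with SciPy's generic rank filter (this illustrates the concept only, not the thesis's fast algorithm or its design tool):

```python
import numpy as np
from scipy.ndimage import rank_filter, minimum_filter, median_filter

img = np.array([[1, 2, 3],
                [4, 100, 6],     # 100 models an impulse-noise pixel
                [7, 8, 9]], dtype=float)

# Morphological erosion is the rank-0 (minimum) filter
eroded = rank_filter(img, rank=0, size=3)
assert np.array_equal(eroded, minimum_filter(img, size=3))

# The median is the middle rank (rank 4 of the 9 values in a 3x3 window)
med = rank_filter(img, rank=4, size=3)
assert np.array_equal(med, median_filter(img, size=3))

# Relaxing a strict erosion to rank 1 tolerates one outlying pixel per window,
# which is the kind of slacker fitting the generalised HMT exploits
relaxed = rank_filter(img, rank=1, size=3)
print(relaxed[1, 1])
```

At the centre pixel the strict minimum is 1, while the relaxed rank-1 value is 2: one noisy sample no longer dominates the response.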
7

Sahin, Haci Bayram. „Analysing Design Parameters Of Hydroelectric Power Plant Projects To Develop Cost Decision Models By Using Regresion And Neural Network Tools“. Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611462/index.pdf.

Abstract:
Energy is increasingly important in today's world. Rising energy consumption, driven by technological development and the Earth's dense population, contributes to the greenhouse effect. One of the most valuable energy sources is hydro energy. Because of limited energy sources and excessive energy usage, the cost of energy is rising. There are many ways to generate electricity. Among electricity generation units, hydroelectric power plants are very important, since they are renewable energy sources and have no fuel cost. Electricity is one of the most expensive inputs in production. Every hydro energy potential should therefore be carefully evaluated before an investment is made. To decide whether a hydroelectric power plant investment is feasible or not, the project cost and the amount of electricity generated should be precisely estimated. This study is about cost estimation of hydroelectric power plant projects. Many design parameters and the complexity of construction affect the cost of hydroelectric power plant projects. In this thesis, fifty-four hydroelectric power plant projects are analyzed. The data set is analyzed using regression analysis and artificial neural network tools. As a result, two cost estimation models have been developed to determine hydroelectric power plant project cost at an early stage of the project.
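The regression side of such a cost model can be sketched with ordinary least squares on synthetic data. The design parameters, coefficients, and sample size below are invented for illustration and bear no relation to the actual fifty-four projects:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical design parameters: installed capacity (MW) and dam height (m)
capacity = rng.uniform(10, 500, 54)
height = rng.uniform(20, 150, 54)
# Made-up "true" cost relationship with noise (illustration only)
cost = 2.0 * capacity + 0.8 * height + 50 + rng.normal(0, 5, 54)

# Ordinary least squares: cost ~ b0 + b1*capacity + b2*height
X = np.column_stack([np.ones(54), capacity, height])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((cost - pred) ** 2) / np.sum((cost - np.mean(cost)) ** 2)
print(beta, r2)
```

A neural network model, the thesis's second tool, would replace the linear design matrix with a learned nonlinear mapping but would be fitted to the same parameter-cost pairs.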
8

Guerrero, José-Luis. „Robust Water Balance Modeling with Uncertain Discharge and Precipitation Data : Computational Geometry as a New Tool“. Doctoral thesis, Uppsala universitet, Luft-, vatten och landskapslära, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-190686.

Abstract:
Models are important tools for understanding the hydrological processes that govern water transport in the landscape and for prediction at times and places where no observations are available. The degree of trust placed in models, however, should not exceed the quality of the data they are fed with. The overall aim of this thesis was to tune the modeling process to account for the uncertainty in the data, by identifying robust parameter values using methods from computational geometry. The methods were developed and tested on data from the Choluteca River basin in Honduras. Quality control of precipitation and discharge data resulted in the rejection of 22% of daily raingage data and the complete removal of one of the seven discharge stations analyzed. The raingage network was not found sufficient to capture the spatial and temporal variability of precipitation in the Choluteca River basin. The temporal variability of discharge was evaluated through a Monte Carlo assessment of the rating-equation parameter values over a moving time window of stage-discharge measurements. All hydrometric stations showed considerable temporal variability in the stage-discharge relationship, which was largest for low flows, albeit with no common trend. The problem of limited data quality was addressed by identifying robust model parameter values within the set of well-performing (behavioral) parameter-value vectors with computational-geometry methods. The hypothesis that geometrically deep parameter-value vectors within the behavioral set were hydrologically robust was tested, and verified, using two depth functions. Deep parameter-value vectors tended to perform better than shallow ones, were less sensitive to small changes in their values, and were better suited to temporal transfer. Depth functions rank multidimensional data. Methods to visualize the multivariate distribution of behavioral parameters based on the ranked values were developed.
It was shown that, by projecting along a common dimension, the multivariate distributions of behavioral parameters for models of varying complexity could be compared using the proposed visualization tools. This has the potential to aid in the selection of an adequate model structure considering the uncertainty in the data. These methods made it possible to quantify observational uncertainties. Geometric methods have only recently begun to be used in hydrology. It was shown that they can be used to identify robust parameter values, and some of their potential uses were highlighted.
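The idea of ranking behavioral parameter-value vectors by a depth function and selecting a deep (robust) one can be sketched with a simple Mahalanobis-type depth. The thesis uses proper computational-geometry depth functions, so this is only an illustrative stand-in, with invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical set of behavioral parameter-value vectors (n x d)
params = rng.normal(loc=[1.0, 5.0], scale=[0.1, 0.5], size=(200, 2))

def mahalanobis_depth(points):
    # Depth = 1 / (1 + squared Mahalanobis distance to the sample mean);
    # central vectors get depth near 1, outlying ones near 0
    mu = points.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(points, rowvar=False))
    diff = points - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return 1.0 / (1.0 + d2)

depth = mahalanobis_depth(params)
robust = params[np.argmax(depth)]   # deepest vector = robust candidate
print(robust)
```

Any depth function that ranks multidimensional points from the centre outward can be substituted here; the selection step stays the same.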
9

Sowgath, Md Tanvir. „Neural network based hybrid modelling and MINLP based optimisation of MSF desalination process within gPROMS : development of neural network based correlations for estimating temperature elevation due to salinity, hybrid modelling and MINLP based optimisation of design and operation parameters of MSF desalination process within gPROMS“. Thesis, University of Bradford, 2007. http://hdl.handle.net/10454/10998.

Abstract:
Desalination technology provides fresh water to arid regions around the world. The Multi-Stage Flash (MSF) distillation process has been used for many years and is now the largest sector in the desalination industry. The Top Brine Temperature (TBT), the boiling point temperature of the feed seawater in the first stage of the process, is one of the many important parameters that affect the optimal design and operation of MSF processes. For a given pressure, the TBT is a function of the Boiling Point Temperature (BPT) at zero salinity and the Temperature Elevation (TE) due to salinity. Modelling plays an important role in the simulation, optimisation and control of MSF processes, and within the model the calculation of TE is therefore important for each stage (including the first stage, which determines the TBT). Firstly, in this work, several Neural Network (NN) based correlations for predicting TE are developed. It is found that the NN based correlations can predict the experimental TE very closely, and their predictions also compare well with those obtained using existing correlations from the literature. Secondly, a hybrid steady-state MSF process model is developed using the gPROMS modelling tool, embedding the NN based correlation. gPROMS provides an easy and flexible platform to build a process flowsheet graphically. Here, a Master Model connecting (automatically) the individual unit model equations (brine heater, stages, etc.) is developed, which is used repeatedly during simulation and optimisation. The model is validated against published results. Seawater, the main raw material for MSF processes, is subject to seasonal temperature variation. With a fixed design, the model is then used to study the effect of a number of parameters (e.g. seawater and steam temperature) on the freshwater production rate. It is observed that variation in these parameters affects the rate of production of fresh water.
How the design and operation are to be adjusted to maintain a fixed demand for fresh water throughout the year (with changing seawater temperature) is also investigated via repetitive simulation. Thirdly, with a clear understanding of the interaction of design and operating parameters, simultaneous optimisation of the design and operating parameters of the MSF process is considered via the application of the MINLP technique within gPROMS. Two types of optimisation problems are considered: (a) for a fixed fresh water demand throughout the year, the external heat input (a measure of operating cost) to the process is minimised; (b) for different fresh water demands throughout the year and with seasonal variation of seawater temperature, the total annualised cost of desalination is minimised. It is found that seasonal variation in seawater temperature results in significant variation in the design and some of the operating parameters, but with minimal variation in process temperatures. The results also reveal the possibility of designing stand-alone flash stages, which would offer flexible scheduling in terms of the connection of various units (to build up the process) and efficient maintenance of the units throughout the year as weather conditions change. In addition, operation at low temperatures throughout the year will reduce design and operating costs in terms of low-temperature materials of construction and a reduced amount of anti-scaling and anti-corrosion agents. Finally, an attempt was made to develop a hybrid dynamic MSF process model incorporating the NN based correlation for TE. The model was validated at steady-state conditions using data from the literature. Dynamic simulation with step changes in seawater and steam temperature was carried out to match the predictions of the steady-state model.
A dynamic optimisation problem is then formulated for the MSF process, subjected to seawater temperature changes (up and down) over a period of six hours, to maximise the performance ratio by optimising the brine heater steam temperature while maintaining a fixed water demand.
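The first step, an NN-based correlation fitted to TE data, can be sketched as a small feedforward network trained by gradient descent on a synthetic TE surface. The network size, the target function, and all values are assumptions for illustration, not the thesis's correlations:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic training data: normalised salinity s and temperature t in [0, 1],
# with a made-up smooth "temperature elevation" surface (illustration only)
s = rng.uniform(0, 1, 200)
t = rng.uniform(0, 1, 200)
X = np.column_stack([s, t])
y = 0.5 * s + 0.2 * s * t

# One hidden layer of tanh units, linear output, trained by full-batch GD
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = 0.0
lr = 0.2
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # hidden-layer activations
    pred = (h @ W2).ravel() + b2       # network output
    err = pred - y
    # Backpropagation of the mean-squared-error gradient
    gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean()
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean(err ** 2)
print(mse)
```

Once trained, such a correlation is just an algebraic function of its inputs, which is what lets it be embedded inside a gPROMS flowsheet model.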
10

Beek, Jaap van de. „Estimation of synchronization parameters“. Licentiate thesis, Luleå tekniska universitet, Signaler och system, 1996. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-16971.

Abstract:
This thesis deals with the estimation of synchronization parameters in Orthogonal Frequency Division Multiplexing (OFDM) communication systems and in active ultrasonic measuring systems. Estimation methods for the timing and frequency offset and for the attenuation taps of the frequency-selective channel are presented and investigated. In OFDM communication systems, the estimation of the timing offset of the transmitted data frame is one important parameter. This offset provides the receiver with a means of synchronizing its sampling clock to that of the transmitter. A second important parameter is the offset in the carrier frequency used by the receiver to demodulate the received signal. For OFDM systems using a cyclic prefix, the joint Maximum Likelihood (ML) estimation of the timing and carrier frequency offset is introduced. The redundancy introduced by the prefix is exploited optimally. This novel method is derived for a non-dispersive channel. Its performance, however, is also evaluated for a frequency-selective Rayleigh-fading radio channel. Time dispersion causes an irreducible error floor in this estimator's performance. This error floor is the limiting factor for the applicability of the timing estimator. Depending on the requirements, it may be used in either an acquisition or a tracking mode. For the frequency estimator, the error floor is low enough to allow for stable frequency tracking. A low-complexity variant of the timing offset estimator is presented, allowing a simple implementation. This is the ML estimator, given a 2-bit representation of the received signal as the sufficient statistics. Its performance is evaluated for a frequency-selective Rayleigh-fading radio channel and for a twisted-pair copper channel.
Simulations show this estimator to have a similar error floor to the full-resolution ML estimator. The problem of estimating the propagation time of a signal is also of interest in active pulse-echo systems, such as are used in, e.g., radar, medical imaging, and geophysics. The Minimum Mean Squared Error (MMSE) estimator of arrival time is derived and investigated for an active airborne ultrasound measurement system. Besides performing better than the conventional Maximum a Posteriori (MAP) estimator, this method can be used to develop different estimators in situations where the system Signal-to-Noise Ratio (SNR) is unknown. Coherent multi-amplitude OFDM receivers generally need to compensate for a frequency-selective channel in order to detect transmitted data symbols reliably. For this purpose, a channel equalizer needs to be fed estimates of the subchannel attenuations. The linear MMSE estimator of these attenuations is presented. Of all linear estimators, this estimator optimally makes use of the frequency correlation between the subchannel attenuations. Low-complexity modified estimators are proposed and investigated. The proposed modifications cause an irreducible error floor in this estimator's performance, but simulations show that for SNR values up to 20 dB, the improvement of a modified estimator compared to the Least Squares (LS) estimator is at least 3 dB.
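The cyclic-prefix-based joint ML estimator described above can be sketched as follows: the receiver correlates samples a full symbol apart, the peak magnitude of the correlation gives the timing offset, and the phase at the peak gives the fractional frequency offset. This simplified sketch drops the SNR-dependent energy term of the full ML metric, and all signal parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
N, L = 64, 16                     # FFT size and cyclic prefix length

# Build one OFDM symbol with a cyclic prefix
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
sym = np.fft.ifft(qpsk) * np.sqrt(N)
tx = np.concatenate([sym[-L:], sym])          # prepend the CP

theta_true, eps_true = 30, 0.1                # timing / frequency offsets
r = np.zeros(140, dtype=complex)
r[theta_true:theta_true + N + L] = tx
n = np.arange(140)
r = r * np.exp(2j * np.pi * eps_true * n / N) + \
    (rng.normal(0, 0.005, 140) + 1j * rng.normal(0, 0.005, 140))

# CP correlation metric: gamma(m) = sum_{k=m}^{m+L-1} r[k] * conj(r[k+N])
gamma = np.array([np.sum(r[m:m + L] * np.conj(r[m + N:m + N + L]))
                  for m in range(140 - N - L)])
theta_hat = int(np.argmax(np.abs(gamma)))     # timing estimate
eps_hat = -np.angle(gamma[theta_hat]) / (2 * np.pi)  # frequency estimate
print(theta_hat, eps_hat)
```

Because the CP repeats N samples later, the correlation is phase-rotated by exactly 2πε, which is why the peak's argument recovers the frequency offset (up to ±0.5 subcarrier spacings).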
11

Richter, Andreas. „Estimation of radio channel parameters“. Ilmenau : ISLE, 2005. http://deposit.d-nb.de/cgi-bin/dokserv?idn=981051421.

12

Jhunjhunwala, Manish. „Software tool for reliability estimation“. Morgantown, W. Va. : [West Virginia University Libraries], 2001. http://etd.wvu.edu/templates/showETD.cfm?recnum=1801.

Abstract:
Thesis (M.S.)--West Virginia University, 2001.
Title from document title page. Document formatted into pages; contains x, 125 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 72-74).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
13

Mateo, Rosado Yamily. „Développement d'un outil de détermination de cinétiques par microcalorimétrie différentielle en flux continu : application aux réactions catalytiques hétérogènes“. Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR39.

Der volle Inhalt der Quelle
Annotation:
Cette thèse propose une nouvelle méthodologie expérimentale basée sur la microcalorimétrie différentielle à balayage (DSC) en configuration en flux continu, pour la détermination des paramètres cinétiques dans les réactions chimiques. La méthodologie permet d’obtenir une estimation rapide et précise de ces paramètres en mesurant la chaleur libérée ou absorbée au cours de la réaction. L’utilisation d’un calorimètre DSC à flux continu permet de travailler avec de petites quantités d’échantillon et de réaliser la réaction dans la zone de mesure, ce qui offre l’avantage de mesurer précisément la puissance thermique générée, tout en contrôlant les conditions de température et de pression. La méthodologie détaille toutes les étapes à suivre et dégage les points critiques à examiner et valider pour que l’étude cinétique soit valable. Cette méthodologie a été appliquée à la réaction d’hydrogénation catalytique du CO₂ en méthane, un processus exothermique rapide, en utilisant un catalyseur Ni/Al₂O₃. Cette réaction a permis d’évaluer l’importance du choix du modèle de réacteur pour l’obtention des paramètres cinétiques en considérant la zone de mesure comme un réacteur parfaitement agité continu (RPAC) ou un réacteur piston. L’estimation des paramètres cinétiques, tels que les facteurs pré-exponentiels, les énergies d’activation et les ordres de réaction, a été réalisée au moyen d’un algorithme génétique qui minimise la différence entre la puissance thermique expérimentale et la puissance thermique calculée. La méthodologie permet donc de sélectionner parmi plusieurs modèles cinétiques le plus représentatif des résultats expérimentaux. La méthodologie intègre aussi l’analyse de la sensibilité des paramètres vis-à-vis de l’exactitude du modèle cinétique. Les résultats obtenus montrent un bon accord entre les données expérimentales et les prédictions du modèle, validant l’efficacité de cette méthodologie pour l’analyse cinétique des réactions hétérogènes en flux continu. 
Cette approche représente une alternative efficace aux méthodes traditionnelles, ouvrant de nouvelles possibilités dans l’optimisation des procédés catalytiques et l’étude des systèmes complexes. Cette thèse présente un outil innovant basé sur la calorimétrie différentielle en flux continu, capable de générer des données cinétiques fiables, facilitant les progrès dans le domaine de la catalyse et de la cinétique chimique.
This thesis proposes a new experimental methodology based on differential scanning calorimetry (DSC) in a continuous flow configuration, for the determination of kinetic parameters in chemical reactions. The methodology provides a rapid and accurate estimate of these parameters by measuring the heat released or absorbed during the reaction. The use of a continuous-flow DSC calorimeter enables us to work with small quantities of sample and to carry out the reaction in the measurement zone, offering the advantage of accurately measuring the heat power generated, while controlling temperature and pressure conditions. The methodology details all the steps to be followed and outlines the critical points to be examined and validated if the kinetic study is to be valid. This methodology was applied to the catalytic hydrogenation of CO₂ to methane, a rapid exothermic process, using a Ni/Al₂O₃ catalyst. This reaction was used to assess the importance of the choice of reactor model for obtaining the kinetic parameters by considering the measurement zone as a Continuous Stirred Tank Reactor (CSTR) or a plug flow reactor. The kinetic parameters, such as pre-exponential factors, activation energies and reaction orders, were estimated using a genetic algorithm that minimizes the difference between the experimental thermal power and the calculated thermal power. The methodology therefore makes it possible to select from several kinetic models the one that is most representative of the experimental results. The methodology also includes an analysis of the sensitivity of the parameters to the accuracy of the kinetic model. The results obtained show good agreement between the experimental data and the model predictions, validating the effectiveness of this methodology for the kinetic analysis of heterogeneous reactions in continuous flow. 
This approach represents an effective alternative to traditional methods, offering new possibilities in the optimization of catalytic processes and the study of complex systems. This thesis presents an innovative tool based on differential calorimetry in continuous flow, capable of generating reliable kinetic data, facilitating progress in the field of catalysis and chemical kinetics.
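The fitting step described above, a genetic algorithm minimizing the gap between measured and modelled thermal power, can be sketched as follows. The rate law, every numerical value, and the GA settings below are invented for illustration and are not those of the thesis; reparameterizing the Arrhenius law around a reference temperature is a common trick to decorrelate the pre-exponential factor from the activation energy.

```python
import numpy as np

rng = np.random.default_rng(1)
R = 8.314  # gas constant, J/(mol K)

# Hypothetical first-order rate in reference-temperature Arrhenius form.
T_ref = 575.0
def thermal_power(T, log_kref, Ea, dH=1.65e5):
    return dH * np.exp(log_kref - (Ea / R) * (1.0 / T - 1.0 / T_ref))

# Synthetic "experimental" thermal power with 1% measurement noise.
T = np.linspace(500.0, 650.0, 30)     # K
true = (-6.8, 8.0e4)                  # log k_ref, Ea [J/mol] (illustrative)
q_exp = thermal_power(T, *true) * (1 + 0.01 * rng.standard_normal(T.size))

def loss(p):
    return np.mean((thermal_power(T, *p) - q_exp) ** 2)

# Minimal elitist genetic algorithm: keep the best, cross over, mutate.
pop = np.column_stack([rng.uniform(-10, -4, 60), rng.uniform(4e4, 1.2e5, 60)])
for _ in range(150):
    elite = pop[np.argsort([loss(p) for p in pop])[:10]]
    parents = elite[rng.integers(0, 10, size=(50, 2))]            # random elite pairs
    children = parents.mean(axis=1)                               # arithmetic crossover
    children += rng.standard_normal(children.shape) * [0.05, 1e3]  # Gaussian mutation
    pop = np.vstack([elite, children])

best = min(pop, key=loss)
print(round(best[0], 2), round(best[1] / 1e3, 1), "kJ/mol")
```

A model-selection loop as in the thesis would repeat this fit for each candidate rate law and compare residuals.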
APA, Harvard, Vancouver, ISO und andere Zitierweisen
14

Salehpour, Soheil. „Applied estimation of piecewise constant parameters“. Doctoral thesis, Luleå tekniska universitet, Signaler och system, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-16945.

Der volle Inhalt der Quelle
Annotation:
A time-varying linear system is a realistic description of many industrial processes, and nonlinear behavior can then also be accounted for. We can thus consider a linear system with time-varying parameters as the model uncertainty, e.g. an AR(X) model or an affine input-output approximation. In this thesis, we seek to estimate the parameters of the linear time-varying system for two purposes: 1) as uncertainty bounds for use in robust control; 2) for fault detection and isolation.

Robustness is a necessary property of a control system in an industrial environment, due to changes of the process such as changes of material quality, aging of equipment, replacement of instruments, manual operation (e.g. a valve that is opened or closed), etc. The uncertainty associated with the nominal process model is a concern in most approaches to robust control. One purpose of this research is to obtain a bound on the uncertainty by using a set of measurement data.

Change detection is a quite active field, both in research and applications. Faults occur in almost all systems, and change detection often aims to locate the fault occurrence in time and to raise an alarm. Examples of faults in industry are leakage of a valve, clogging of a valve, or faults in measurement instruments. The second purpose of this research is thus to detect faults under the assumption that these are manifested as abrupt parameter changes.

Many time-varying changes or faults of industrial processes can be described as abrupt changes in parameters. The approach is to model them as piecewise constant parameters, which results in a sparse structure of their derivative. This property serves as a cost and a regularization on flexibility in the parameter estimation. Sparsity can be approximated in different ways, e.g. with the l_q-norm for q ≤ 1. We present several online methods to estimate piecewise constant parameters, based on these approximators.

As an application, the parameters of a pump in the flue gas desulphurization process at the Luossavaara-Kiirunavaara AB (LKAB) facility in Malmberget, Sweden, are estimated for the purpose of detecting whether the pump is coated or worn. We also present an exact solution of sparsity maximization by using MILP (Mixed Integer Linear Programming) to minimize the number of non-zero elements in a matrix or vector. The method is used as a regularization to estimate the time-varying parameters of an AR(X) model. We specifically apply it to detect faults in a blender's hinged-outflow valve, which is part of LKAB's pelletization process. Proper function of this valve is essential for the mixing of bentonite and slurry and thus for the quality of the iron ore pellets. Simulation with measurement data from the LKAB facility at Malmberget, Sweden, shows the viability of the algorithm. We also consider a time-varying time-delay first-order process model, where the gain, time constant, and time delay are treated as uncertainties. An estimate of the perturbations is produced based on the MILP method. The Padé approximation and the orthogonal collocation method are used to approximate the delay. An overhead crane with uncertain parameters is also used as an illustrative example.
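The core observation, that abrupt faults make the parameter derivative sparse, can be illustrated on a toy gain model. The thesis penalizes the parameter increments with an l_q-norm or MILP; the sliding-window estimate below merely shows why such a penalty is natural, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# y_t = a_t * u_t + e_t with a piecewise constant gain a_t: its derivative
# is zero everywhere except at the single change point (a sparse structure).
n, w, jump = 400, 40, 250
a = np.where(np.arange(n) < jump, 0.8, 1.4)     # true piecewise constant parameter
u = rng.standard_normal(n)
y = a * u + 0.05 * rng.standard_normal(n)

# Sliding-window least squares gives a local estimate of the gain.
a_hat = np.array([u[t:t + w] @ y[t:t + w] / (u[t:t + w] @ u[t:t + w])
                  for t in range(n - w)])

# The increments of the estimate are near zero away from the jump and large
# only for the windows that straddle it -- the sparsity that the l1/MILP
# regularizers exploit.
d = np.abs(np.diff(a_hat))
t_change = int(np.argmax(d))     # window index with the largest increment

print(t_change, float(np.median(d)), float(d.max()))
```

A sparsity-regularized estimator replaces the fixed window with a penalty on these increments, so the estimate stays flat between faults and jumps exactly once.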



Modelling of complex dynamic systems
APA, Harvard, Vancouver, ISO und andere Zitierweisen
15

Miller, Eric Lawrence. „Statistical estimation of atmospheric transmission parameters“. Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/74845.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1992.
Includes bibliographical references (leaves 150-156).
by Eric Lawrence Miller.
M.S.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
16

Larsson, Cahlin Sofia. „Real-Time Estimation of Aerodynamic Parameters“. Thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-127467.

Der volle Inhalt der Quelle
Annotation:
Extensive testing is performed when a new aircraft is developed. Flight testing is costly and time consuming but there are aspects of the process that can be made more efficient. A program that estimates aerodynamic parameters during flight could be used as a tool when deciding to continue or abort a flight from a safety or data collecting perspective. The algorithm of such a program must function in real time, which for this application would mean a maximum delay of a couple of seconds, and it must handle telemetric data, which might have missing samples in the data stream. Here, a conceptual program for real-time estimation of aerodynamic parameters is developed. Two estimation methods and four methods for handling of missing data are compared. The comparisons are performed using both simulated data and real flight test data. The first estimation method uses the least squares algorithm in the frequency domain and is based on the chirp z-transform. The second estimation method is created by adding boundary terms in the frequency domain differentiation and instrumental variables to the first method. The added boundary terms result in better estimates at the beginning of the excitation and the instrumental variables result in a smaller bias when the noise levels are high. The second method is therefore chosen in the algorithm of the conceptual program as it is judged to have a better performance than the first. The sequential property of the transform ensures functionality in real-time and the program has a maximum delay of just above one second. The four compared methods for handling missing data are to discard the missing data, hold the previous value, use linear interpolation or regard the missing samples as variations in the sample time. The linear interpolation method performs best on analytical data and is compared to the variable sample time method using simulated data. 
The results of the comparison using simulated data vary depending on the other implementation choices, but neither method is found to give unbiased results. In the conceptual program, the variable sample time method is chosen, as it gives a lower variance and is preferable from an implementation point of view.
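Two of the four missing-data strategies compared above, holding the previous value and linear interpolation, can be contrasted on a toy smooth signal. The signal and drop-out rate below are invented; on smooth telemetry-like data, interpolation should give the smaller reconstruction error, consistent with its good performance on analytical data.

```python
import numpy as np

rng = np.random.default_rng(3)

# A smooth telemetry-like signal with roughly 10% of samples dropped at random.
t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
missing = rng.random(t.size) < 0.10
missing[[0, -1]] = False          # keep endpoints so interpolation is defined

# Strategy 1: hold the previous value across each gap.
hold = x.copy()
for i in np.flatnonzero(missing):
    hold[i] = hold[i - 1]

# Strategy 2: linear interpolation across the gaps.
interp = x.copy()
interp[missing] = np.interp(t[missing], t[~missing], x[~missing])

# Reconstruction error on the dropped samples only.
def rmse(y):
    return float(np.sqrt(np.mean((y[missing] - x[missing]) ** 2)))

print(rmse(hold), rmse(interp))
```

The variable-sample-time approach chosen in the thesis avoids reconstruction altogether by treating the gaps as irregular sampling, which this toy comparison does not model.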
APA, Harvard, Vancouver, ISO und andere Zitierweisen
17

Lucyshyn, Robert. „Estimation of inertial parameters of robotic manipulators“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0019/NQ44500.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
18

Aboussouan, Patrick. „Frequency response estimation of manipulator dynamic parameters“. Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65927.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
19

倪鴻文 und Hung-man Ngai. „Estimation of parameters in incomplete compositional data“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B31208836.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
20

Ildiz, Faith. „Estimation of motion parameters from image sequences“. Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28176.

Der volle Inhalt der Quelle
Annotation:
Image motion analysis algorithms that generate the two-dimensional velocity of objects in a sequence of images are developed. The algorithms considered consist of: the parallel extended Kalman filter method; the spatiotemporal gradient methods; the spatiotemporal frequency methods; and the one-dimensional FFT methods. These algorithms are designed to perform on low signal-to-noise-ratio images. Each of these algorithms is applied to a sequence of computer-generated images with varying signal-to-noise ratios. Simulations are used to evaluate the performance of each algorithm. (Author)
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Fu, Zhe. „Parameters estimation with coprime samplers and arrays“. Thesis, Nantes, 2020. http://www.theses.fr/2020NANT4027.

Der volle Inhalt der Quelle
Annotation:
Les réseaux et les capteurs éparse attirent de plus en plus l'attention en raison de leur capacité à augmenter les DOFs. La DOA ou la fréquence des signaux peut être estimée avec peu de capteurs d'antenne ou quelques échantillons sub-Nyquist collectés. Dans cette thèse, nous nous concentrons sur une structure bien reconnue, la configuration coprime, pour estimer la DOA ou la fréquence des signaux. Nous étudions d'abord l'échantillonnage coprime pour l'estimation de la fréquence et nous avons mis en évidence le phénomène de perte de propriété en diagonale à cause duquel l'estimation échoue totalement. Pour remédier à ce problème, nous proposons un mécanisme i ntroduisant un délai aléatoire pour garantir l'efficacité des méthodes basées sur l'échantillonnage coprime. Ensuite, nous développons également un schéma d'échantillonnage coprime à taux multiples afin d'utiliser pleinement les informations contenues dans les échantillons. En plus de l'estimation de la fréquence, nous travaillons également sur l'estimation de la DOA avec un réseau coprime. Nous réorganisons la structure des réseaux coprimes pour augmenter encore les DOFs sans introduire de coût matériel supplémentaire. Puis, nous introduisons des cumulants de quatrième ordre dans l'estimation de la DOA active avec le radar MIMO coprime. Finalement, nous optimisons la géométrie du radar MIMO en utilisant des cumulants de quatrième ordre et les DOFs peuvent être augmentés de manière significative
Sparse arrays and sparse sensing attract increasing attention due to their capability to increase the DOFs: the DOA or the frequency of signals can be estimated with few antenna sensors or few collected sub-Nyquist samples. In this dissertation, we focus on one of the most recognized sparse structures, the coprime configuration, to estimate the DOA or the frequency of signals. We first investigate coprime sampling for frequency estimation and encounter a diagonal-property-loss phenomenon under which the estimation fails entirely. To address this problem, we propose a random-delay-based mechanism to ensure the effectiveness of coprime-sampling-based methods. We also develop a multi-rate coprime sampling scheme to fully utilize the information brought by coprime sampling. Apart from frequency estimation, we also work on DOA estimation with a coprime array. We rearrange the coprime array structure to further increase the DOFs without introducing additional hardware cost. Then we introduce fourth-order cumulants in active DOA estimation with coprime MIMO radar; the DOFs of MIMO radar can be enhanced by adopting the fourth-order cumulants. Finally, we optimize the MIMO radar geometry using fourth-order cumulants, and the DOFs can be significantly increased.
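The DOF increase from a coprime geometry can be checked numerically: the difference coarray of a coprime pair contains far more distinct lags than there are physical sensors. A minimal sketch for a standard two-subarray configuration follows; the pair (3, 5) is an arbitrary example, not taken from the dissertation.

```python
import numpy as np

M, N = 3, 5   # coprime integers

# Standard coprime array: one subarray at multiples of M, the other at
# multiples of N (the two share the sensor at position 0).
pos = np.union1d(M * np.arange(N), N * np.arange(2 * M))    # 2M + N - 1 sensors

# Difference coarray: all pairwise position differences (the virtual lags).
diffs = np.unique((pos[:, None] - pos[None, :]).ravel())

# Length of the contiguous lag run starting at zero -> virtual ULA aperture.
k = 0
while k + 1 in diffs:
    k += 1

print(len(pos), len(diffs), k)
```

With 10 physical sensors, the coarray already provides a contiguous virtual aperture well beyond the sensor count, which is what coarray-based DOA methods exploit.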
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Mincarelli, Diego. „Parameters and state estimation for switched systems“. Thesis, Lille 1, 2013. http://www.theses.fr/2013LIL10146.

Der volle Inhalt der Quelle
Annotation:
Les systèmes hybrides sont largement étudiés pour modéliser des systèmes issus de différents domaines de l’ingénierie. De façon générale, l’évolution des systèmes hybrides est caractérisée par la combinaison de dynamiques continues et d’événements discrets. Les exemples de systèmes hybrides incluent les réseaux, les systèmes multi-agents, certains dispositifs mécaniques ou les convertisseurs de puissance. Les recherches sur les systèmes hybrides couvrent tous les champs de la théorie du contrôle tels que l'analyse de la stabilité ou les problèmes de commande et d'observation. Dans le contexte des systèmes à commutation, qui sont une classe particulière de systèmes hybrides, cette thèse vise à étudier le problème lié à l'extraction d'informations sur les paramètres et l'état du système à partir des mesures disponibles. Ceci peut être motivé par divers objectifs: la modélisation, la commande, la détection de fautes ainsi que leur identification pour la sécurité de systèmes. Pour ces raisons, l'identification et l'observation sont au cœur des problèmes de décision et contrôle. La première partie de la thèse est consacrée à étendre l'applicabilité des méthodes basées sur l'algèbre pour l'estimation de paramètre constant en ligne, développées par l'équipe projet INRIA - Non-A, pour le cas des systèmes à paramètres constants par morceaux. À cette fin, une procédure pour l'estimation des paramètres et des temps de commutation est développée dans le cadre des systèmes à commutation. Une telle approche permet une estimation algébrique simultanée des paramètres et instants de changement. La nouveauté et l'efficacité des algorithmes proposées pour l'identification se situent principalement dans leur nature non asymptotique. La deuxième partie de la thèse aborde le problème de construction d'observateur pour estimer l'état discret ainsi que l'état continu des systèmes à commutation. 
Étant donné que les systèmes hybrides sont régis par des dynamiques à la fois continues et discrètes, la difficulté du problème d’observation s’en trouve augmentée. Par exemple, il faut gérer les discontinuités inhérentes au système, les estimées désirées doivent être obtenues relativement rapidement (entre deux commutations ou événements discrets). Ainsi, nous proposons un observateur basé sur des techniques en temps fini (modes glissants) pour la reconstruction de l'état continu et du signal de commutation (état discret) en temps fini. Enfin, nous traitons une autre classe de systèmes à commutation où les paramètres, dans chaque sous-système, sont variables dans le temps. Pour ce type de modèles, appelés switched linear parameter varying systems, nous construisons un estimateur de l'état discret en utilisant des techniques d'identification de paramètres
Hybrid systems have been widely studied in the literature and have become a powerful tool for modeling systems from many engineering fields. A common definition of hybrid systems is a combination of both continuous-time and discrete-event systems. Examples of hybrid systems include networks, multi-agent systems, mechanical devices, robot path planning, and biological systems. Research on hybrid systems covers all fields of control theory, such as stability analysis, control and observation problems, and supervision. In the context of switched systems, which are a particular class of hybrid systems, this thesis studies the problem of extracting information about the system parameters and the state from knowledge of the output. This study is motivated by various purposes: modeling, monitoring, fault detection and identification for system safety, and output feedback control. For those reasons, identification and observation are at the core of decision and control problems. The first part of the thesis is devoted to extending the applicability of the algebra-based methods for on-line constant parameter estimation, developed by the INRIA Non-A project-team, to the case of systems with piecewise constant parameters. To this end, a procedure for the estimation of the parameters and the switching times is developed in the framework of switched systems. Such an approach enables a simultaneous algebraic estimation of both parameters and change time instants. The novelty and efficiency of the proposed identification algorithms mainly lie in their non-asymptotic nature. The second part of the thesis addresses the problem of observer design for estimating the discrete and the continuous state of switched systems. Since switched systems contain a family of continuous-time systems and discrete-event systems, the evolution of their dynamics is naturally non-smooth, which increases the difficulty of the observation problem. 
For instance, the estimates have to be provided before the next switch takes place. Thus, we propose an observer based on finite-time (sliding-mode) techniques for the reconstruction of the continuous state and the switching signal (discrete state) in finite time. Finally, we deal with another class of switched systems where the parameters, in each subsystem, are time-varying. For this kind of model, called switched linear parameter-varying systems, we design an estimator for reconstructing the discrete state by using parameter identification techniques.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Longa-Peña, Penélope Alejandra. „Orbital parameters estimation for compact binary stars“. Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/75554/.

Der volle Inhalt der Quelle
Annotation:
Most stars in the Galaxy are found in multiple systems of two or more stars orbiting together. Two stars orbiting around their centre of mass are called binary stars. In close binary stars, the evolution of one star affects its companion, and evolutionary expansion of one star allows for mass exchange between the components. In most cases, the material from the less massive star forms an accretion disc around the heavier companion that has evolved into a compact stellar remnant, the final state of stellar evolution. We call these systems compact binary stars (CBs). The study of CBs is key to understanding two fundamental phenomena: accretion and the evolution of binary stars. Statistical information on CBs can be deduced by extracting common properties and characteristic system parameter distributions from observed data. But, despite being fundamental for a wide range of astronomical phenomena, our comprehension of their formation and evolution is still poor, mainly because of the limited knowledge of crucial orbital parameters. This lack of reliable orbital parameter estimates is mainly due to observational handicaps, namely, that the accretion disc outshines the system components. Astronomers have developed different techniques to overcome this, but these are often very dependent on the signal-to-noise ratio of the data, or can only be applied via target-of-opportunity programs (waiting until the target is brighter). The focus of this work is to test and develop techniques, based on indirect imaging methods, that can overcome the main observational handicaps to estimate orbital parameters of CBs. We combine these techniques with the exploitation of more “exotic” emission lines that trace the irradiated face of the donor star, namely the Ca II NIR triplet and the Bowen blend. We made use of empirical properties of Doppler tomography to estimate the values of the zero phase φ0 and the velocity of the irradiated face of the secondary star (Kem). 
We then used synthetic models accounting for an irradiated secondary to fit our measured Kem and performed a K-correction to derive the radial velocity of the secondary, K2. To derive K1, we used the centre-of-symmetry technique, testing its validity among several emission lines and the stability of the results depending on the selected area. Having strong constraints on K1 and K2, we find estimates for the mass ratio q. Furthermore, we developed a variation of the Doppler tomography secondary emission method to constrain the value of the systemic velocity γ. We derive meaningful uncertainties for these parameters with the bootstrap technique. Using these techniques, we have successfully set dynamical constraints on the radial velocities of the binary components of CBs and derived fundamental orbital parameters, including the mass ratio, using basic properties of Doppler tomography.
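The last step, deriving radial-velocity amplitudes and the systemic velocity with bootstrap uncertainties, can be sketched on synthetic data. This toy fit solves a circular-orbit sinusoid by linear least squares; all values are invented, and the thesis works on Doppler tomograms rather than directly on a velocity curve.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic radial-velocity curve of the donor star's irradiated face:
# V(phi) = gamma - K * sin(2*pi*(phi - phi0)); all numbers are illustrative.
gamma_true, K_true, phi0_true = 45.0, 280.0, 0.02   # km/s, km/s, orbital phase
phi = np.sort(rng.random(60))
v = gamma_true - K_true * np.sin(2 * np.pi * (phi - phi0_true))
v += 8.0 * rng.standard_normal(phi.size)

# Linear least squares in the basis {1, sin, cos}: the constant term is the
# systemic velocity, and the quadrature amplitudes give K (and phi0).
def fit(p, vv):
    A = np.column_stack([np.ones_like(p),
                         np.sin(2 * np.pi * p), np.cos(2 * np.pi * p)])
    c0, cs, cc = np.linalg.lstsq(A, vv, rcond=None)[0]
    return float(c0), float(np.hypot(cs, cc))

gamma_hat, K_hat = fit(phi, v)

# Bootstrap: refit resampled (phase, velocity) pairs to get uncertainties.
idx = rng.integers(0, phi.size, (500, phi.size))
boot = np.array([fit(phi[i], v[i]) for i in idx])
gamma_err, K_err = boot.std(axis=0)

print(round(gamma_hat, 1), round(K_hat, 1), round(gamma_err, 1), round(K_err, 1))
```

A K-correction as in the thesis would then map the fitted emission-line amplitude (Kem) to the true centre-of-mass velocity K2 using an irradiated-secondary model.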
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Schmid, Beat. „Sun photometry, a tool for monitoring atmospheric parameters /“. Bern : [s.n.], 1995. http://www.ub.unibe.ch/content/bibliotheken_sammlungen/sondersammlungen/dissen_bestellformular/index_ger.html.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Schneider, Gary David. „A requirements specification software cost estimation tool“. Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9952.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Kumar, Hemant, University of Western Sydney, College of Science, Technology and Environment. „Software analytical tool for assessing cardiac blood flow parameters“. THESIS_FSTA_XXX_Kumar_H.xml, 2001. http://handle.uws.edu.au:8081/1959.7/392.

Der volle Inhalt der Quelle
Annotation:
The introduction of Doppler ultrasound techniques into the Intensive Care setting has revolutionised the way haemodynamic status is monitored in the critically ill. However, in order to increase the usefulness of these techniques, the Doppler signal and its spectrum need to be further analysed in ways that facilitate a better clinical response. Extensive processing of the Doppler spectrum on diagnostic ultrasound machines is limited by real-time performance considerations. It was therefore proposed that the spectral information from these systems be extracted off-line and that a full set of analytical tools be made available to evaluate this information. This was achieved by creating an integrated and modular software tool called Spectron, which was intended as an aid in the overall management of patients. The modular nature of Spectron was intended to ensure that new analytical tools and techniques could be easily added and tested. The software provides its users with considerable latitude in choosing various data acquisition and analysis parameters to suit various clinical situations and patient requirements. Spectron was developed under the Windows environment to provide a user-friendly interface and to address a range of programming problems such as memory management and the size of the colour palettes. Spectron is able to detect the maximal velocities and compute the mean and median velocities. Relative increases in maximal velocities in cardiac blood flows after the administration of inotropic drugs have been shown in the pilot studies that were conducted. Spectron helps in obtaining estimates of aortic blood flows and in other applications such as measuring vascular impedance. Stenotic blood flows can be detected by using the spectral broadening index, and blood flow characteristics can be studied by using various blood flow indices. 
Thus, this project attempted to help in patient management by providing clinicians with a range of blood flow parameters and has succeeded in meeting its objective to a large extent.
Master of Engineering (Hons)
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Torralbo, Pilar Vicaria. „Optimized Automatic Calibration Tool for Flight Test Analogue Parameters“. International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596389.

Der volle Inhalt der Quelle
Annotation:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
Calibration processes consume a large quantity of resources: equipment and people, time and cost. As the number of calibration points increases, the resources increase to the same extent. This automatic tool, aimed at reducing these resources, has been designed for commanding, managing and analyzing in real time a large number of acquired data points coming from the specimen under calibration and the standards used in the calibration process, applying at the same time the metrological algorithms which validate each calibration point. Its greatest achievements are the implementation of the rules for accepting or discarding a data point and the level of automation of the process. In the last flight test campaign its usage was crucial for providing the data on time with the required high accuracy: almost 200 temperature parameters were commissioned in a short period of time, taking advantage of equipment whose nominal accuracy was not high enough for direct application.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Kumar, Hemant. „Software analytical tool for assessing cardiac blood flow parameters“. Thesis, View thesis, 2001. http://handle.uws.edu.au:8081/1959.7/392.

Der volle Inhalt der Quelle
Annotation:
Introduction of Doppler ultrasound techniques into the Intensive Care setting has revolutionised the way haemodynamic status is monitored in the critically ill. However, in order to increase the usefulness of these techniques, the Doppler signal and its spectrum need to be further analysed in ways to facilitate a better clinical response. Extensive processing of the Doppler spectrum on Diagnostic ultrasound machines is limited by the real time performance considerations. It was therefore proposed that the spectral information from these systems be extracted off-line and full set of analytical tools be made available to evaluate this information. This was achieved by creating an integrated and modular software tool called Spectron, which was intended as an aid in the overall management of the patients. The modular nature of Spectron was intended to ensure that new analytical tools and techniques could be easily added and tested. The software provides its users with considerable latitude in choosing various data acquisition and analysis parameters to suit various clinical situations and patient requirements. Spectron was developed under the Windows environment to provide a user friendly interface and to address a range of programming problems such as memory management and the size of the colour palettes. Spectron is able to detect the maximal velocities and compute the mean and median velocities. Relative increases in maximal velocities in cardiac blood flows after the administration of inotropic drugs have been shown in the pilot studies that were conducted. Spectron is able to help in obtaining estimates of the aortic blood flows and in other applications such measuring vascular impedance. Stenotic blood flows can be detected by using the spectral broadening index and blood flow characteristics can be studied by using various blood flow indices. 
Thus, this project attempted to help in patient management by providing clinicians with a range of blood flow parameters, and it has succeeded in meeting its objective to a large extent.
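The velocity parameters Spectron reports are moments of the Doppler power spectrum; a toy Gaussian spectrum illustrates the calculations (the broadening-index definition below is one common convention, not necessarily the one used in the thesis):

```python
import numpy as np

# Toy Doppler power spectrum: a Gaussian peak centred at 2 kHz.
f = np.linspace(0, 5000, 1001)            # Doppler shift frequencies [Hz]
p = np.exp(-0.5 * ((f - 2000) / 300)**2)  # power spectral density (arbitrary units)

# Mean velocity is proportional to the first moment of the spectrum.
f_mean = np.sum(f * p) / np.sum(p)

# Maximal frequency: last bin with power above 1% of the peak.
f_max = f[np.nonzero(p > 0.01 * p.max())[0][-1]]

# One common spectral broadening index: relative gap between max and mean.
sbi = (f_max - f_mean) / f_max

assert abs(f_mean - 2000) < 1
assert 0 < sbi < 1
```

For a symmetric spectrum the mean sits at the peak; stenotic flow broadens the spectrum and pushes the index up.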
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Kumar, Hemant. „Software analytical tool for assessing cardiac blood flow parameters /“. View thesis, 2001. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030724.122149/index.html.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Hatch, Nicholas Adam. „Radar based estimation of asymmetric target inertial parameters“. Link to electronic thesis, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-041406-103944/.

Der volle Inhalt der Quelle
Annotation:
Dissertation (Ph.D.)--Worcester Polytechnic Institute.
Keywords: estimation, radar, asymmetric, inertial, dynamics, rigid body, exo-atmospheric, free body. Includes bibliographical references (p. 150-154).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Sjödén, Therese. „Electromagnetic Modelling for the Estimation of Wood Parameters“. Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2210.

Der volle Inhalt der Quelle
Annotation:

Spiral grain in trees causes problems for the wood industry, since boards sawn from trees with a large grain angle have severe problems with form stability. Measurements of the grain angle under bark enable the optimisation of the refining process. The main objective of this thesis is to study the potential of estimating the grain angle using microwaves. To do this, electromagnetic modelling and sensitivity analysis are combined.

The dielectric properties of wood are different along and perpendicular to the wood fibres. This anisotropy is central to the estimation of the grain angle by means of microwaves. To estimate the grain angle, measurements are used together with electromagnetic modelling of the scattering from plane surfaces and cylinders. Measurement set-ups are proposed to determine the material parameters, such as the grain angle, for plane boards and cylindrical logs. For cylindrical logs both near-field and far-field measurements are investigated. In general, methods for determining material parameters exhibit large errors in the presence of noise. In this case, acceptable levels of these errors are achieved through using few material parameters in the model: the grain angle and two dielectric parameters, characterising the electrical properties parallel and perpendicular to the fibres.

From the case with plane boards, it is concluded that it is possible to make use of the anisotropy of wood to estimate the grain angle from the reflected electromagnetic field. This property then forms the basis of the proposed methods for the estimation of the grain angle in cylindrical logs. For the proposed methods, a priori knowledge of the moisture content or temperature of the wood is not needed. Furthermore, since the anisotropy persists also for frozen wood, the method is valid for temperatures below zero degrees Celsius.

For the case with cylindrical logs, sensitivity analysis is applied to the near-field as well as the far-field methods, to analyse the parameter dependence with respect to the measurement model and the errors introduced by noise. In this sensitivity analysis, the Cramér-Rao bound is used, giving the best possible variance for estimating the parameters. The levels of the error bounds are high, indicating a difficult estimation problem. However, the feasibility of accurate estimation will be improved through higher signal-to-noise ratios, repeated measurements, and better antenna gain. The sensitivity analysis is also useful as an analytical tool to understand the difficulties and remedies related to the method used for determining material parameters, as well as a practical aid in the design of a measurement set-up.

According to the thesis, grain angle estimation is possible with microwaves. The proposed methods are fast and suitable for further development for in-field use in the forest or in saw mills.
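The Cramér-Rao bound used in the sensitivity analysis above has a simple closed form in the Gaussian case, which a short Monte Carlo check can illustrate (a generic sketch, not tied to the thesis's wood-scattering model):

```python
import numpy as np

# For n Gaussian samples with known noise variance sigma^2, the Fisher
# information for the mean mu is n / sigma^2, so any unbiased estimator
# satisfies  var(mu_hat) >= sigma^2 / n  (the Cramér-Rao bound).
# The sample mean attains this bound, which Monte Carlo confirms.

rng = np.random.default_rng(1)
mu, sigma, n = 2.0, 0.5, 50
crb = sigma**2 / n                    # best possible variance

trials = 20000
estimates = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
emp_var = estimates.var()

# empirical variance of the MLE sits at (not below) the bound
assert abs(emp_var - crb) / crb < 0.05
```

A high bound, as reported in the thesis, means even the best estimator is noisy; more samples or a better signal-to-noise ratio lowers it.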


APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Unsal, Derya. „Estimation Of Deterministic And Stochastic Imu Error Parameters“. Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614059/index.pdf.

Der volle Inhalt der Quelle
Annotation:
Inertial Measurement Units, the main component of a navigation system, are used in several systems today. An IMU's main components, gyroscopes and accelerometers, can be produced at lower cost and in higher quantity. However, together with the decrease in production cost, the performance of these sensors has tended to degrade. In order to improve the performance of an IMU, error compensation algorithms have therefore been designed. Inertial sensors contain two main types of errors: deterministic errors such as scale factor, bias and misalignment, and stochastic errors such as bias instability and scale factor instability. Deterministic errors are the main part of error compensation algorithms. This thesis explains how the deterministic errors are determined from 27-state static and 60-state dynamic rate table calibration test data and how those errors are used in the error compensation model. In addition, the stochastic error parameters, the gyroscope and accelerometer bias instabilities, are modeled with a Gauss-Markov model, and instantaneous sensor bias instability values are estimated by a Kalman filter algorithm. Therefore, accelerometer and gyroscope bias instability can be compensated in real time. In conclusion, this thesis explores how IMU performance is improved by compensating the deterministic and stochastic errors. The simulation results are supported by real IMU test data.
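The stochastic part of the compensation scheme described above, a first-order Gauss-Markov bias tracked by a Kalman filter, can be sketched in a few lines (a generic scalar illustration with assumed noise levels, not the thesis's actual filter):

```python
import numpy as np

# First-order Gauss-Markov model for a slowly drifting sensor bias:
#   b_k = exp(-dt/tau) * b_{k-1} + w_k,   w_k ~ N(0, q)
# A scalar Kalman filter tracks the bias from the output of a
# stationary sensor (true rate = 0), so output = bias + white noise.

rng = np.random.default_rng(0)
dt, tau = 0.01, 100.0          # sample period [s], correlation time [s]
phi = np.exp(-dt / tau)        # state transition factor
q = 1e-8                       # process noise variance (assumed)
r = 1e-4                       # measurement noise variance (assumed)

n = 5000
bias = np.zeros(n)
for k in range(1, n):
    bias[k] = phi * bias[k-1] + rng.normal(0, np.sqrt(q))
z = bias + rng.normal(0, np.sqrt(r), n)   # raw sensor samples

b_hat, p = 0.0, 1.0            # state estimate and its variance
est = np.empty(n)
for k in range(n):
    b_hat, p = phi * b_hat, phi * phi * p + q   # predict
    kgain = p / (p + r)                          # Kalman gain
    b_hat += kgain * (z[k] - b_hat)              # update
    p *= (1.0 - kgain)
    est[k] = b_hat

err_raw = np.mean((z - bias) ** 2)
err_kf = np.mean((est - bias) ** 2)
assert err_kf < err_raw        # filtered bias beats raw samples
```

Subtracting the filtered bias estimate from each new sensor reading is what makes the real-time compensation possible.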
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Richter, Andreas [Verfasser]. „Estimation of radio channel parameters / von Andreas Richter“. Ilmenau : ISLE, 2005. http://d-nb.info/981051421/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Oh, Sang Hyon. „Estimation of Genetic Parameters for Boar Semen Traits“. NCSU, 2003. http://www.lib.ncsu.edu/theses/available/etd-04182003-114352/.

Der volle Inhalt der Quelle
Annotation:
During the last half of the 20th century, the world pork industry has achieved astonishing developments in pig breeding. Now swine farms are larger, ownership more concentrated, and farms have become more industrialized. Artificial insemination (AI) plays an important role in animal breeding, increasing utilization of genetically superior sires. Currently boars selected for commercial use as AI sires are evaluated on grow-finish performance and carcass characteristics. The objectives of this study were to (A) estimate genetic correlations between production and semen traits in the boar; average daily gain (ADG), back fat thickness (BF) and muscle depth (MD) as production traits, and total sperm cells (TSC), total concentration (TC), volume collected (SV), number of extended doses (ND), and acceptance rate of ejaculates (AR) as semen traits; (B) to model the variances and covariances of total sperm cells (× 10^9) over the active lifetime of AI boars; and (C) to compare multiple traits and random regression analyses applied to total sperm cells (TSC). Average heritability estimates were 0.39 for ADG, 0.32 for BF, 0.15 for MD, and repeatability estimates were 0.38 for SV, 0.37 for TSC, 0.09 for TC, 0.39 for ND, and 0.16 for AR. Semen traits showed negative genetic correlations with MD. Genetic correlations would indicate that current selection objectives are having a negative effect on semen traits. Therefore, current AI boar selection practices may be having a detrimental effect on semen production. In random regression analysis for total sperm cells, maximum log likelihood value was observed for sixth, fifth, and seventh order polynomials for fixed, additive genetic and permanent environmental effects, respectively. Best fit as determined by Akaike's Information Criterion was based on a model with sixth, fourth, and seventh order polynomials for fixed, additive genetic and permanent environmental effects, respectively.
Best fit as determined by Schwarz Criterion was by fitting fourth, second, and seventh order polynomials for fixed, additive genetic and permanent environmental effects, respectively. Heritability estimates for total sperm cells ranged from 0.27 to 0.61 across age of boar classifications. Heritability for total sperm cells tended to increase with age of boar classification. The cyclic nature of heritability for total sperm cells that was observed over the active lifetime of boars may be due in part to number of observations across seasons limiting our ability to correct for seasonal effects on sperm production. In MTDFREML analysis, heritability estimates at 9, 12, 15, 18, 21, 24, and 27 months of age were, respectively, 0.28, 0.29, 0.26, 0.27, 0.30, 0.79, and 0.41. The results from MTDFREML seemed to be overestimated when compared to random regression. Therefore, random regression methods are the most appropriate to analyze semen traits, as they are longitudinal data measured over the boars' lifetime.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Li, Hongfei. „Approximate profile likelihood estimation for spatial-dependence parameters“. Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1191267954.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Richter, Andreas Thomä Reiner. „Estimation of radio channel parameters : models and algorithms /“. Ilmenau : ISLE, 2005. http://www.gbv.de/dms/ilmenau/toc/500656835.PDF.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Komorowski, Michal. „Statistical methods for estimation of biochemical kinetic parameters“. Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/2770/.

Der volle Inhalt der Quelle
Annotation:
This thesis consists of four original pieces of work contained in Chapters 2, 3, 4 and 5. These cover four topics within the area of statistical methods for parameter estimation of biochemical kinetic models. Emphasis is put on integrating single-cell reporter gene data with stochastic dynamic models. Chapter 2 introduces a modelling framework based on stochastic and ordinary differential equations that addresses the problem of reconstructing transcription time course profiles and associated degradation rates from fluorescent and luminescent reporter genes. We present three case studies where the methodology is used to reconstruct unobserved transcription profiles and to estimate associated degradation rates. In Chapter 3 we use the linear noise approximation to model biochemical reactions through a stochastic dynamic model and derive an explicit formula for the likelihood function which allows for computationally efficient parameter estimation. The major advantage of the method is that, in contrast to the more established diffusion approximation based methods, the computationally costly techniques of data augmentation are not necessary. In Chapter 4 we present an inference framework for interpretation of fluorescent reporter gene data. The method takes into account stochastic variability in a fluorescent signal resulting from intrinsic noise of gene expression, extrinsic noise and kinetics of fluorescent protein maturation. Chapter 5 presents a Bayesian hierarchical model that allows us to infer distributions of fluorescent reporter degradation rates. All methods are embedded in a Bayesian framework and inference is performed using Markov chain Monte Carlo.
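Likelihood-based estimation of a kinetic parameter, the common thread of the chapters summarised above, can be illustrated with a toy degradation-rate example (a Gaussian-noise sketch with invented values, far simpler than the linear noise approximation used in the thesis):

```python
import numpy as np

# A reporter decays as x(t) = x0 * exp(-k t) and is observed with
# Gaussian noise.  With known x0 and noise sd, the log-likelihood of
# the decay rate k is a sum of Gaussian terms; here it is maximised
# on a grid rather than with a dedicated optimiser.

rng = np.random.default_rng(2)
x0, k_true, sd = 10.0, 0.3, 0.2
t = np.linspace(0, 10, 200)
y = x0 * np.exp(-k_true * t) + rng.normal(0, sd, t.size)

def loglik(k):
    resid = y - x0 * np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / sd**2   # up to an additive constant

grid = np.linspace(0.01, 1.0, 2000)
k_hat = grid[np.argmax([loglik(k) for k in grid])]

assert abs(k_hat - k_true) < 0.05            # rate recovered accurately
```

The stochastic models in the thesis replace this simple Gaussian likelihood with one derived from the reaction dynamics, but the estimation principle is the same.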
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Sullivan, Thomas M. (Thomas Michael). „Estimation of FM synthesis parameters from sampled sounds“. Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/77695.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Tan, Janice S. (Janice Sen Koon) 1978. „Estimation of cardiovascular parameters from non-invasive measurements“. Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/89923.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Atallah, Tony A. „A review on dams and breach parameters estimation“. Master's thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/37120.

Der volle Inhalt der Quelle
Annotation:
Nowadays, especially with the appearance of global warming effects, water is becoming less and less available. Here the role of water resources engineering appears: finding the means through which we can collect water. One alternative is storing water behind dams, which is why this report focuses on dam issues. The report is divided into two sections. The first section deals with the most common types of dams, the forces applied to them, their modes of failure, the environmental effects on the stream, decommissioning and other technical matters. The second part focuses on the different methods used to estimate or predict dam breaches, especially for the embankment type. These methods are applied to the case of the Timberlake Dam in Lynchburg, VA, which failed in 1995 and was rebuilt in 2000.
Master of Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Ibrahim, Rania. „Improved Estimation of Transport Parameters in the Dermis“. University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1353951749.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Mao, Xiaolei. „GPS CARRIER SIGNAL PARAMETERS ESTIMATION UNDER IONOSPHERE SCINTILLATION“. Miami University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=miami1314295002.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Prieto-Blanco, Ana. „Satellite estimation of biophysical parameters for ecological models“. Thesis, Swansea University, 2007. https://cronfa.swan.ac.uk/Record/cronfa42698.

Der volle Inhalt der Quelle
Annotation:
Ecological models are central to understanding of hydrological and carbon cycles. These models need input from Earth Observation data to function at regional to global scales. Requirements of these models and the satellite missions designed to fulfill them are reviewed to assess the present situation. The aim is to establish a better-informed framework for the design and development of future satellite missions to meet the needs of ecological modellers. Key land surface parameters that can potentially be derived by remote sensing are analysed - leaf area index, leaf chlorophyll content, the fraction of photosynthetically-active radiation absorbed by the canopy and the fractional cover - as well as the aerosol optical thickness. Three coupled models - PROSPECT, FLIGHT and 6S - are used to simulate top of the atmosphere reflectances observed in a number of viewing directions and spectral wavebands within the visible and near-infrared domains. A preliminary study provides a sensitivity analysis of the top of the atmosphere reflectances to the input parameters and to the viewing angles. Finally, a methodology that links ecological model requirements to satellite instrument capabilities is presented. The three coupled models - PROSPECT, FLIGHT and 6S - are inverted using a simple technique based on look-up tables (LUTs). The LUT is used to estimate canopy biophysical variables from remotely-sensed data observed at the top of the atmosphere with different directional and spectral sampling configurations. The retrieval uncertainty is linked with the instrument radiometric accuracy by analysing the impact of different levels of radiometric noise at the input. The parameters retrieved in the inversion are used to drive two land-surface parameterization models, Biome-BGC and JULES. The effects of different configurations and of the radiometric noise on the NPP estimated are analysed.
The technique is applied to evaluate desirable sensor characteristics for driving models of boreal forest productivity. The results are discussed in view of the definition of future satellites and the selection of the best measurement configuration for accurate estimation of canopy characteristics.
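The LUT inversion step described above can be sketched generically; the forward model below is an invented stand-in, not the PROSPECT/FLIGHT/6S chain used in the thesis:

```python
import numpy as np

# Look-up-table (LUT) inversion: run a forward model over a grid of
# biophysical parameters, then retrieve by picking the grid entry whose
# simulated reflectances are closest to the (noisy) observation.

rng = np.random.default_rng(4)

def forward(lai, cab):
    """Toy two-band 'reflectance' model of LAI and chlorophyll (made up)."""
    nir = 0.5 * (1 - np.exp(-0.6 * lai))
    red = 0.3 * np.exp(-0.4 * lai) * np.exp(-0.01 * cab)
    return np.array([red, nir])

# build the LUT over the parameter grid
lai_grid = np.linspace(0.5, 6.0, 56)
cab_grid = np.linspace(10, 80, 36)
params = np.array([(l, c) for l in lai_grid for c in cab_grid])
lut = np.array([forward(l, c) for l, c in params])

# noisy observation of a 'true' canopy (radiometric noise added)
truth = (3.0, 40.0)
obs = forward(*truth) + rng.normal(0, 0.002, 2)

# nearest-neighbour retrieval in reflectance space
best = params[np.argmin(np.sum((lut - obs)**2, axis=1))]
assert abs(best[0] - truth[0]) < 0.5      # LAI recovered to grid accuracy
```

Raising the injected noise level and repeating the retrieval is essentially how the thesis links retrieval uncertainty to instrument radiometric accuracy.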
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Jeisman, Joseph Ian. „Estimation of the parameters of stochastic differential equations“. Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16205/1/Joseph_Jesiman_Thesis.pdf.

Der volle Inhalt der Quelle
Annotation:
Stochastic differential equations (SDEs) are central to much of modern finance theory and have been widely used to model the behaviour of key variables such as the instantaneous short-term interest rate, asset prices, asset returns and their volatility. The explanatory and/or predictive power of these models depends crucially on the particularisation of the model SDE(s) to real data through the choice of values for their parameters. In econometrics, optimal parameter estimates are generally considered to be those that maximise the likelihood of the sample. In the context of the estimation of the parameters of SDEs, however, a closed-form expression for the likelihood function is rarely available and hence exact maximum-likelihood (EML) estimation is usually infeasible. The key research problem examined in this thesis is the development of generic, accurate and computationally feasible estimation procedures based on the ML principle, that can be implemented in the absence of a closed-form expression for the likelihood function. The overall recommendation to come out of the thesis is that an estimation procedure based on the finite-element solution of a reformulation of the Fokker-Planck equation in terms of the transitional cumulative distribution function (CDF) provides the best balance across all of the desired characteristics. The recommended approach involves the use of an interpolation technique proposed in this thesis which greatly reduces the required computational effort.
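The core difficulty the abstract describes, that SDE transition densities rarely have closed form, is often side-stepped with an Euler pseudo-likelihood; the sketch below uses that simpler workaround (not the thesis's finite-element Fokker-Planck method) on an Ornstein-Uhlenbeck process:

```python
import numpy as np

# Ornstein-Uhlenbeck SDE:  dX = theta*(mu - X) dt + sigma dW.
# Simulate it exactly (its transition density is Gaussian), then
# estimate theta by maximising the Euler pseudo-likelihood, i.e. the
# Gaussian likelihood of the Euler discretisation.

rng = np.random.default_rng(3)
theta_true, mu, sigma, dt, n = 2.0, 1.0, 0.3, 0.01, 50000

a = np.exp(-theta_true * dt)                      # exact AR(1) coefficient
s = sigma * np.sqrt((1 - a**2) / (2 * theta_true))
eps = rng.normal(0, s, n)
x = np.empty(n)
x[0] = mu
for i in range(1, n):
    x[i] = mu + (x[i-1] - mu) * a + eps[i]

def euler_loglik(theta):
    # Euler transition: X_{t+dt} ~ N(X_t + theta*(mu - X_t)*dt, sigma^2*dt)
    mean = x[:-1] + theta * (mu - x[:-1]) * dt
    var = sigma**2 * dt                            # does not depend on theta
    return -0.5 * np.sum((x[1:] - mean)**2) / var

grid = np.linspace(0.1, 5.0, 500)
theta_hat = grid[np.argmax([euler_loglik(th) for th in grid])]

assert abs(theta_hat - theta_true) < 0.4
```

The Euler approximation is biased for coarse time steps, which is exactly why the thesis pursues more accurate density approximations.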
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Jeisman, Joseph Ian. „Estimation of the parameters of stochastic differential equations“. Queensland University of Technology, 2006. http://eprints.qut.edu.au/16205/.

Der volle Inhalt der Quelle
Annotation:
Stochastic differential equations (SDEs) are central to much of modern finance theory and have been widely used to model the behaviour of key variables such as the instantaneous short-term interest rate, asset prices, asset returns and their volatility. The explanatory and/or predictive power of these models depends crucially on the particularisation of the model SDE(s) to real data through the choice of values for their parameters. In econometrics, optimal parameter estimates are generally considered to be those that maximise the likelihood of the sample. In the context of the estimation of the parameters of SDEs, however, a closed-form expression for the likelihood function is rarely available and hence exact maximum-likelihood (EML) estimation is usually infeasible. The key research problem examined in this thesis is the development of generic, accurate and computationally feasible estimation procedures based on the ML principle, that can be implemented in the absence of a closed-form expression for the likelihood function. The overall recommendation to come out of the thesis is that an estimation procedure based on the finite-element solution of a reformulation of the Fokker-Planck equation in terms of the transitional cumulative distribution function (CDF) provides the best balance across all of the desired characteristics. The recommended approach involves the use of an interpolation technique proposed in this thesis which greatly reduces the required computational effort.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Warwick, Jane. „Selecting tuning parameters in minimum distance estimators“. n.p., 2001. http://ethos.bl.uk/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Lermer, Toby, und University of Lethbridge Faculty of Arts and Science. „A software size estimation tool: Hellerman's complexity measure“. Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 1995, 1995. http://hdl.handle.net/10133/353.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Granefelt, Håkan. „Applicability of ISO standards & noise estimation tool“. Thesis, KTH, MWL Marcus Wallenberg Laboratoriet, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-203817.

Der volle Inhalt der Quelle
Annotation:
This study has two major purposes: (1) to determine the applicability of ISO standards for calculating sound power levels emitted by acoustic noise sources in-house at Ericsson, Kista; (2) to design a software tool for estimating the sound power levels emitted by a radio installation. To determine the applicability of different ISO standards during the design phase at Ericsson, viable standards were chosen and measurements were taken on the exact same noise source following each chosen standard. The evaluation of these measurements was made with the following points in focus:
• Accuracy
• Availability of instrumentation and measurement environments
• Time consumption
• Complexity of conducting the measurements
The ISO standard that showed most promise was ISO 3744. This standard uses a method with fairly high accuracy and is quite easy to implement compared to the other standards tested. All instrumentation required to follow this standard, except for a calibrated microphone calibrator, is available at Ericsson, Kista. The standards ISO 3747 and ISO 9614-2 might also be of interest during the design phase, if the necessary equipment is acquired. The acoustic noise estimation tool was written in the Microsoft Excel embedded programming language, Visual Basic for Applications version 7.1. With the tool it is possible to specify the layout of a radio installation for mobile traffic with different mounted radios and to estimate both the sound power level emitted from all these radio units and the sound pressure level at a distance from the installation. It is also possible to define new radio units in the tool and save them for later use. The tool was verified by acoustic measurements taken on a test setup at MWL, KTH.
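The sound power calculation that ISO 3744-style measurements feed into reduces to an energy average of surface sound pressure levels plus an area term; a simplified sketch follows (omitting the environmental corrections the standard specifies, with invented readings):

```python
import math

# Basic ISO 3744-style sound power determination: energy-average the
# sound pressure levels measured on an enveloping surface, then add
# 10*log10(S / S0) for the surface area S (S0 = 1 m^2).  Environmental
# corrections (K1, K2) are deliberately omitted here.

def sound_power_level(spl_db, surface_area_m2):
    mean_p2 = sum(10 ** (lp / 10) for lp in spl_db) / len(spl_db)
    lp_mean = 10 * math.log10(mean_p2)          # energy-averaged L_p [dB]
    return lp_mean + 10 * math.log10(surface_area_m2)

# ten hypothetical microphone positions on a hemisphere of radius 1 m,
# so S = 2 * pi * r^2
spl = [62.1, 63.0, 61.5, 62.8, 63.4, 62.0, 61.8, 62.5, 63.1, 62.2]
lw = sound_power_level(spl, 2 * math.pi * 1.0**2)
assert 68.0 < lw < 72.0                          # around 70 dB re 1 pW
```

A tool like the one described can apply this per radio unit and then energy-sum the units to get the installation's total sound power.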
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Li, Chihui. „Development of a software tool for reliability estimation“. Morgantown, W. Va. : [West Virginia University Libraries], 2009. http://hdl.handle.net/10450/10451.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S.)--West Virginia University, 2009.
Title from document title page. Document formatted into pages; contains xi, 138 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 136-138).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Xue, Huitian, und 薛惠天. „Maximum likelihood estimation of parameters with constraints in normaland multinomial distributions“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47850012.

Der volle Inhalt der Quelle
Annotation:
Motivated by problems in medicine, biology, engineering and economics, constrained parameter problems arise in a wide variety of applications. Among them, the application to the dose-response of a certain drug in development has attracted much interest. To investigate such a relationship, we often need to conduct a dose-response experiment with multiple groups associated with multiple dose levels of the drug. The dose-response relationship can be modeled by a shape-restricted normal regression. We develop an iterative two-step ascent algorithm to estimate normal means and variances subject to simultaneous constraints. Each iteration consists of two parts: an expectation-maximization (EM) algorithm that is utilized in Step 1 to compute the maximum likelihood estimates (MLEs) of the restricted means when variances are given, and a newly developed restricted De Pierro algorithm that is used in Step 2 to find the MLEs of the restricted variances when means are given. These constraints include the simple order, tree order, umbrella order, and so on. A bootstrap approach is provided to calculate standard errors of the restricted MLEs. Applications to the analysis of two real datasets on radioimmunological assay of cortisol and bioassay of peptides are presented to illustrate the proposed methods. Liu (2000) discussed the maximum likelihood estimation and Bayesian estimation in a multinomial model with simplex constraints by formulating this constrained parameter problem as an unconstrained parameter problem in the framework of missing data. To utilize the EM and data augmentation (DA) algorithms, he introduced latent variables {Z_il, Y_il} (to be defined later). However, the proposed DA algorithm in his paper did not provide the necessary individual conditional distributions of Y_il given (the observed data and) the updated parameter estimates. Indeed, the EM algorithm developed in his paper is based on the assumption that {Y_il} are fixed given values.
Fortunately, the EM algorithm is invariant under any choice of the value of Yil, so the final result is always correct. We have derived the aforesaid conditional distributions and hence provide a valid DA algorithm. A real data set is used for illustration.
Statistics and Actuarial Science
Master of Philosophy
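For the simple-order constraint mentioned above, the equal-variance special case has a classical closed-form solution via the pool-adjacent-violators algorithm; the sketch below shows that special case, not the thesis's two-step EM / De Pierro scheme:

```python
import numpy as np

# Restricted MLE of group means under the simple order
# mu_1 <= mu_2 <= ... <= mu_k with equal known variances:
# the pool-adjacent-violators algorithm (PAVA) repeatedly pools
# adjacent groups that violate the ordering.

def pava(means, weights):
    """Isotonic (non-decreasing) weighted least-squares fit."""
    vals = list(means)
    wts = list(weights)
    idx = [[i] for i in range(len(means))]   # which inputs each block covers
    i = 0
    while i < len(vals) - 1:
        if vals[i] > vals[i + 1]:            # violator: pool the two blocks
            w = wts[i] + wts[i + 1]
            v = (wts[i] * vals[i] + wts[i + 1] * vals[i + 1]) / w
            vals[i:i+2], wts[i:i+2] = [v], [w]
            idx[i:i+2] = [idx[i] + idx[i + 1]]
            i = max(i - 1, 0)                # pooling may expose a new violator
        else:
            i += 1
    out = np.empty(len(means))
    for v, block in zip(vals, idx):
        out[block] = v
    return out

# unrestricted group means (e.g. mean response per dose) that violate the order
raw = np.array([1.0, 3.0, 2.0, 5.0])
fit = pava(raw, np.ones(4))
assert np.all(np.diff(fit) >= 0)                 # ordering restored
assert np.allclose(fit, [1.0, 2.5, 2.5, 5.0])    # violating pair pooled
```

Unequal variances enter through the weights; the harder simultaneous mean-and-variance problem is what motivates the iterative algorithm the abstract describes.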
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Wir bieten Rabatte auf alle Premium-Pläne für Autoren, deren Werke in thematische Literatursammlungen aufgenommen wurden. Kontaktieren Sie uns, um einen einzigartigen Promo-Code zu erhalten!

Zur Bibliographie