Dissertations / Theses on the topic "Modeling and parametric calibration"

Follow this link to see other types of publications on the topic: Modeling and parametric calibration.

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles.


Listed below are the top 50 dissertations (graduate and doctoral theses) for research on the topic "Modeling and parametric calibration".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever one is available in the metadata.

Browse dissertations from many scientific disciplines and compile an accurate bibliography.

1

Wan, Shuang. "Parametric array calibration". Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/4902.

Abstract:
The subject of this thesis is the development of parametric methods for the calibration of array shape errors. Two physical scenarios are considered: online calibration (self-calibration) using far-field sources and offline calibration using near-field sources. Maximum likelihood (ML) estimators are employed to estimate the errors. However, the well-known computational complexity of objective function optimization for ML estimators demands effective and efficient optimization algorithms. A novel space-alternating generalized expectation-maximization (SAGE)-based algorithm is developed to optimize the objective function of the conditional maximum likelihood (CML) estimator for far-field online calibration. Through data augmentation, joint direction of arrival (DOA) estimation and array calibration can be carried out by a computationally simple search procedure. Numerical experiments show that the proposed method outperforms the existing method for closely located signal sources and is robust to large shape errors. In addition, the accuracy of the proposed procedure attains the Cramér-Rao bound (CRB). A global optimization algorithm, particle swarm optimization (PSO), is employed to optimize the objective function of the unconditional maximum likelihood (UML) estimator for far-field online calibration and near-field offline calibration. A new technique, decaying diagonal loading (DDL), is proposed to enhance the performance of PSO at high signal-to-noise ratio (SNR) by dynamically lowering the effective SNR, based on the counter-intuitive observation that the global optimum of the UML objective function is more prominent at lower SNR. Numerical simulations demonstrate that the UML estimator optimized by PSO with DDL is optimally accurate, robust to large shape errors, and free of the initialization problem. In addition, the DDL technique is applicable to a wide range of array processing problems where the UML estimator is employed and can be coupled with different global optimization algorithms.
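To make the PSO-with-DDL pattern concrete, here is a minimal sketch in Python. It is not the thesis's UML estimator: for brevity the cost is a simple covariance-matching fit for the sensor position errors (with the first sensor fixed as a phase reference), and the scenario values, the 0.95 decay factor, and the swarm constants are all illustrative assumptions. The point to note is the loading term lam, added to the sample covariance and decayed toward zero as the swarm converges.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scenario: a 4-element line array with unknown shape (position) errors.
M, N = 4, 2000                                 # sensors, snapshots
nominal = np.arange(M) * 0.5                   # nominal positions (wavelengths)
true_err = np.array([0.0, 0.07, -0.05, 0.04])  # errors to recover (sensor 0 fixed)
theta0 = np.deg2rad(20.0)                      # known far-field source direction
sigma_n = 0.1                                  # noise std (20 dB SNR)

def steering(pos, theta):
    return np.exp(2j * np.pi * pos * np.sin(theta))

a_true = steering(nominal + true_err, theta0)
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = np.outer(a_true, s) + sigma_n * noise
R_hat = X @ X.conj().T / N

def cost(err, lam):
    """Covariance-matching stand-in for the UML cost, with diagonal loading lam."""
    a = steering(nominal + np.concatenate(([0.0], err)), theta0)
    R_model = np.outer(a, a.conj()) + (sigma_n**2 + lam) * np.eye(M)
    return np.linalg.norm(R_hat + lam * np.eye(M) - R_model) ** 2

# Global-best PSO with a decaying diagonal loading (DDL) schedule.
P, D, iters = 30, M - 1, 150
w, c1, c2 = 0.72, 1.49, 1.49                   # standard PSO weights
pos = rng.uniform(-0.2, 0.2, (P, D))
vel = np.zeros((P, D))
pbest = pos.copy()
lam = 1.0
for t in range(iters):
    lam *= 0.95                                # DDL: loading -> 0 over the run
    pbest_f = np.array([cost(p, lam) for p in pbest])
    f = np.array([cost(p, lam) for p in pos])
    better = f < pbest_f
    pbest[better] = pos[better]
    gbest = pbest[np.minimum(pbest_f, f).argmin()]
    r1, r2 = rng.random((P, D)), rng.random((P, D))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel

print("estimated errors:", np.round(gbest, 3), " true:", true_err[1:])
```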
2

Osborne, Christine. "Non-parametric calibration". Thesis, University of Bath, 1990. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293248.

3

Wenger, Jonathan. "Non-Parametric Calibration for Classification". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-262652.

Abstract:
Many applications for classification methods not only require high accuracy but also reliable estimation of predictive uncertainty. This is of particular importance in fields such as computer vision or robotics, where safety-critical decisions are made based on classification outcomes. However, while many current classification frameworks, in particular deep neural network architectures, provide very good results in terms of accuracy, they tend to incorrectly estimate their predictive uncertainty. In this thesis we focus on probability calibration, the notion that a classifier’s confidence in a prediction matches the empirical accuracy of that prediction. We study calibration from a theoretical perspective and connect it to over- and underconfidence, two concepts first introduced in the context of active learning. The main contribution of this work is a novel algorithm for classifier calibration. We propose a non-parametric calibration method which is, in contrast to existing approaches, based on a latent Gaussian process and specifically designed for multiclass classification. It allows for the incorporation of prior knowledge, can be applied to any classification method that outputs confidence estimates and is not limited to neural networks. We demonstrate the universally strong performance of our method across different classifiers and benchmark data sets from computer vision in comparison to existing classifier calibration techniques. Finally, we empirically evaluate the effects of calibration on querying efficiency in active learning.
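The calibration notion discussed above can be made concrete with a small experiment. The sketch below (Python, scikit-learn) measures the expected calibration error (ECE) of a classifier before and after a standard non-parametric calibrator, isotonic regression, fitted on held-out data. This is deliberately simpler than the thesis's latent Gaussian process method, which it does not reproduce; the dataset, model, and bin count are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split

def ece(conf, correct, bins=10):
    """Expected calibration error: |accuracy - confidence| averaged over bins."""
    edges = np.linspace(0, 1, bins + 1)
    idx = np.digitize(conf, edges[1:-1])
    total = 0.0
    for b in range(bins):
        m = idx == b
        if m.any():
            total += m.mean() * abs(correct[m].mean() - conf[m].mean())
    return total

X, y = make_classification(n_samples=6000, n_informative=10, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Fit a monotone map from raw scores to calibrated probabilities on held-out data.
iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(clf.predict_proba(X_cal)[:, 1], y_cal)

p_raw = clf.predict_proba(X_te)[:, 1]
p_cal = iso.predict(p_raw)
for name, p in [("raw", p_raw), ("isotonic", p_cal)]:
    conf = np.maximum(p, 1 - p)                # confidence of the predicted class
    correct = ((p >= 0.5).astype(int) == y_te).astype(float)
    print(name, "ECE =", round(ece(conf, correct), 4))
```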
4

Tachet des Combes, Rémi. "Non-parametric model calibration in finance". PhD thesis, Ecole Centrale Paris, 2011. http://tel.archives-ouvertes.fr/tel-00658766.

Abstract:
Consistently fitting vanilla option surfaces is an important issue in financial modelling. In three different models (local and stochastic volatility, local correlation, and hybrid local volatility with stochastic rates), this calibration boils down to the resolution of a nonlinear partial integro-differential equation. In a first part, we give existence results for solutions of the calibration equation. They are based upon fixed-point methods in Hölder spaces and short-time a priori estimates. We then apply those existence results to the three models previously mentioned and give the calibration obtained when solving the PDE numerically. Finally, we focus on the algorithm used for the resolution: an ADI predictor/corrector scheme that needs to be modified to take into account the nonlinear term. We also study an instability phenomenon that occurs in certain cases for the local and stochastic volatility model. Using Hadamard's theory, we offer a tentative theoretical explanation of the instability.
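For readers outside mathematical finance, the following standard relations (textbook background, not results from the thesis) indicate where the nonlinearity comes from in the local-and-stochastic volatility case:

```latex
% Dupire's formula: the local volatility consistent with a vanilla surface C(T, K),
% with constant rate r and dividend yield q:
\sigma_{\mathrm{loc}}^{2}(T, K)
  = \frac{\partial_T C + (r - q)\, K\, \partial_K C + q\, C}
         {\tfrac{1}{2}\, K^{2}\, \partial_{KK} C} .
% For a local-and-stochastic volatility model dS_t / S_t = \ell(t, S_t)\sqrt{V_t}\, dW_t,
% matching the same vanilla surface requires (by Gyongy's lemma)
\ell^{2}(t, s)\; \mathbb{E}\!\left[ V_t \mid S_t = s \right] = \sigma_{\mathrm{loc}}^{2}(t, s),
% and substituting this conditional expectation into the forward (Fokker--Planck)
% equation for the joint density of (S_t, V_t) is what produces the nonlinear
% equation whose solvability is studied in the thesis.
```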
5

Xiang, Yi. "Implied volatility smirk and non-parametric calibration". View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?MATH%202004%20XIANG.

Abstract:
Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 107-114). Also available in electronic version. Access restricted to campus users.
6

Coelho, Luiz Cristovao Gomes. "Shell modeling with parametric intersection". Pontifícia Universidade Católica do Rio de Janeiro, 1998. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=2780@1.

Abstract:
We present a methodology for modeling finite-element meshes defined on parametric surface patches. The idea is to build curves and generate meshes over the parametric patches built with these curves, which also connect adjacent meshes. The final model is a representation of all meshes combined into a single data structure. The basic tools to generate such meshes are the user interface to model space curves and the geometric algorithms to construct the elementary domain mappings. The main problem in composite modeling is how to handle mesh surfaces that intersect each other. We present an algorithm that models the intersection curves precisely and adjusts both meshes to the newly formed borders. The algorithm is part of an interactive shell modeling program, which has been used in the design of large offshore oil structures. We avoid unacceptable interaction delays by using a variant of the DCEL data structure that stores topological entities in spatial indexing trees instead of linked lists. These trees speed up the intersection computations required to determine points of the trimming curves, and also allow mesh reconstruction using only local queries.
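The role of the spatial indexing described above can be illustrated with a small sketch. The thesis stores topological entities in spatial sorting trees inside a DCEL variant; the stand-in below uses a uniform grid hash over triangle bounding boxes, which demonstrates the same principle: only triangles whose boxes share a cell are handed to the expensive exact triangle-triangle intersection test. All geometry here is invented for the example.

```python
from collections import defaultdict
from itertools import product
import numpy as np

def aabb(tri):
    return tri.min(axis=0), tri.max(axis=0)

def cells(lo, hi, h):
    """Grid cells overlapped by an axis-aligned box, with cell size h."""
    i0, i1 = np.floor(lo / h).astype(int), np.floor(hi / h).astype(int)
    return product(*(range(a, b + 1) for a, b in zip(i0, i1)))

def candidate_pairs(mesh_a, mesh_b, h=1.0):
    """Triangle pairs whose boxes share a grid cell; only these pairs need
    the exact (and expensive) triangle-triangle intersection test."""
    grid = defaultdict(list)
    for i, tri in enumerate(mesh_a):
        lo, hi = aabb(tri)
        for c in cells(lo, hi, h):
            grid[c].append(i)
    pairs = set()
    for j, tri in enumerate(mesh_b):
        lo, hi = aabb(tri)
        for c in cells(lo, hi, h):
            for i in grid.get(c, ()):
                pairs.add((i, j))
    return pairs

# Two tiny triangle soups (n_tri x 3 vertices x 3 coordinates):
a = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
              [[5, 5, 0], [6, 5, 0], [5, 6, 0]]], float)
b = np.array([[[0.2, 0.2, -0.5], [0.8, 0.2, 0.5], [0.2, 0.8, 0.5]]], float)
print(candidate_pairs(a, b))   # -> {(0, 0)}: only nearby triangles are tested
```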
7

Hoare, Armando. "Parametric, non-parametric and statistical modeling of stony coral reef data". [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002470.

8

Basso, Filippo. "A non-parametric Calibration Algorithm for Depth Sensors Exploiting RGB Cameras". Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424206.

Abstract:
Range sensors are common devices on modern robotic platforms. They endow the robot with information about the distance and shape of the objects in the sensors' field of view. In particular, the advent in the last few years of consumer RGB-D sensors such as the Microsoft Kinect has greatly fostered the development of depth-based algorithms for robotics. In fact, such sensors can provide a large quantity of data at a relatively low price. In this thesis three different calibration problems for depth sensors are tackled. The first original contribution to the state of the art is an algorithm to recover the axis of rotation of a 2D laser range finder (LRF) mounted on a rotating support. The key difference with other approaches is the use of kinematic point-plane constraints to estimate the pose of the LRF with respect to a static camera, and of a screw decomposition to recover the axis of rotation. The correct reconstruction of a small indoor environment after calibration validates the proposed algorithm. The second and most important original contribution of the thesis is a fully automatic two-step calibration algorithm for structured-light depth sensors (e.g. the Kinect). The key novelty of this work is the separation of the depth error into two components, corrected with functions estimated on a per-pixel basis. This separation, validated by experimental observations, makes it possible to dramatically reduce the number of parameters in the final non-linear minimization and, consequently, the time for the solution to converge to the global minimum. The depth images of a test set corrected using the obtained calibration parameters are analyzed and compared to the ground truth; the comparison shows that they differ from the real ones only by unpredictable noise. A qualitative analysis of the fusion between depth and RGB data further confirms the effectiveness of the approach. Moreover, a ROS package for both calibrating and correcting the Kinect data has been released as open source. The third contribution reported in the thesis is a new distributed calibration algorithm for networks composed of cameras and already-calibrated depth sensors. A ROS package implementing the proposed approach has been developed and is available for free as part of a big open source project for people tracking: OpenPTrack. The developed package is able to calibrate networks composed of a dozen sensors in real time (i.e., batch processing is not needed), exploiting plane-to-plane constraints and non-linear least-squares optimization.
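A minimal sketch of the per-pixel fitting idea follows. It is not the thesis's two-component error model: here every pixel simply receives its own polynomial map from raw to true depth, fitted from depth images of targets at known distances, and the synthetic distortion is hypothetical. It does illustrate why pixel-wise estimation keeps the overall problem tractable: each pixel's fit is independent and tiny.

```python
import numpy as np

H, W = 4, 6        # tiny image for illustration
deg = 2            # per-pixel polynomial degree in depth

def fit_per_pixel(raw_stack, true_stack):
    """raw_stack, true_stack: (n_planes, H, W) depth images of known targets.
    Fits an independent polynomial z_true(z_raw) for every pixel."""
    coeffs = np.empty((H, W, deg + 1))
    for v in range(H):
        for u in range(W):
            coeffs[v, u] = np.polyfit(raw_stack[:, v, u], true_stack[:, v, u], deg)
    return coeffs

def correct(depth, coeffs):
    out = np.empty_like(depth)
    for v in range(H):
        for u in range(W):
            out[v, u] = np.polyval(coeffs[v, u], depth[v, u])
    return out

# Synthetic data: each pixel applies its own mild distortion to the true depths.
rng = np.random.default_rng(0)
true_planes = np.linspace(1.0, 4.0, 8)[:, None, None] * np.ones((8, H, W))
gain = 1 + 0.02 * rng.standard_normal((1, H, W))
bias = 0.03 * rng.standard_normal((1, H, W))
raw = true_planes * gain + bias

coeffs = fit_per_pixel(raw, true_planes)
print(np.abs(correct(raw[4], coeffs) - true_planes[4]).max())  # ~0 after calibration
```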
9

Holden, Christian. "Modeling and Control of Parametric Roll Resonance". Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-12736.

Abstract:
Parametric roll resonance is a dangerous resonance phenomenon affecting several types of ships, such as destroyers, RO-RO passenger ships, cruise ships, fishing vessels and especially container ships. In the worst case, parametric roll is capable of causing roll angles of at least 50 degrees, and damage in the tens of millions of US dollars. Empirical and mathematical investigations have concluded that parametric roll occurs due to periodic changes in the waterplane area of the ship. If the vessel is sailing in longitudinal seas, with waves of approximately the same length as the ship and an encounter frequency of about twice the natural roll frequency, then parametric resonance can occur. While there is a significant amount of literature on the hydrodynamics of parametric roll, there is less on controlling and stopping the phenomenon through active control. The main goal of this thesis has been to develop controllers capable of stopping parametric roll, and two main results on control are presented. To derive, analyze and simulate the controllers, it proved necessary to develop novel models; the thesis thus also contains four major contributions on modeling. The main results are, in order of appearance in the thesis: a six-DOF computer model for parametric roll; a one-DOF model of parametric roll for non-constant velocity; a three-DOF model of parametric roll; a seven-DOF model for ships with u-tanks of arbitrary shape; a frequency detuning controller; and an active u-tank-based controller for parametric roll.
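As hedged textbook background to the mechanism quoted above (not a model taken from the thesis): with a periodically varying waterplane area, the linearized roll equation becomes a damped Mathieu-type equation,

```latex
\ddot{\phi} + 2\zeta\omega_{\phi}\,\dot{\phi}
  + \omega_{\phi}^{2}\bigl(1 + h\cos(\omega_{e} t)\bigr)\,\phi = 0 ,
% which admits unstable (resonant) solutions in particular when the encounter
% frequency is close to twice the natural roll frequency,
% \omega_e \approx 2\,\omega_{\phi}, matching the condition stated in the abstract.
```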
10

Qvarngård, Daniel. "Modeling Optical Parametric Generation in Inhomogeneous Media". Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74256.

11

Brakna, Mohammed. "Sensor and actuator optimal location for dynamic controller design. Application to active vibration reduction in a galvanizing process". Electronic Thesis or Diss., Université de Lorraine, 2023. https://docnum.univ-lorraine.fr/ulprive/DDOC_T_2023_0152_BRAKNA.pdf.

Abstract:
The aims of the present PhD thesis are to determine a model that is both sufficiently accurate and numerically exploitable, and to propose optimal placements of sensors and actuators for active vibration control in a galvanizing line. A continuous hot-dip galvanizing process consists in covering a metal (here: a steel strip) with a protective layer of zinc that prevents corrosion due to the air. The thickness of this layer must be constant to guarantee the mechanical properties and surface condition of the product. In a galvanizing line, the moving steel strip is heated and then immersed in a liquid zinc bath before being wiped by nozzles projecting air. The air flow, as well as the rotation of the driving rolls, among other things, creates vibrations affecting the wiping process and thus the regularity of the zinc deposit. Active control is therefore necessary, for example by means of electromagnets placed on either side of the moving steel strip. As a first step, a behavioral model of the steel strip taking into account the presence and propagation of vibrations was obtained by spatial discretization of a partial differential equation. This state-space model was validated in simulation and experimentally on a pilot galvanizing line of ArcelorMittal Research in Maizières-lès-Metz. Once this model is established, the objective of the study is to find the optimal placement of sensors, to measure the vibrations of the strip as efficiently as possible, but also of actuators, to minimize the amplitude of these vibrations through an appropriate control law. These optimal placement problems are at the heart of active vibration control and are found in many fields of application. An optimal placement method based on Gramian maximization has been proposed in order to reduce the impact of disturbances on the system. Different control strategies have been considered, such as (i) observed state feedback based on a Kalman filter and an LQ regulator; and (ii) extended observed state feedback to improve the results by also taking into account the disturbance estimate provided by a PI (proportional-integral) observer. Simulation and experimental results illustrate the thesis contributions.
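A minimal sketch of Gramian-based placement follows (Python/SciPy). The system matrix and the candidate actuator set are invented, and the trace of the controllability Gramian is used as the ranking score; the thesis's model and exact criterion are more elaborate. The pattern, one Lyapunov solve per candidate location followed by a ranking, is the part being illustrated.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 6
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
A = A - (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # make A Hurwitz

# Hypothetical candidates: an actuator acting on one state at a time.
candidates = [np.eye(n)[:, [k]] for k in range(n)]

def ctrb_gramian(B):
    # Solves A W + W A^T = -B B^T for the controllability Gramian W.
    return solve_continuous_lyapunov(A, -B @ B.T)

scores = [np.trace(ctrb_gramian(B)) for B in candidates]
print("best actuator location:", int(np.argmax(scores)))
```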
12

Han, Dong-Hoon. "Built-In Self Test and Calibration of RF Systems for Parametric Failures". Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14507.

Abstract:
This thesis proposes a multifaceted production test and post-silicon yield enhancement framework for RF systems. The three main components of the proposed framework are the design, production test, and post-test phases of the overall integrated circuit (IC) development cycle. First, a circuit-sizing method is presented for incorporating test considerations into algorithms for automatic circuit synthesis/device resizing. The sizing problem is solved by using a cost metric that can be incorporated at minimal computational cost into existing optimization tools for manufacturing yield enhancement. Along with the circuit-sizing method introduced in the design phase, a low-cost test and diagnosis method is presented for multi-parametric faults in wireless systems. This test and diagnosis method allows accurate prediction of the end-to-end specifications as well as of the specifications of all the embedded modules. The procedure is based on the application of an optimized test stimulus and the use of a simple diode-based envelope detector to extract the transient test response envelope at RF signal nodes. This eliminates the need to make RF measurements using expensive standard testers. To further improve the parametric yield of RF circuits, a performance drift-aware adaptation scheme is proposed that automatically compensates for the loss of circuit performance in the presence of process variations. This work includes a diagnosis algorithm to identify faulty circuits within the system and a compensation process that adjusts tunable components to reduce the effects of performance variations. As a result, all the mentioned components contribute to producing a low-cost production test and to enhancing post-silicon parametric yield.
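The envelope-based response extraction mentioned above can be mimicked in simulation. The sketch below recovers the envelope of an RF test response from the magnitude of the analytic signal (Hilbert transform); on silicon, the thesis obtains the same quantity with a simple diode-based detector. All signal parameters are arbitrary.

```python
import numpy as np
from scipy.signal import hilbert

fs, f_rf = 1e6, 50e3                     # sample rate, carrier frequency [Hz]
t = np.arange(0, 5e-3, 1 / fs)
baseband = 0.5 * (1 + 0.8 * np.sin(2 * np.pi * 300 * t))  # hypothetical stimulus envelope
rf = baseband * np.cos(2 * np.pi * f_rf * t)

envelope = np.abs(hilbert(rf))           # magnitude of the analytic signal
err = np.abs(envelope - baseband)[100:-100]   # ignore FFT edge effects
print(err.max())                         # small: envelope recovered
```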
13

De, Sanctis Giovanni. "In-system parametric calibration for two-microphone wave separation in acoustic waveguides". Thesis, Queen's University Belfast, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.579708.

Abstract:
The separation of waves in acoustic waveguides can be used to infer information from various acoustic systems. When applied to musical wind instruments, it enables novel techniques for in-depth investigation of the physics of the instrument, including the extraction of playing information, bore reconstruction and impedance measurement. Although a few approaches to this problem can be found in the literature, little attention has been given to the calibration of the system and, above all, to the assessment of its performance. The proposed methodology is based on a frequency-domain optimisation of a small set of parameters that best describe the system. The nonlinear optimisation problem that arises is formulated so as to exploit the considerable amount of a priori knowledge given by the theory of propagation in ducts. The resulting cost surface is typically characterised by several local minima; however, the initial guess given by the nominal values of the parameters ensures convergence to the desired solution. The main feature of the optimisation approach is that it can be applied in-system and does not require any special calibration apparatus. As a consequence, it is also possible to track slow variations in the propagation due to changes in temperature and humidity of the medium during normal operation. A procedure for the assessment of a wave separation algorithm is also proposed; this shows that, under the same conditions, the optimisation approach improves the performance of the system. Finally, the proposed frequency-domain optimisation is also suitable for other applications such as in-air direction-of-arrival estimation. Preliminary results on these applications are also presented.
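For context, once the propagation parameters are known, the separation step itself reduces to solving a 2x2 linear system per frequency bin, as in the sketch below (ideal lossless plane-wave model, made-up values). The thesis's contribution, estimating those parameters in-system by optimisation, is not reproduced here.

```python
import numpy as np

# Two-microphone wave separation in a duct: P(x, w) = P+ e^{-jkx} + P- e^{+jkx}.
c, x1, x2 = 343.0, 0.00, 0.05          # sound speed [m/s], mic positions [m]
freqs = np.linspace(100, 3000, 6)       # below the k(x2-x1) = pi singularity (~3430 Hz)

rng = np.random.default_rng(0)
P_plus = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
P_minus = 0.3 * (rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs)))

k = 2 * np.pi * freqs / c
P1 = P_plus * np.exp(-1j * k * x1) + P_minus * np.exp(1j * k * x1)
P2 = P_plus * np.exp(-1j * k * x2) + P_minus * np.exp(1j * k * x2)

est = np.empty((len(freqs), 2), complex)
for i in range(len(freqs)):
    M = np.array([[np.exp(-1j * k[i] * x1), np.exp(1j * k[i] * x1)],
                  [np.exp(-1j * k[i] * x2), np.exp(1j * k[i] * x2)]])
    est[i] = np.linalg.solve(M, [P1[i], P2[i]])

print(np.allclose(est[:, 0], P_plus), np.allclose(est[:, 1], P_minus))  # True True
```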
14

Antonatos, Alexandros. "Parametric FE-modeling of High-speed Craft Structures". Thesis, KTH, Marina system, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-119698.

Abstract:
The primary aim of the thesis was to investigate aluminum as a building material for high-speed craft, study the hull structure design processes of aluminum high-speed craft and develop a parametric model to reduce the modeling time during finite element analysis. An additional aim of the thesis was to use the parametric model to study the degree of validity of the idealizations and assumptions of the semi-empirical design methods. For the aluminum survey, a large number of scientific papers and books related to the application of aluminum in the shipbuilding industry were reviewed, while for the investigation of hull structure design, several designs of similar craft as well as all the classification rules for high-speed craft were examined. The parametric model was developed in the Abaqus finite element analysis software with the help of the Python programming language. The study of the idealizations and assumptions of the semi-empirical design methods was performed on a model derived from the parametric model, with scantlings determined by the high-speed craft classification rules of ABS. The review on aluminum showed that only specific alloys can be applied in marine applications. It also showed that the effect of reduced mechanical properties due to welding could be decreased by introducing new welding and manufacturing techniques. The study regarding the hull structure design processes indicated that high-speed craft are still designed according to semi-empirical classification rules, but it also showed that there is a tendency of transitioning to direct calculation methods. The developed parametric model does decrease the modeling time, since it is capable of modeling numerous structural arrangements. The analysis related to the idealizations and assumptions of the semi-empirical design methods revealed that the structural hierarchy idealization and the method of defining boundary conditions by handbook-type formulas are applicable for the particular structure, while the interaction effect among the structural members can only be studied by detailed modeling techniques.
15

Miller, Robert. "Approaches to the parametric modeling of hormone concentrations". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-118882.

Abstract:
Transdisciplinary research in general, and stress research in particular, requires an efficient integration of the methodological knowledge of all involved academic disciplines in order to obtain conclusions of incremental value about the investigated constructs. From a psychologist's point of view, biochemistry and quantitative neuroendocrinology are of particular importance for the investigation of endocrine stress systems (i.e., the HPA axis and the SNS). Despite their fundamental role in the adequate assessment of endocrine activity, both topics are rarely covered by conventional psychological curricula. Consequently, the transfer of the respective knowledge has to rely on other, less efficient channels of scientific exchange. The present thesis sets out to contribute to this exchange by highlighting methodological issues that are repeatedly encountered in research on stress-related endocrine activity, and by providing solutions to these issues. As outlined within this thesis, modern stress research tends to fall short of an adequate quantification of the kinetics and dynamics of bioactive cortisol. Cortisol has gained considerable popularity during the last decades, as its bioactive fraction is supposed to be reliably determinable from saliva and is therefore the most conveniently obtainable marker of HPA activity. However, a substantial fraction of salivary cortisol is metabolized to its inactivated form cortisone by the enzyme 11β-HSD2 in the parotid glands, which is likely to restrict its utility. Although the commonly used antibody-based quantification methods (i.e., immunoassays) might "involuntarily" qualify this issue to some degree (due to their inherent cross-reactivity with matrix components that are structurally related to cortisol, e.g., cortisone), they also cause differential within-immunoassay measurement bias: salivary cortisone has (as compared to salivary cortisol) a substantially longer half-life, which leads to an overestimation of cortisol levels the more time has passed since the onset of the prior HPA secretory episode, and thus tends to distort any inference on the kinetics of bioactive cortisol. Furthermore, absolute cortisol levels also depend on the between-immunoassay variation of antibodies. Consequently, raw signal comparisons between laboratories and studies, which are preferable to effect comparisons, can hardly be performed. This finding also highlights the need for the long-sought standardization of biochemical measurement procedures. The presumably only way to circumvent both issues is to rely on the quantification of ultrafiltrated blood cortisol by mass-spectrometric methods. Partly related to these biochemical considerations, a second topic arises concerning the operationalization of the construct itself: in contrast to simple outcome measures like averaged reaction times, stress researchers can only indirectly infer the sub-processes involved in HPA activity from longitudinally sampled hormone concentrations. HPA activity can be quantified either by (a) discrete-time or by (b) continuous-time models.
Although the former is the more popular and more convenient approach (as indicated by the overly frequent encounter of ANOVAs and trapezoidal AUC calculations in the field of psychobiological stress research), most discrete-time models are rather data-driven, descriptive approaches to quantifying HPA activity that assume the existence of some endocrine resting state (i.e., a baseline) at the first sampling point and disregard any mechanistic hormonal change occurring between the following sampling points. Even if one ignores the fact that such properties are unlikely to pertain to endocrine systems in general, many generic discrete-time models fail to account for the specific structure of endocrine data that results from biochemical hormone measurement as well as from the dynamics of the investigated system. More precisely, cortisol time series violate homoscedasticity, residual normality, and sphericity, which need to be present in order to enable (mixed-effects) GLM-based analyses. Neglecting these prerequisites may lead to inference bias unless counter-measures are taken. Such counter-measures usually involve altering the scale of hormone concentrations via transformation techniques. As such, a fourth-root transformation of salivary cortisol (as determined by a widely used, commercially available immunoassay) is shown to yield the optimal tradeoff for generating homoscedasticity and residual normality simultaneously. Although the violation of sphericity could be partly accounted for by several correction techniques, many modern software packages for structural equation modeling (e.g., Mplus, OpenMx, Lavaan) also offer the opportunity to easily specify more appropriate moment structures via path notation, and therefore to relax the modeling assumptions of GLM approaches to the analysis of longitudinal hormone data. Proceeding from this reasoning, this thesis illustrates how one can additionally incorporate hypotheses about HPA functioning, and thus model all relevant sub-processes that give rise to HPA kinetics and dynamics. The ALT modeling framework advocated within this thesis is shown to serve this purpose well: ALT modeling can recover HPA activity parameters that are directly interpretable within a physiological framework; that is, distinct growth factors representing the amount of secreted cortisol and the velocity of cortisol elimination can serve to interpret HPA reactivity and regulation less ambiguously than GLM effect measures. Illustrating these advantages on a content level, cortisol elimination after stress induction was found to be elevated as compared to its known pharmacokinetics. While the mechanism behind this effect requires further investigation, its detection would obviously have been more difficult with conventional GLM methods. A further extension of the ALT framework allowed a methodological question to be addressed which had previously been dealt with by a mere rule of thumb: what is the optimal threshold criterion that enables a convenient but comparably accurate classification of individuals whose HPA axis is or is not activated upon encountering a stressful situation?
While a rather arbitrarily chosen baseline-to-peak threshold of 2.5 nmol/L was commonly used to identify episodes of secretory HPA activity in time series of salivary cortisol concentrations, a reanalysis of a TSST meta-dataset by means of ALT mixture modeling suggested that this 2.5 nmol/L criterion is overly conservative with modern biochemical measurement tools and should be lowered according to the precision of the utilized assay (i.e., to 1.5 nmol/L). In sum, parametric ALT modeling of endocrine activity provides a convenient alternative to the commonly utilized GLM-based approaches, one that enables inference on and quantification of distinct HPA components on a theoretical foundation, and thus bridges the gap between discrete- and continuous-time modeling frameworks. The implementations of the outlined modeling approaches as statistical syntaxes, together with practical guidelines derived from the comparison of cortisol assays mentioned above, are provided in the appendix of the present thesis, which will hopefully help stress researchers to directly quantify the construct they actually intend to assess.
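The transformation argument can be illustrated in a few lines. The data here are synthetic (a lognormal stand-in; real salivary cortisol is only approximately lognormal), so this shows the direction of the effect rather than reproducing the thesis's assay comparison:

```python
import numpy as np
from scipy import stats

# Skewed cortisol-like data moves closer to normality under a fourth-root transform.
rng = np.random.default_rng(0)
cortisol = rng.lognormal(mean=2.0, sigma=0.6, size=300)   # nmol/L, hypothetical

for name, x in [("raw", cortisol), ("fourth root", cortisol ** 0.25)]:
    stat, p = stats.shapiro(x)                 # higher W = closer to normal
    print(f"{name:12s} Shapiro-Wilk W = {stat:.3f}, p = {p:.2g}")
```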
16

Carrière, Rob. "High resolution parametric modeling of canonical radar scatterers". The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487841548270186.

17

Hall, Jeremy T. "Forecasting Marine Corps enlisted attrition through parametric modeling". Thesis, Monterey, Calif. : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/March/09Mar%5FHall_Jeremy.pdf.

Abstract:
Thesis (M.S.)--Naval Postgraduate School, March 2009.
Thesis Advisor(s): Buttrey, Samuel E. "March 2009." Description based on title screen as viewed on April 23, 2009. Author(s) subject terms: Forecasting, attrition, Marine Corps NEAS losses, Gompertz Model, survival analysis. Includes bibliographical references (p. 67). Also available in print.
18

Malladi, Sailaja. "Parametric modeling and analysis of structural bonded joints". Morgantown, W. Va. : [West Virginia University Libraries], 2004. https://etd.wvu.edu/etd/controller.jsp?moduleName=documentdata&jsp%5FetdId=80.

Abstract:
Thesis (M.S.)--West Virginia University, 2004.
Title from document title page. Document formatted into pages; contains x, 56 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 52-53).
19

Medawar, Samer. "Pipeline Analog-Digital Converters Dynamic Error Modeling for Calibration: Integral Nonlinearity Modeling, Pipeline ADC Calibration, Wireless Channel K-Factor Estimation". Doctoral thesis, KTH, Signalbehandling, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-95507.

Abstract:
This thesis deals with the characterization, modeling and calibration of pipeline analog-to-digital converters (ADCs). The integral nonlinearity (INL) is characterized and modeled, and the model is used to design a post-correction block that compensates the imperfections of the ADC. The INL model is divided into a dynamic term, the low code frequency (LCF) component, which depends on the output code k and the frequency under test m, and a static term, the high code frequency (HCF) component, which depends solely on the output code k. The HCF is related to the pipeline ADC circuitry. A set of adjacent piecewise-linear segments is used to model the HCF. The LCF is the dynamic term depending on the input signal characteristics, and is modeled using a polynomial with frequency-dependent coefficients. Two dynamic calibration methodologies are developed to compensate the imperfections of the pipeline ADC. In the first approach, the INL model at hand is transformed into a post-correction scheme. Regarding the HCF model, a set of gains and offsets is used to reconstruct the HCF segment structure. The LCF polynomial frequency-dependent coefficients are used to design a bank of FIR filters which reconstructs the LCF model. A calibration block made by the combination of static gains/offsets and a bank of FIR filters is built to create the correction term that calibrates the ADC. In the second approach, the calibration (and modeling) process is extended to the upper Nyquist bands of the ADC. The HCF is used directly in calibration as a look-up table (LUT). The LCF part is still represented by a frequency-dependent polynomial whose coefficients are used to develop a filter bank, implemented in the frequency domain with an overlap-and-add structure. In brief, the calibration process is done by the combination of a static LUT and a bank of frequency-domain filters. The maximum likelihood (ML) method is used to estimate the K-factor of a wireless Ricean channel. The K-factor is one of the main characteristics of a telecommunication channel. However, a closed-form ML estimator of the K-factor is infeasible due to the complexity of the Ricean pdf. In order to overcome this limitation, an approximation (for high K-factor values) is introduced into the Ricean pdf. A closed-form approximate ML (AML) estimator for the Ricean K-factor is computed. A bias study is performed on the AML, and the derived bias value is used to improve the AML estimation, leading to a closed-form bias-compensated estimator (BCE). The BCE performance (in terms of variance, bias and mean square error (MSE)) is simulated and compared to the best known closed-form moment-based estimator found in the literature. The BCE turns out to have superior performance for low numbers of samples and/or high K-factor values. Finally, the BCE is applied to wireless channel measurements from a real site in an urban macro-cell area, using a 4-antenna transmit/receive MIMO system.
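As a reference point for the K-factor part of the abstract, the sketch below implements the standard envelope-moment estimator, K = sqrt(2*m2^2 - m4) / (m2 - sqrt(2*m2^2 - m4)) with m2 = E[R^2] and m4 = E[R^4]. This is the kind of closed-form moment-based estimator the thesis benchmarks its bias-compensated AML estimator against; the AML/BCE estimators themselves are not reproduced here.

```python
import numpy as np

def k_moment(r):
    """Ricean K estimate from envelope samples r via second and fourth moments."""
    m2, m4 = np.mean(r ** 2), np.mean(r ** 4)
    d = np.sqrt(max(2 * m2 ** 2 - m4, 0.0))
    return d / (m2 - d) if m2 > d else np.inf

# Simulate a Ricean channel with known K and unit total power, then estimate K.
rng = np.random.default_rng(0)
K, n = 6.0, 200_000
nu = np.sqrt(K / (K + 1))                  # line-of-sight amplitude
sigma = np.sqrt(1 / (2 * (K + 1)))         # per-component scatter std
r = np.abs(nu + sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
print(k_moment(r))   # close to 6
```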
20

Singh, Meghendra. "Human Behavior Modeling and Calibration in Epidemic Simulations". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/87050.

Abstract:
Human behavior plays an important role in infectious disease epidemics. The choice of preventive actions taken by individuals can completely change the epidemic outcome. Computational epidemiologists usually employ large-scale agent-based simulations of human populations to study disease outbreaks and assess intervention strategies. Such simulations rarely take into account the decision-making process of human beings when it comes to preventive behaviors. Absence of realistic agent behavior can undermine the reliability of insights generated by such simulations and might make them ill-suited for informing public health policies. In this thesis, we address this problem by developing a methodology to create and calibrate an agent decision-making model for a large multi-agent simulation, in a data driven way. Our method optimizes a cost vector associated with the various behaviors to match the behavior distributions observed in a detailed survey of human behaviors during influenza outbreaks. Our approach is a data-driven way of incorporating decision making for agents in large-scale epidemic simulations.
In the real world, individuals can decide to adopt certain behaviors that reduce their chances of contracting a disease. For example, using hand sanitizers can reduce an individual's chances of getting infected by influenza. These behavioral decisions, when taken by many individuals in the population, can completely change the course of the disease. Such behavioral decision-making is generally not considered during in-silico simulations of infectious diseases. In this thesis, we address this problem by developing a methodology to create and calibrate, in a data-driven way, a decision-making model that can be used by agents (i.e., synthetic representations of humans in simulations). Our method also finds a cost associated with such behaviors and matches the distribution of behaviors produced in simulation with that observed in a survey. Our approach is a data-driven way of incorporating decision making for agents in large-scale epidemic simulations.
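A toy version of the calibration loop described above: choose a cost vector so that a softmax choice model reproduces a surveyed behavior distribution. The behavior categories, the survey shares, and the KL objective are all illustrative assumptions; in the thesis, the distribution to match comes out of a large agent-based simulation rather than a closed-form choice model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical survey shares, e.g. none / hand washing / sanitizer / stay home.
observed = np.array([0.55, 0.25, 0.15, 0.05])

def model_dist(costs):
    w = np.exp(-costs)          # cheaper behaviors are chosen more often
    return w / w.sum()

def kl_to_observed(costs):
    p = model_dist(costs)
    return float(np.sum(observed * np.log(observed / p)))

res = minimize(kl_to_observed, x0=np.zeros(4), method="Nelder-Mead")
print(np.round(model_dist(res.x), 3))   # matches `observed`; costs are identified
                                        # only up to an additive constant
```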
21

Zhang, Chenxi. "Depth-Assisted Semantic Segmentation, Image Enhancement and Parametric Modeling". UKnowledge, 2014. http://uknowledge.uky.edu/cs_etds/27.

Abstract:
This dissertation addresses the problem of employing 3D depth information to solve a number of traditionally challenging computer vision/graphics problems. Humans can perceive depth in the 3D world, which enables them to reconstruct layouts, recognize objects and understand the geometric space and semantic meanings of the visual world. It is therefore significant to explore how 3D depth information can be utilized by computer vision systems to mimic such abilities. This dissertation aims at employing 3D depth information to solve vision/graphics problems in the following aspects: scene understanding, image enhancement, and 3D reconstruction and modeling. In addressing the scene understanding problem, we present a framework for semantic segmentation and object recognition on urban video sequences using only dense depth maps recovered from the video. Five view-independent 3D features that vary with object class are extracted from dense depth maps and used for segmenting and recognizing different object classes in street scene images. We demonstrate a scene parsing algorithm that uses only dense 3D depth information and outperforms approaches using sparse 3D or 2D appearance features. In addressing the image enhancement problem, we present a framework to overcome the imperfections of personal photographs of tourist sites using the rich information provided by large-scale internet photo collections (IPCs). By augmenting personal 2D images with 3D information reconstructed from IPCs, we address a number of traditionally challenging image enhancement techniques and achieve high-quality results using simple and robust algorithms. In addressing the 3D reconstruction and modeling problem, we focus on parametric modeling of flower petals, the most distinctive part of a plant. The complex structure, severe occlusions and wide variations make the reconstruction of their 3D models a challenging task. We overcome these challenges by combining data-driven modeling techniques with domain knowledge from botany. Taking a 3D point cloud of an input flower scanned from a single view, each segmented petal is fitted with a scale-invariant morphable petal shape model, which is constructed from individually scanned 3D exemplar petals. Novel constraints based on botany studies are incorporated into the fitting process to realistically reconstruct occluded regions and maintain correct 3D spatial relations. The main contribution of the dissertation is the intelligent use of 3D depth information to solve traditionally challenging vision/graphics problems. By developing advanced algorithms that run either automatically or with minimal user interaction, this dissertation demonstrates that the 3D depth computed behind multiple images contains rich information about the visual world and can therefore be intelligently utilized to recognize/understand the semantic meanings of scenes, efficiently enhance and augment single 2D images, and reconstruct high-quality 3D models.
22

Bednar, Chad Michael. "Parametric thermal modeling of switched reluctance and induction machines". Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53394.

Abstract:
This research focuses on the creation of a thermal estimator to be used in an integrated electromagnetic, thermo-mechanical design tool for the rapid optimal initial sizing of switched reluctance and induction machines. The switched reluctance model includes heat generation in the rotor due to core losses, heat transfer across the air gap through convection, and a heat transfer path through the shaft to ambient. Empirical Nusselt correlations for laminar shear flow, laminar flow with vortices and turbulent flow are used to estimate the convective heat transfer coefficient in the air gap. The induction model adds ohmic heat generation within the rotor bars of the machine as an additional rotor heat source. A parametric, self-segmenting mesh generation tool was created to capture the complex rotor geometries found within switched reluctance or induction machines. Modeling the rotor slot geometries in the R-θ polar coordinate system proved to be a key challenge in the work. Segmentation algorithms were established to model standard slot geometries, including radial, rectangular (parallel-sided), circular and kite-shaped features, in the polar coordinate system used in the R-θ solution plane. The center-node mesh generation tool was able to optimize the size and number of nodes to accurately capture the cross-sectional area of each feature in the solution plane. The algorithms pursue a tradeoff between computational accuracy and computational speed by adopting a hybrid approach to estimate three-dimensional effects. A thermal circuits approach links the R-θ finite difference solution to the three-dimensional boundary conditions. The thermal estimator was able to accurately capture the temperature distribution in switched reluctance and induction machines, as verified with experimental results.
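The air-gap convection step can be sketched as follows. The flow regime is set by the Taylor number; in the laminar shear (Couette) regime, heat crosses the gap essentially by conduction, which corresponds to Nu = 2 in a hydraulic-diameter (2 x gap) formulation. Correlation constants for the vortex and turbulent regimes vary between sources, so a placeholder is left where the thesis's correlations would go; all numbers are illustrative.

```python
import numpy as np

k_air, nu_air = 0.026, 1.6e-5       # W/(m K), m^2/s for air at ~40 C (approximate)
r_rotor, gap = 0.05, 1.0e-3         # rotor radius and radial air gap [m]
omega = 2 * np.pi * 3000 / 60       # rotor speed [rad/s]

Ta = (omega * r_rotor * gap / nu_air) ** 2 * (gap / r_rotor)   # Taylor number
if Ta < 1700:                        # classical Taylor-vortex stability threshold
    Nu = 2.0                         # conduction-limited laminar shear flow
else:
    Nu = 2.0  # TODO: substitute the vortex/turbulent Nusselt correlation used
              # in the thesis (constants differ between published sources)

h = Nu * k_air / (2 * gap)           # hydraulic-diameter formulation [W/(m^2 K)]
print(f"Ta = {Ta:.3g}, h = {h:.1f} W/m^2K")
```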
23

Molinares, Carlos A. "Parametric and Bayesian Modeling of Reliability and Survival Analysis". Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3252.

Abstract:
The objective of this study is to compare Bayesian and parametric approaches to determine the best for estimating reliability in complex systems. Determining reliability is particularly important in business and medical contexts. As expected, the Bayesian method showed the best results in assessing the reliability of systems. In the first study, the Bayesian reliability function under the Higgins-Tsokos loss function using Jeffreys' prior performs similarly to the Bayesian reliability function based on the squared-error loss. In addition, the Higgins-Tsokos loss function was found to be as robust as the squared-error loss function and slightly more efficient. In the second study, we illustrated that, through the power law intensity function, Bayesian analysis is applicable to the power law process. The power law intensity function is the key entity of the power law process (also called the Weibull process or the non-homogeneous Poisson process); it gives the rate of change of a system's reliability as a function of time. First, using real data, we demonstrated that one of our two parameters behaves as a random variable. With the generated estimates, we obtained a probability density function that characterizes the behavior of this random variable. Using this information, under the commonly used squared-error loss function and with a proposed adjusted estimate for the second parameter, we obtained a Bayesian reliability estimate of the failure probability distribution that is characterized by the power law process. Then, using a Monte Carlo simulation, we showed the superiority of the Bayesian estimate compared with the maximum likelihood estimate, and also the better performance of the proposed estimate compared with its maximum likelihood counterpart. In the next study, a Bayesian sensitivity analysis was performed via Monte Carlo simulation, using the same parameter as in the previous study and under the commonly used squared-error loss function, using mean square error comparison. The analysis was extended to the second parameter as a function of the first, based on the relationship between their maximum likelihood estimates. The simulation procedure demonstrated that the Bayesian estimates are superior to the maximum likelihood estimates and that the selection of the prior distribution was sensitive. Secondly, we found that the proposed adjusted estimate for the second parameter has better performance under a noninformative prior. In the fourth study, a Bayesian approach was applied to real data from breast cancer research. The purpose of the study was to investigate the applicability of a Bayesian analysis to the survival time of breast cancer data and to justify the applicability of the Bayesian approach to this domain. The estimation of one parameter, the survival function, and the hazard function were analyzed. The simulation analysis showed that the Bayesian estimate of the parameter performed better compared with the estimated value under the Wheeler procedure. The excellent performance of the Bayesian estimate is reflected even for small sample sizes. The Bayesian survival function was also found to be more efficient than its parametric counterpart. In the last study, a Bayesian analysis was carried out to investigate the sensitivity to the choice of the loss function. One of the parameters of the distribution that characterized the survival times for breast cancer data was estimated by applying a Bayesian approach under two different loss functions, and the estimates of the survival function were determined in the same setting. The simulation analysis showed that the choice of the squared-error loss function is robust in estimating the parameter and the survival function.
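For reference, the standard definitions behind the power law process mentioned above (textbook identities, not results of the dissertation):

```latex
% Intensity and mean value function of the power law (Weibull / NHPP) process:
\lambda(t) = \frac{\beta}{\theta}\left(\frac{t}{\theta}\right)^{\beta - 1},
\qquad
m(t) = \int_0^t \lambda(u)\, du = \left(\frac{t}{\theta}\right)^{\beta},
% so the system improves for \beta < 1 and deteriorates for \beta > 1, and the
% reliability over a mission of length s starting at age t is
R(s \mid t) = \exp\!\left\{-\bigl[m(t+s) - m(t)\bigr]\right\}
            = \exp\!\left\{-\left[\left(\tfrac{t+s}{\theta}\right)^{\beta}
                                 - \left(\tfrac{t}{\theta}\right)^{\beta}\right]\right\}.
```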
24

Sarfo, Amponsah Eric. "Mathematical Modeling of Epidemics: Parametric Heterogeneity and Pathogen Coexistence". Diss., North Dakota State University, 2020. https://hdl.handle.net/10365/31862.

Abstract:
According to the competitive exclusion principle, no two species can indefinitely occupy the same ecological niche. When competing strains of the same pathogen invade a homogeneous population, the strain with the largest basic reproductive ratio R0 will force the other strains to extinction. However, over 51 pathogens are documented to have multiple coexisting strains [3], contrary to the results from homogeneous models. In reality, the world is heterogeneous, with the population varying in susceptibility. As such, the study of epidemiology, and hence the problem of pathogen coexistence, should entail heterogeneity. Heterogeneous models tend to capture dynamics such as resistance to infection, giving more accurate results for the epidemics. This study focuses on the behavior of multi-pathogen heterogeneous models and tries to answer the question: what are the conditions on the model parameters that lead to pathogen coexistence? The goal is to understand the mechanisms in heterogeneous populations that mediate pathogen coexistence. Using the moment closure method, Fleming et al. [22] used a two-pathogen heterogeneous model (1.9) to show that pathogen coexistence was possible between strains of the baculovirus under certain conditions. In the first part of our study, we consider the same model using the hidden keystone variable (HKV) method. We show that under some conditions the moment closure method and the HKV method give the same results. We also show that pathogen coexistence is possible for a much wider range of parameters, give a complete analysis of the model (1.9), and give an explanation for the observed coexistence. The host population (gypsy moth) considered in the model (1.9) has a one-year life span, and hence demography was introduced to the model using a discrete-time model (1.12). In the second part of our study, we consider a multi-pathogen compartmental heterogeneous model (3.1) with continuous-time demography. We show using a Lyapunov function that pathogen coexistence is possible between multiple strains of the same pathogen. We provide analytical and numerical evidence that multiple strains of the same pathogen can coexist in a heterogeneous population.
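A toy two-class, two-strain model with demography shows the coexistence mechanism in miniature: when each strain transmits best in a different susceptibility class, both can persist. This is a drastic simplification of the distributed-heterogeneity models analyzed in the dissertation, and every parameter value below is hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta = np.array([[0.40, 0.10],     # beta[i, j]: transmission of strain j in class i;
                 [0.10, 0.40]])    # each strain "specializes" in one class
gamma, mu = 0.10, 0.02             # recovery rate, birth/death rate
p = np.array([0.5, 0.5])           # class shares at birth

def rhs(t, y):
    s, i = y[:2], y[2:]                    # susceptible classes, strain prevalences
    force = beta @ i                       # per-class force of infection
    ds = mu * p - s * force - mu * s
    di = (s @ beta) * i - (gamma + mu) * i # strain j grows iff sum_i beta[i,j] s_i > gamma + mu
    return np.concatenate([ds, di])

sol = solve_ivp(rhs, [0, 3000], [0.5, 0.5, 1e-3, 2e-3], rtol=1e-8, atol=1e-10)
print(np.round(sol.y[2:, -1], 4))   # both strains persist: heterogeneity-mediated coexistence
```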
25

Grimm, Alexander Rudolf. "Parametric Dynamical Systems: Transient Analysis and Data Driven Modeling". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/83840.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Dynamical systems are a commonly used and studied tool for simulation, optimization and design. In many applications, such as inverse problems, optimal control, shape optimization and uncertainty quantification, those systems typically depend on a parameter. The need for high fidelity in the modeling stage leads to large-scale parametric dynamical systems. Since these models need to be simulated for a variety of parameter values, the computational burden they incur becomes increasingly prohibitive. To address these issues, parametric reduced models have gained popularity in recent years. We are interested in constructing parametric reduced models that represent the full-order system accurately over a range of parameters. First, we define a global joint error measure in the frequency and parameter domain to assess the accuracy of the reduced model. Then, by assuming a rational form for the reduced model with poles both in the frequency and parameter domain, we derive necessary conditions for an optimal parametric reduced model in this joint error measure. Similar to the nonparametric case, Hermite interpolation conditions at the reflected images of the poles characterize the optimal parametric approximant. This result extends the well-known interpolatory H2 optimality conditions by Meier and Luenberger to the parametric case. We also develop a numerical algorithm to construct locally optimal reduced models. The theory and algorithm are data-driven, in the sense that only function evaluations of the parametric transfer function are required, not access to the internal dynamics of the full model. While this first framework operates on the continuous function level, assuming repeated transfer function evaluations are available, in some cases merely frequency samples might be given, without an option to re-evaluate the transfer function at desired points; in other words, the function samples in parameter and frequency are fixed. In this case, we construct a parametric reduced model that minimizes a discretized least-squares error over the finite set of measurements. Towards this goal, we extend Vector Fitting (VF) to the parametric case, solving a global least-squares problem in both frequency and parameter. The output of this approach might be a reduced model of moderate size. In that case, we perform a post-processing step that reduces the output of the parametric VF approach using H2 optimal model reduction for a special parametrization. The final model inherits the parametric dependence of the intermediate model but is of smaller order. A special case of a parameter in a dynamical system is a delay in the model equation, e.g., arising from a feedback loop, reaction time, delayed response and various other physical phenomena. Modeling such a delay comes with several challenges for the mathematical formulation, analysis, and solution. We address the issue of transient behavior for scalar delay equations. Besides the choice of an appropriate measure, we analyze the impact of the coefficients of the delay equation on the finite-time growth, which can be arbitrarily large purely through the influence of the delay.
Ph. D.
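The least-squares stage that Vector Fitting builds on can be illustrated compactly. The sketch below, with synthetic data, solves only the linear residue-fitting subproblem for a fixed set of poles; the full algorithm (and its parametric extension in the thesis) additionally relaxes and iteratively updates the poles.

```python
import numpy as np

# Given samples H(s_k) and fixed poles p_m, solve
#   min || sum_m r_m / (s_k - p_m) + d - H(s_k) ||_2
# for the residues r_m and the constant term d.
true_poles = np.array([-1.0 + 5j, -1.0 - 5j, -3.0])
true_res = np.array([0.5 - 1j, 0.5 + 1j, 2.0])
s = 1j * np.linspace(0.1, 20.0, 200)            # frequency samples
H = (true_res[None, :] / (s[:, None] - true_poles[None, :])).sum(axis=1) + 0.7

poles = np.array([-2.0 + 4j, -2.0 - 4j, -2.0])  # trial poles, held fixed here
A = np.hstack([1.0 / (s[:, None] - poles[None, :]), np.ones((len(s), 1))])
coef, *_ = np.linalg.lstsq(A, H, rcond=None)    # complex least squares
r_hat, d_hat = coef[:-1], coef[-1]
fit = A @ coef
print("relative error:", np.linalg.norm(fit - H) / np.linalg.norm(H))
```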
26

Sunderland, Eric J. "Building Information Modeling and the Parametric Boundary of Design". University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1277136795.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
27

Siddappaji, Kiran. "Parametric 3D Blade Geometry Modeling Tool for Turbomachinery Systems". University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1337264652.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
28

Krisztin, Tamás. "Semi-parametric spatial autoregressive models in freight generation modeling". Elsevier, 2018. https://publish.fid-move.qucosa.de/id/qucosa%3A72336.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This paper proposes a spatial autoregressive model framework for freight generation, combined with non-linear semi-parametric techniques. We demonstrate the capabilities of the model in a series of Monte Carlo studies. Moreover, evidence is provided for non-linearities in freight generation through an applied analysis of European NUTS-2 regions. We provide evidence for significant spatial dependence and for significant non-linearities related to employment rates in manufacturing and to regional infrastructure capabilities. The non-linear impacts are most pronounced in the agricultural freight generation sector.
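A rough sketch of the linear core of such a model (without the semi-parametric terms): simulating a spatial autoregressive process y = ρWy + Xβ + ε and recovering ρ by maximizing the concentrated log-likelihood on a grid. The ring-shaped weight matrix is an illustrative assumption standing in for a real contiguity matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# hypothetical row-normalized weights: each unit's neighbors are the two
# adjacent units on a ring (purely for illustration)
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

beta_true, rho_true = np.array([1.0, -2.0]), 0.4
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = np.linalg.solve(np.eye(n) - rho_true * W,
                    X @ beta_true + rng.normal(scale=0.5, size=n))

def conc_loglik(rho):
    # concentrated log-likelihood of the spatial lag model (up to constants)
    Ar = np.eye(n) - rho * W
    Ay = Ar @ y
    beta = np.linalg.lstsq(X, Ay, rcond=None)[0]
    e = Ay - X @ beta
    sigma2 = e @ e / n
    _, logdet = np.linalg.slogdet(Ar)
    return logdet - 0.5 * n * np.log(sigma2)

grid = np.linspace(-0.9, 0.9, 181)
rho_hat = grid[np.argmax([conc_loglik(r) for r in grid])]
print("rho_hat:", rho_hat)                      # should be near 0.4
```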
29

Ma, Tao. "Genetic algorithm-based combinatorial parametric optimization for the calibration of traffic microscopic simulation models". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58769.pdf.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
30

Miller, Jody Christopher. "Calibration and parametric study of the Alcator C-Mod charge exchange neutral particle analyzers". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/38366.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Sanz, Estapé Gerard. "Demand modeling for water networks calibration and leak localization". Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/393879.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The success in the application of any model-based methodology (e.g. design, control, supervision) highly depends on the availability of a well-calibrated model. There is no best or unique solution for the calibration problem, as methodologies are developed depending on which parameters have to be calibrated and on the final use of the model. The main objective of this thesis is to develop an adaptive water distribution network model which both calibrates its demands online and discerns between faults and system evolution. The calibration is focused on demands due to their daily variability and continuous evolution. The singular value decomposition is a powerful tool for solving the optimization problem. Additionally, a deep understanding of this tool makes it possible to redefine the demand model. A novel demand model is proposed, where each individual demand is defined as a combination of demand components. These demand components are calibrated demand multipliers that represent the behavior of nodes in a determined geographical zone. The membership of each nodal demand in each demand component emerges naturally from the analysis of the singular value decomposition of the sensitivity matrix. The same analysis is also used to define the location of sensors for the calibration. The calibration in water distribution networks needs to be performed online due to the continuous evolution of demands. During the calibration process, background leakages or bursts can be unintentionally incorporated into the demand model and treated as a system evolution (change in demands). To address this, a leak detection and localization approach is proposed that is coupled with the calibration methodology and identifies geographically distributed parameters. The approach consists in comparing the calibrated parameters with their historical values to assess whether changes in these parameters are caused by a system evolution or by the effect of leakage. The geographical distribution makes it possible to associate an unexpected behavior of the calibrated parameters (e.g. abrupt changes, trends, etc.) with a specific zone in the network. The proposed methods are exemplified on a simple academic network to help the reader fully understand their fundamentals. Furthermore, three real water distribution networks situated in Barcelona and Castelldefels are used to evaluate the performance of the whole method with real systems and real data. The good results obtained show the potential of the developed method and the viability of the real-time calibration and leak detection and localization processes.
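A toy illustration of the component-assignment idea, assuming a synthetic sensitivity matrix and a deliberately simplified assignment rule (each node goes to the dominant singular direction loading on its column); the thesis's analysis is richer than this.

```python
import numpy as np

# rows of S are pressure-sensor sensitivities, columns are nodal demands
# (synthetic matrix; in practice S comes from a hydraulic model, e.g. EPANET)
rng = np.random.default_rng(1)
n_sensors, n_nodes, n_components = 8, 30, 3
zones = rng.integers(0, n_components, size=n_nodes)      # hidden geography
S = rng.normal(size=(n_sensors, n_components))[:, zones]
S += 0.05 * rng.normal(size=S.shape)                     # small perturbation

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
# assign each node to the component whose singular vector loads most on it
membership = np.argmax(np.abs(Vt[:n_components]), axis=0)
# each nodal demand is then modeled as base_demand * component multiplier

# rough sanity check: recovered groups should match the hidden zones
# up to a relabeling of the components
print(len(set(zip(zones, membership))) == n_components)
```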
32

Froicu, Dragos Vasile. "Modeling and learning-based calibration of residual current devices". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
An important step in the manufacturing process of residual current devices is their calibration, a time-consuming procedure necessary for the proper operation of these devices. The main goal of this thesis is to propose a solution to increase the efficiency of the calibration workstations by reducing the overall time of this manufacturing step. To achieve this goal, a model considerably more accurate than the one currently in use is needed. The system under study is dominated by substantial uncertainty, owing to the large number of parameters involved, each characterized by very wide tolerances. In the approach used here, the governing physical equations are integrated with a Bayesian learning process; the benefits of knowing the underlying physics are thereby combined with the uncertainty estimation provided by the stochastic model, resulting in a more robust and accurate model. The Bayesian learning method used in this case study is Gaussian process modeling, starting from a physics-based prior and updating it as data are observed. This is repeated for every device that needs to be calibrated. The result is an adaptive modeling procedure that can easily be implemented and used directly in the manufacturing process to achieve a faster calibration and decrease the overall process time. Compared with the current procedure, the proposed solution is estimated to require about 24% fewer calibration attempts on average. The encouraging simulation results have prompted its implementation and testing on the real process.
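A minimal sketch of the physics-plus-GP idea with scikit-learn, assuming a made-up physics curve and synthetic calibration data: the physical model supplies the prior mean, and a Gaussian process learns the residual correction together with its uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def physics_prior(x):
    # hypothetical first-principles curve; stands in for the device's
    # governing equations, not the thesis's actual model
    return 5.0 + 0.8 * x

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(0.0, 10.0, 25))[:, None]        # calibration settings
y = physics_prior(X.ravel()) + 0.6 * np.sin(X.ravel()) \
    + 0.1 * rng.normal(size=25)                         # observed responses

# GP on the residuals: physics-based prior mean + nonparametric correction
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel())
gp.fit(X, y - physics_prior(X.ravel()))

Xnew = np.linspace(0.0, 10.0, 200)[:, None]
corr, std = gp.predict(Xnew, return_std=True)
pred = physics_prior(Xnew.ravel()) + corr               # posterior mean
```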
33

Cerezo, Davila Carlos. "Building archetype calibration for effective urban building energy modeling". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111487.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Thesis: Ph. D. in Building Technology, Massachusetts Institute of Technology, Department of Architecture, 2017.
Cataloged from PDF version of thesis. Page 156 blank.
Includes bibliographical references (pages 143-152).
In response to the current environmental challenges, city governments worldwide are developing action plans to both reduce GHG emissions and increase the resilience of their built environment. Given the relevance of energy use in buildings, such plans introduce a variety of efficiency and supply planning strategies ranging from the scale of buildings to full districts. Their implementation requires information about current building energy demands, and how these demands, and the city's energy ecosystem at large, may change as a result of a specific urban intervention. Unfortunately, metered data is not available at a sufficient level of detail, and cities face an "information gap" between the aggregate scale of their emission targets and the scale of implementation of energy strategies. To close it, municipalities and other interested stakeholders require modeling tools that provide accurate, spatially and temporally defined energy demands by building. Urban Building Energy Models (UBEMs) have been proposed in research as a bottom-up, physics-based urban modeling technique to estimate energy demands by building for current conditions and future scenarios. Given the large number of data inputs required in their generation, UBEMs have relied on their characterization through "building archetypes". Yet, in the absence of detailed building and energy data, this process has remained somewhat arbitrary, relying on deterministic assumptions and the subjective judgement of the modeler. The resulting simplification can potentially lead to predictions that misrepresent urban demands and misinform decision makers. In order to address these limitations and enable the large-scale application of UBEMs, this dissertation introduces a set of modeling and calibration techniques. First, in order to demonstrate the feasibility of citywide municipal UBEMs, an 80,000-building model is generated and simulated for the city of Boston, based exclusively on currently available and maintained municipal datasets. An automated modeling workflow and a new library file format for archetypes are developed for that purpose, and current limitations of municipal datasets and practices are identified. To improve the reliability of UBEMs in reproducing metered demands, a new calibration approach is proposed, which applies principles of Bayesian statistics to reduce the uncertainty in archetype parameters defined stochastically based on a sample of metered buildings. The method is demonstrated and validated on the model of a residential district in Kuwait with 323 annually metered buildings, showing errors below 5% in the mean and 15% in the variance when compared with the measured EUI distribution. The accuracy of the resulting UBEM in reproducing EUI distributions is also compared with typical deterministic approaches, resulting in an error improvement of 30-40%. The method is extended to cases where monthly energy data are available, and applied to the calibration of a sample of 2,662 residential buildings in Cambridge, MA. Finally, the relevance of calibrated archetype-based UBEMs in urban decisions is discussed from the perspectives of policy makers, energy providers, urban designers and real estate owners in two application cases in neighborhoods of Kuwait City and Boston.
by Carlos Cerezo Davila.
Ph. D. in Building Technology
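The Bayesian archetype-calibration idea can be caricatured with a simple rejection scheme: draw archetype parameters from a prior, simulate EUIs, and keep only draws whose simulated distribution matches the metered sample. The simulator, tolerances, and data below are all illustrative assumptions, not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(3)
metered_eui = rng.normal(145.0, 20.0, size=300)     # stand-in metered sample

def simulate_eui(infiltration, n):
    # hypothetical engine: EUI responds linearly to the uncertain parameter
    return 90.0 + 110.0 * infiltration + rng.normal(0.0, 15.0, size=n)

prior = rng.uniform(0.1, 1.0, size=20000)           # prior on the parameter
keep = []
for theta in prior:
    sim = simulate_eui(theta, 300)
    # accept if simulated mean and spread are close to the metered ones
    if (abs(sim.mean() - metered_eui.mean()) < 3.0
            and abs(sim.std() - metered_eui.std()) < 3.0):
        keep.append(theta)

posterior = np.array(keep)
print("posterior mean/sd:", posterior.mean(), posterior.std())
```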
34

Chang, Mun Kee. "Modeling and calibration of an acoustic emission measurement system". Thesis, Massachusetts Institute of Technology, 1990. https://hdl.handle.net/1721.1/128801.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1990.
Includes bibliographical references (leaves 128-134).
by Mun Kee Chang.
35

McBrayer, Mickey Charles. "Calibration of groundwater flow models for modeling and teaching /". Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Honegger, Ueli. "Gas turbine combustion modeling for a Parametric Emissions Monitoring System". Thesis, Manhattan, Kan. : Kansas State University, 2007. http://hdl.handle.net/2097/371.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Teodorescu, Iuliana. "Optimization in non-parametric survival analysis and climate change modeling". Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4782.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Many of the open problems of current interest in probability and statistics involve complicated data sets that do not satisfy the strong assumptions of being independent and identically distributed. Often, the samples are known only empirically, and making assumptions about underlying parametric distributions is not warranted by the insufficient information available. Under such circumstances, the usual Fisher or parametric Bayes approaches cannot be used to model the data or make predictions. However, this situation is quite often encountered in some of the main challenges facing statistical, data-driven studies of climate change, clinical studies, or financial markets, to name a few. We propose a novel approach, based on large deviations theory, convex optimization, and recent results on surrogate loss functions for classifier-type problems, that can be used to estimate the probability of large deviations for complicated data. This may include, for instance, high-dimensional data, highly correlated data, or very sparse data. The thesis introduces the new approach, reviews the currently known theoretical results, and then presents a number of numerical explorations meant to quantify how far the approximation of survival functions via the large deviations principle can be taken once we leave the limitations imposed by the existing theoretical results. The explorations are encouraging, indicating that the new approximation scheme may indeed be very efficient and can be used under much more general conditions than those warranted by the current theoretical thresholds. After applying the new methodology to two important contemporary problems (atmospheric CO2 data and El Niño/La Niña phenomena), we conclude with a summary outline of possible further research.
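For orientation, the large deviations approximation referred to above takes the form P(sample mean of n draws ≥ a) ≈ exp(−n I(a)), with rate function I(a) = sup over θ of {θa − log M(θ)}, where M is the moment generating function. A minimal numerical sketch for a standard normal sample, where I(a) = a²/2 is known in closed form, so the numeric answer can be checked:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rate(a):
    # I(a) = sup_theta (theta*a - log M(theta)); for a standard normal,
    # log M(theta) = theta**2 / 2, so the supremum is a**2 / 2
    obj = lambda th: -(th * a - th**2 / 2.0)
    return -minimize_scalar(obj).fun

n, a = 50, 1.0
p_ld = np.exp(-n * rate(a))                 # large-deviations approximation

# Monte Carlo check of P(sample mean >= a)
rng = np.random.default_rng(4)
means = rng.normal(size=(200_000, n)).mean(axis=1)
p_mc = (means >= a).mean()
print(rate(a), p_ld, p_mc)                  # rate(1.0) should be ~0.5
```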
38

Mburu, Fred Andrew (Fred Andrew Kimemia) 1971. "Context modeling : extending the parametric object model with design context". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8604.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 2001.
Includes bibliographical references (p. 57, 59).
Context can be described as the totality of ideas, situations and information that (a) relate to, (b) provide the origins for, and (c) influence our response, perspective or judgment of a thing. Design always takes place in a context. However, current Computer Aided Architectural Design (CAAD) systems don't have a way to represent design knowledge associated with context. This thesis presents a computational model, called the Context Modeling System (CMS), in which design context is modeled. Using this model, designers can define and prioritize design context. A prototype, based on CMS and on rule-based systems from the field of Artificial Intelligence, is also presented.
by Fred Andrew Mburu.
S.M.
39

Austin, Charles B. M. Arch Massachusetts Institute of Technology. "Cellular building components : investigation into parametric modeling and production logics". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33602.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Thesis (M. Arch.)--Massachusetts Institute of Technology, Dept. of Architecture, 2005.
MIT Institute Archives copy: P. 85-86 bound in reverse order.
Includes bibliographical references (p. 86).
Recent advances in digital fabrication technologies have sparked a renewed interest in topology and biological form. The ability to design and prototype structural forms inspired by nature has challenged architects' preconceived notions of space and form. With the assistance of parametric modeling and rapid prototyping, we now have not only the ability to physically generate complex forms but also the ability to create a seemingly infinite number of formal variations. As a result, architects have been pushed toward new spatial concepts. Among these are concepts that seek to create entire building systems out of a single material solution. Inspiration for such systems can be found by studying organic cellular structures. Unlike the component-based design processes of most architects, in which multiple problems are solved through multiple material solutions, natural systems tend to solve multiple problems through one material solution. This thesis is interested in answering the question, "Is it possible to create a building system (both structure and enclosure) out of a single adaptable building unit?" Furthermore, can the building unit also be capable of transforming from permeable to impermeable? If so, how might this challenge our existing notions of boundaries?
by Charles B. Austin.
M.Arch.
40

Sutton, Daniel. "Improved Reduced Order Modeling Strategies for Coupled and Parametric Systems". Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/34639.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis uses Proper Orthogonal Decomposition (POD) to model parametric and coupled systems. First, Proper Orthogonal Decomposition and its properties are introduced, along with how to compute the decomposition numerically. Next, a test case is used to show how well POD can be used to simulate and control a system. Finally, techniques for modeling a parametric system over a given range and a coupled system split into subdomains are explored, together with numerical results.
Master of Science
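A minimal sketch of POD computed via the SVD of a snapshot matrix, on synthetic data: the leading left singular vectors form the reduced basis, truncated at a chosen energy level.

```python
import numpy as np

# snapshot matrix: columns are state snapshots of a (hypothetical) simulation
rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 200)
x = np.linspace(0.0, 1.0, 500)
snapshots = np.sin(np.outer(x, 1 + 5 * t)) + 0.1 * rng.normal(size=(500, 200))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% of energy
basis = U[:, :r]                             # POD basis
reduced = basis.T @ snapshots                # reduced coordinates
reconstruction = basis @ reduced             # rank-r approximation
print("modes kept:", r)
```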
41

ROBINSON, DAVID GERALD. "MODELING RELIABILITY IMPROVEMENT DURING DESIGN (RELIABILITY GROWTH, BAYES, NON PARAMETRIC)". Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/183971.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Past research into the phenomenon of reliability growth has emphasised modeling a major reliability characteristic in terms of a specific parametric function. In addition, the time-to-failure distribution of the system was generally assumed to be exponential. The result was that in most cases the improvement was modeled as a nonhomogeneous Poisson process with intensity λ(t). Major differences among models centered on the particular functional form of the intensity function. The popular Duane model, for example, assumes that λ(t) = β(1 − α)t^(−α). The inability of any one family of distributions or parametric form to describe the growth process resulted in a multitude of models, each directed toward answering problems encountered in a particular test situation. This thesis proposes two new growth models, neither requiring the assumption of a specific function to describe the intensity λ(t). Further, the first of the models only requires that the time-to-failure distribution be unimodal and that the reliability become no worse as development progresses. The second model, while requiring the assumption of an exponential failure distribution, remains significantly more flexible than past models. Major points of this Bayesian model include: (1) the ability to incorporate data from a number of test sources (e.g. engineering judgement, CERT testing, etc.), (2) the assumption that the failure intensity is stochastically decreasing, and (3) accounting for changes that are incorporated into the design after testing is completed. These models were compared to a number of existing growth models and found to be consistently superior in terms of relative error and mean-square error. An extension to the second model is also proposed that allows system-level growth analysis to be accomplished based on subsystem development data. This is particularly significant in that, as systems become larger and more complex, development efforts concentrate on subsystem levels of design. No analysis technique currently exists that has this capability. The methodology is applied to data sets from two actual test situations.
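For reference, the Duane form quoted above, λ(t) = β(1 − α)t^(−α), implies E[N(t)] = βt^(1−α), so log(N(t)/t) is linear in log t with slope −α. A minimal sketch of this classical log-log fit on hypothetical cumulative failure times (the parametric baseline the thesis's Bayesian models are designed to avoid):

```python
import numpy as np

# cumulative failure times (hypothetical development test data, in hours)
t = np.array([8.0, 25.0, 60.0, 130.0, 250.0, 450.0, 800.0, 1400.0])
N = np.arange(1.0, len(t) + 1.0)        # cumulative failure counts at times t

# Duane postulate: log(N/t) = log(beta) - alpha * log(t)
slope, intercept = np.polyfit(np.log(t), np.log(N / t), 1)
alpha_hat = -slope                      # 0 < alpha < 1 indicates growth
beta_hat = np.exp(intercept)
print(alpha_hat, beta_hat)
```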
42

COLUMBU, SILVIA. "Parametric modeling of dependence of bivariate quantile regression residuals' signs". Doctoral thesis, Università degli Studi di Cagliari, 2015. http://hdl.handle.net/11584/266587.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
In this thesis, we propose a non-parametric method to study the dependence of the quantiles of a multivariate response conditional on a set of covariates. We define a statistic that measures the conditional probability of concordance of the signs of the residuals of the conditional quantiles of each univariate response. The probability of concordance is bounded from below by the value of largest possible negative dependence and from above by that of largest possible positive dependence. The value corresponding to the case of independence is contained in the interior of that interval. We recommend two distinct regression methods to model the conditional probability of concordance. The first is a logistic regression with a modified logit link. The second is a nonlinear regression method, where the outcome is modeled as a polynomial function of the linear predictor. Both are conceived to constrain the predicted probabilities to lie within the feasible range. The estimated probabilities can be tested against the values of largest possible dependence and independence. The method makes it possible to capture important aspects of the dependence of multivariate responses and to assess possible effects of covariates on such dependence. We use data on pulmonary dysfunction to illustrate the potential of the proposed method. We also suggest graphical tools for a correct interpretation of results.
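A minimal sketch of the concordance statistic itself, on synthetic bivariate data, using statsmodels for the two marginal quantile regressions; the thesis's modified-logit and polynomial models for the concordance probability are not reproduced here. For median regression, independence corresponds to a concordance of 0.5.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(6)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
# correlated bivariate responses (hypothetical data)
e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
y1 = 1.0 + 2.0 * x + e[:, 0]
y2 = -1.0 + x + e[:, 1]

q = 0.5
r1 = y1 - QuantReg(y1, X).fit(q=q).predict(X)   # median-regression residuals
r2 = y2 - QuantReg(y2, X).fit(q=q).predict(X)
concordance = np.mean(np.sign(r1) == np.sign(r2))
print("sign concordance:", concordance)          # > 0.5 => positive dependence
```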
43

Han, Lu. "A Concurrent Physical and Digital Modeling Environment / Exploring Tactile and Parametric Interactions in Design Modeling". Research Showcase @ CMU, 2017. http://repository.cmu.edu/theses/129.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis explores the potential of a concurrent physical and digital modeling environment. Inspired by constructionist notions of embodied cognition in design, a novel interface for design modeling is presented where designers can take advantage of the affordances of both physical and digital modeling environments, and work back and forth between the two. Built with Processing and the Kinect depth sensor, the system reads depth data from a physical modeling space to produce an enhanced digital representation in real time. The result is a proof-of-concept concurrent physical and digital modeling environment where users can design by moving and stacking wooden blocks in a physical space, which is represented (and enhanced) digitally as a "voxel space." Crucially, the system combines design affordances specific to each medium: while the physical space offers tactile and embodied forms of design interaction, the digital space offers different views and parametric editing capabilities, as well as the ability to save configurations and perform basic analyses. Following a short review of experimental computational and tangible interaction design interfaces, the thesis discusses the system's implementation, its limitations, and next steps.
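The depth-to-voxel step can be sketched in a few lines. This Python/numpy version (the thesis's system is built in Processing) assumes a pinhole camera model with illustrative intrinsics and a stand-in depth frame:

```python
import numpy as np

H, W = 240, 320
fx = fy = 285.0                         # hypothetical focal lengths (pixels)
cx, cy = W / 2.0, H / 2.0

rng = np.random.default_rng(7)
depth = 1.0 + 0.5 * rng.random((H, W))  # stand-in depth frame (meters)

# back-project every pixel to a 3-D point
v, u = np.mgrid[0:H, 0:W]
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy

# quantize into 5 cm voxels over a fixed working volume
res = 0.05
extent = np.array([[-1.0, 1.0], [-1.0, 1.0], [0.5, 2.0]])
pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
idx = np.floor((pts - extent[:, 0]) / res)
dims = np.ceil((extent[:, 1] - extent[:, 0]) / res).astype(int)
ok = np.all((idx >= 0) & (idx < dims), axis=1)
grid = np.zeros(dims, dtype=bool)
grid[tuple(idx[ok].astype(int).T)] = True   # occupied voxels
print("occupied voxels:", grid.sum())
```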
44

Amiri Parian, Jafar. "Sensor modeling, calibration and point positioning with terrestrial panoramic cameras". [S.l.] : [s.n.], 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17094.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Vogt, Florian. "Towards an interactive framework for upper airway modeling : integration of acoustic, biomechanic, and parametric modeling methods". Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7799.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The human upper airway anatomy consists of the jaw, tongue, pharynx, larynx, palate, nasal cavities, nostrils, lips, and adjacent facial structures. It plays a central role in speaking, mastication, breathing, and swallowing. The interplay and correlated movements between all the anatomical structures are complex, and basic physiological functions, such as the muscle activation patterns associated with chewing or swallowing, are not well understood. This work creates a modeling framework as a bridge between disciplines such as linguistics, dentistry, biomechanics, and acoustics to enable the integration of physiological knowledge with interactive simulation methods. This interactive model of the upper airway system allows better understanding of the anatomical structures and their overall function. A three-dimensional computational modeling framework is proposed to mimic the behavior of the upper airway anatomy as a system by combining biomechanic, parametric, and acoustic modeling methods. Graphical user interface components enable the interactive manipulation of models and the orchestration of physiological functions. A three-dimensional biomechanical tongue model is modified to serve as a reference model of the modeling framework, demonstrating the integration of an existing model and enabling interactivity and validation procedures. Interactivity was achieved by introducing a general-purpose fast linear finite element muscle model. Feasible behavior of the biomechanical tongue model is ensured by comparison with a reference model and by matching the model to medical image data. In addition to the presented generic tongue model, individual differences in shape and function are important for clinical applications. Different medical image modalities may jointly enable guidance of the creation of models of individuals' anatomy. Automatic methods to extract shape and function are investigated to demonstrate the feasibility of upper airway image-based modeling within this framework. This work may be continued in many other directions to simulate the upper airway for speaking, breathing, and swallowing. For example, progress has already been made toward a complete vocal tract model whereby the tongue model, jaw model, and acoustic airway are connected. Further, it is planned to apply the same tissue modeling methods to represent other muscle groups and to model the interaction with other anatomical substructures of the vocal tract such as the face, lips and soft palate.
46

Branscum, Adam Jacob. "Epidemiologic modeling and data analysis : Bayesian parametric, nonparametric, and semiparametric approaches /". For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2005. http://uclibs.org/PID/11984.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Karademir, Salahaddin Mirac. "A Parametric Study On Three Dimensional Modeling Of Parallel Tunnel Interactions". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612462/index.pdf.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
A parametric study is performed to investigate parallel tunnel interaction. Three-dimensional finite element analyses were performed to determine the effects of soil stiffness, pillar width and advancement level of the second tunnel on the displacement, bending moment and shear force behaviour of the previously constructed tunnel. In the analysis, the PLAXIS 3D Tunnel geotechnical finite element package was used. This program allows the user to define the actual construction stages of a NATM tunnel construction. In the analysis, construction stages are defined such that one tunnel is constructed first and construction of the second tunnel starts after the first is complete. The mid-length section of the first tunnel is investigated at six different locations and seven different advancement levels in terms of displacement, bending moment and shear forces. It is found that displacement and bending moment behaviour are more strongly related to soil stiffness and pillar width than shear force behaviour is. While the advancement level of the second tunnel causes different types of responses in the shear force behaviour, it does not affect the type of behaviour of displacements and bending moments. Another finding of the research is that pillar width has a more evident influence on the behaviour of displacements and bending moments than soil stiffness does. It is also found that the interaction effect may be eliminated by increasing the pillar width to a value equal to or larger than approximately 2.5-3.0 D (diameter) for an average soil stiffness value.
48

Heinz, Daniel. "Hyper Markov Non-Parametric Processes for Mixture Modeling and Model Selection". Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/11.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Markov distributions describe multivariate data with conditional independence structures. Dawid and Lauritzen (1993) extended this idea to hyper Markov laws for prior distributions. A hyper Markov law is a distribution over Markov distributions whose marginals satisfy the same conditional independence constraints. These laws have been used for Gaussian mixtures (Escobar, 1994; Escobar and West, 1995) and contingency tables (Liu and Massam, 2006; Dobra and Massam, 2009). In this work, we develop a family of non-parametric hyper Markov laws that we call hyper Dirichlet processes, combining the ideas of hyper Markov laws and non-parametric processes. Hyper Dirichlet processes are joint laws with Dirichlet process laws for particular marginals. We also describe a more general class of Dirichlet processes that are not hyper Markov but still possess useful properties for describing graphical data. The graphical Dirichlet processes are simple Dirichlet processes with a hyper Markov base measure. This class allows an extremely straightforward application of existing Dirichlet knowledge and technology to graphical settings. Given the widespread use of Dirichlet processes, there are many applications of this framework waiting to be explored. One broad class of applications, known as Dirichlet process mixtures, has been used for constructing mixture densities such that the underlying number of components may be determined by the data (Lo, 1984; Escobar, 1994; Escobar and West, 1995). I consider the use of the new graphical Dirichlet process in this setting, which imparts a conditional independence structure inside each component. In other words, given the component or cluster membership, the data exhibit the desired independence structure. We discuss two applications. Expanding on the work of Escobar and West (1995), we estimate a non-parametric mixture of Markov Gaussians using a Gibbs sampler. Secondly, we employ the Mode-Oriented Stochastic Search of Dobra and Massam (2009) for determining a suitable conditional independence model, focusing on contingency tables. In general, the mixing induced by a Dirichlet process does not drastically increase the complexity beyond that of a simpler Bayesian hierarchical model without mixture components. We provide a specific representation for decomposable graphs with useful algorithms for local updates.
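The "number of components determined by the data" property of Dirichlet process mixtures is easy to see through the Chinese restaurant process representation of the DP's partition structure. A minimal sketch of that generic DP machinery (not the hyper or graphical constructions introduced in the thesis):

```python
import numpy as np

rng = np.random.default_rng(8)

def crp(n, alpha):
    """Sample a partition of n items from a Chinese restaurant process.

    Item i joins an existing cluster with probability proportional to its
    size, or opens a new cluster with probability proportional to alpha.
    """
    assignments, counts = [0], [1]
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)        # open a new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

labels = crp(200, alpha=1.0)
print("number of clusters:", len(set(labels)))   # grows like alpha * log(n)
```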
49

Bischof, Ryan. "A Parametric Framework for Modeling and Manufacturing an Ant Neck Joint". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1598193155252658.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Garber, Emily Ann. "Chamber Hall Threshold Design and Acoustic Surface Shaping with Parametric Modeling". Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/32928.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The architectural opportunity to develop the sound and light lock of a performance venue as a space that engages and prepares the audience for a performance is one that is sadly missing from most halls. I have explored the development of this threshold as a true architectural space, one that enhances the overall experience for the audience members, and, by introducing a parametric process into the architectural and acoustic development, have proposed a unique process for the design of concert halls. From physical model building to analysis by computer simulation, digital technology has undoubtedly advanced the realm of acoustic prediction. But common computer prediction programs that exist today are still essentially digitized applications of the analog model building process: construct a model, analyze it, make adjustments, and repeat until the desired results are achieved. A parametric approach to model building allows design changes, and the significance of those changes, to be recognized in real time, an invaluable tool in the development of a sound-sensitive space. Utilizing the 3D software Rhinoceros and its scripting plug-in Grasshopper, it becomes possible to easily visualize crucial first-order reflections relative to surfaces that can be controlled and manipulated in very precise ways. This software is becoming more popular amongst architects and designers, and the prediction process will be an extension of this software into the field of acoustics. By using software already in the design vernacular, there is a seamless transition between design and analysis, making for a more cohesive project.
Master of Architecture
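The first-order reflection computation that such a parametric definition visualizes is the classical image-source construction: mirror the source across the reflecting plane, then intersect the line from the image to the listener with that plane. A minimal sketch with illustrative hall geometry:

```python
import numpy as np

def image_source(src, p0, n_hat):
    # mirror the source point across the plane through p0 with unit normal n_hat
    return src - 2.0 * np.dot(src - p0, n_hat) * n_hat

src = np.array([0.0, 4.0, 2.0])          # source position (illustrative)
listener = np.array([10.0, 1.5, 2.0])    # listener position
p0 = np.array([5.0, 0.0, 0.0])           # point on the reflecting surface
n_hat = np.array([0.0, 1.0, 0.0])        # unit normal of the surface

img = image_source(src, p0, n_hat)
# reflection point: intersection of the img->listener segment with the plane
d = listener - img
tstar = np.dot(p0 - img, n_hat) / np.dot(d, n_hat)
refl_point = img + tstar * d             # valid specular point if 0 < tstar < 1
path_len = np.linalg.norm(img - listener)   # total reflected path length
print(refl_point, path_len)
```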
