Follow this link to see other types of publications on the topic: Weighted simulation.

Theses on the topic "Weighted simulation"

Consult the top 50 theses for your research on the topic "Weighted simulation".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Shah, Sandeep R. "Perfect simulation of conditional and weighted models". Thesis, University of Warwick, 2004. http://wrap.warwick.ac.uk/59406/.

Full text
Abstract
This thesis is about probabilistic simulation techniques. Specifically, we consider the exact or perfect sampling of spatial point process models via the dominated CFTP protocol. Fundamental among point process models is the Poisson process, which formalises the notion of complete spatial randomness; synonymous with the Poisson process is the Boolean model. The models treated here are the conditional Boolean model and the area-interaction process. The latter is obtained by weighting a Poisson process according to the area of its associated Boolean model. Spatial birth-death processes are a fundamental tool employed in the perfect simulation of point processes. Perfect sampling algorithms for the conditional Boolean and area-interaction models are described. Birth-death processes are also employed in order to develop an exact omnithermal algorithm for the area-interaction process. This enables the simultaneous sampling of the process for a whole range of parameter values using a single realization. A variant of rejection sampling, namely 2-Stage Rejection, and exact Gibbs samplers for the conditional Boolean and area-interaction processes are also developed here. A quantitative comparison of the methods employing 2-Stage Rejection, spatial birth-death processes and Gibbs samplers is carried out, with performance measured by actual run times of the algorithms. Validation of the perfect simulation algorithms is carried out via χ² tests.
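For orientation, the area-interaction process obtained by weighting a Poisson process by the area of its Boolean model is commonly written (following Baddeley and van Lieshout) with an unnormalised density of the form below; this standard formulation is given for reference only and is not quoted from the thesis.

```latex
% Configuration x = {x_1, ..., x_n(x)}; beta > 0 is an intensity parameter,
% gamma > 0 the interaction parameter, and the exponent is the area of the
% union of discs of radius r centred at the points (the associated Boolean model).
f(x) \;\propto\; \beta^{\,n(x)}\, \gamma^{-\left|\bigcup_{i=1}^{n(x)} B(x_i,\, r)\right|}
```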
2

Graham, Mark. "The development and application of a simulation system for diffusion-weighted MRI". Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10047351/.

Full text
Abstract
Diffusion-weighted MRI (DW-MRI) is a powerful, non-invasive imaging technique that allows us to infer the structure of biological tissue. It is particularly well suited to the brain, and is used by clinicians and researchers studying its structure in health and disease. High quality data is required to accurately characterise tissue structure with DW-MRI. Obtaining such data requires the careful optimisation of the image acquisition and processing pipeline, in order to maximise image quality and minimise artefacts. This thesis extends an existing MRI simulator to create a simulation system capable of producing realistic DW-MR data, with artefacts, and applies it to improve the acquisition and processing of such data. The simulator is applied in three main ways. Firstly, a novel framework for evaluating post-processing techniques is proposed and applied to assess commonly used strategies for the correction of motion, eddy-current and susceptibility artefacts. Secondly, it is used to explore the often overlooked susceptibility-movement interaction. It is demonstrated that this adversely impacts analysis of DW-MRI data, and a simple modification to the acquisition scheme is suggested to mitigate its impact. Finally, the simulation is applied to develop a new tool to perform automatic quality control. Simulated data is used to train a classifier to detect movement artefacts in data, with performance approaching that of a classifier trained on real data whilst requiring much less manually-labelled training data. It is hoped that both the findings in this thesis and the simulation tool itself will benefit the DW-MRI community. To this end, the tool is made freely available online to aid the development and validation of methods for acquiring and processing DW-MRI data.
3

Giacalone, Marco. "Lambda_c detection using a weighted Bayesian PID approach". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11431/.

Full text
Abstract
The aim of this thesis is to estimate the performance of the ALICE detector in detecting the Lambda_c baryon in Pb-Pb collisions using an innovative approach to particle identification. The main idea of the new approach is to replace the usual particle selection, based on cuts applied to the detector signals, with a selection that uses the probabilities derived from Bayes' theorem (which is why it is called "weighted Bayesian"). To establish which method is the most efficient, a comparison with other standard approaches used in ALICE is presented. To this end, a "fast" Monte Carlo simulation software was implemented, configured with the particle abundances expected in the new LHC energy regime and with the observed performance of the detector. A realistic estimate of Lambda_c production was then derived by combining known results from previous experiments, and this was used to estimate the significance expected with the statistics of LHC Run 2 and Run 3. The thesis describes the physics of ALICE, including the Standard Model, quantum chromodynamics and the quark-gluon plasma, then analyses some recent experimental results (RHIC and LHC), describes the operation of ALICE and its components, and finally turns to the analysis of the results obtained. These showed that the method has a higher efficiency than the usual approaches in ALICE and that, consequently, a "full" simulation should be run to quantify the performance of the new method even better and to verify the results obtained in a fully realistic scenario.
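For orientation, the "weighted Bayesian" particle identification idea sketched above combines detector-signal likelihoods with prior particle abundances through Bayes' theorem; the generic form below uses illustrative notation and is not taken from the thesis.

```latex
% Posterior probability that a track belongs to species H_i given detector signal S,
% with prior (expected) abundances C_k and detector response P(S | H_k):
P(H_i \mid S) \;=\; \frac{C_i\, P(S \mid H_i)}{\sum_{k} C_k\, P(S \mid H_k)}
```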
4

Potgieter, Andrew. "A Parallel Multidimensional Weighted Histogram Analysis Method". Thesis, University of Cape Town, 2014. http://pubs.cs.uct.ac.za/archive/00000986/.

Full text
Abstract
The Weighted Histogram Analysis Method (WHAM) is a technique used to calculate free energy from molecular simulation data. WHAM recombines biased distributions of samples from multiple Umbrella Sampling simulations to yield an estimate of the global unbiased distribution. The WHAM algorithm iterates two coupled, non-linear equations until convergence at an acceptable level of accuracy. The equations have quadratic time complexity for a single reaction coordinate. However, this increases exponentially with the number of reaction coordinates under investigation, which makes multidimensional WHAM a computationally expensive procedure. There is potential to use general purpose graphics processing units (GPGPU) to accelerate the execution of the algorithm. Here we develop and evaluate a multidimensional GPGPU WHAM implementation to investigate the potential speed-up attained over its CPU counterpart. In addition, to avoid the cost of multiple Molecular Dynamics simulations and for validation of the implementations, we develop a test system to generate samples analogous to Umbrella Sampling simulations. We observe a maximum problem-size-dependent speed-up of approximately 19 for the GPGPU-optimized WHAM implementation over our single-threaded CPU-optimized version. We find that the WHAM algorithm is amenable to GPU acceleration, which provides the means to study ever more complex molecular systems in reduced time periods.
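As a rough illustration of the two coupled WHAM equations described above, the sketch below shows the standard serial self-consistent iteration, assuming K umbrella windows with bias energies tabulated on a common histogram grid; it is not the parallel GPGPU implementation developed in the thesis.

```python
import numpy as np

def wham(hist, n_samples, bias, beta, n_iter=10000, tol=1e-7):
    """Standard self-consistent WHAM iteration (serial, for illustration only).

    hist      : (n_bins,) total histogram counts summed over all windows
    n_samples : (K,) number of samples drawn in each umbrella window
    bias      : (K, n_bins) bias energy U_k evaluated at each bin centre
    beta      : 1 / (k_B T)
    Returns the unbiased probability per bin and the window free energies f_k.
    """
    K, n_bins = bias.shape
    f = np.zeros(K)                                   # free-energy shift per window
    for _ in range(n_iter):
        # P(bin) = sum_k h_k(bin) / sum_k N_k exp(beta * (f_k - U_k(bin)))
        denom = np.sum(n_samples[:, None] * np.exp(beta * (f[:, None] - bias)), axis=0)
        p = hist / denom
        p /= p.sum()
        # f_k = -(1/beta) * ln sum_bins P(bin) exp(-beta * U_k(bin))
        f_new = -np.log(np.sum(p[None, :] * np.exp(-beta * bias), axis=1)) / beta
        f_new -= f_new[0]                             # fix the arbitrary constant
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return p, f
```

In the multidimensional case the bins simply index a grid over several reaction coordinates, which is where the exponential growth in cost mentioned in the abstract comes from.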
5

Simmler, Urs. "Simulation-News in Creo 1.0 & 2.0 & 3.0 : weighted Links : "Tipps & Tricks"". Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-114511.

Full text
Abstract
- Review of simulation news in Creo 1.0 & Creo 2.0
- Outlook: simulation news in Creo 3.0
- Weighted links: "Tips & Tricks" with concrete examples:
  o Bearing stiffnesses (e.g. rolling-element bearings)
  o Mechanism connections (pin, slider, cylinder, ... joints)
  o Preloaded bolts (with shell/beam elements)
  o Applying a moment-free enforced displacement
  o "Total load at point": measuring the point displacement
  o Connecting mass elements
  o Preventing singularities
- Live presentation in Creo 2.0
  o Bearing stiffnesses (e.g. rolling-element bearings)
  o Mechanism connections (pin, slider, cylinder, ... joints)
6

Kamunge, Daniel. "A non-linear weighted least squares gas turbine diagnostic approach and multi-fuel performance simulation". Thesis, Cranfield University, 2011. http://dspace.lib.cranfield.ac.uk/handle/1826/5612.

Full text
Abstract
The gas turbine, which has found numerous applications in air, land and sea domains as a propulsion system, electricity generator and prime mover, is subject to deterioration of its individual components. In the past, various methodologies have been developed to quantify this deterioration with varying degrees of success. No single method addresses all issues pertaining to gas turbine diagnostics, and thus room for improvement exists. The first part of this research investigates the feasibility of non-linear weighted least squares as a gas turbine component deterioration quantification tool. Two new weighting schemes have been developed to address measurement noise. Four cases have been run to demonstrate the non-linear weighted least squares method in conjunction with the new weighting schemes. Results demonstrate that the non-linear weighted least squares method effectively addresses measurement noise and quantifies gas path component faults with improved accuracy over its linear counterpart and over methods that do not address measurement noise. Since gas turbine diagnostics is based on the analysis of engine performance at given ambient and power-setting conditions, accurate and reliable engine performance modelling and simulation models are essential for meaningful gas turbine diagnostics. The second part of this research therefore sought to develop a multi-fuel and multi-caloric simulation method with a view to improving simulation accuracy. The method developed is based on non-linear interpolation of fuel tables. Fuel tables for Jet-A, UK natural gas, kerosene and diesel were produced. Six case studies were carried out and the results demonstrate that the method has significantly improved accuracy over linear interpolation-based methods and methods that assume thermal perfection.
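As a generic illustration of weighted non-linear least squares (scaling residuals by the measurement noise so that noisy points count for less), here is a minimal sketch; the exponential model and noise levels are invented for the example, and this is not the thesis' gas-path diagnostic code.

```python
import numpy as np
from scipy.optimize import least_squares

def weighted_residuals(theta, model, x, y, sigma):
    """Residuals divided by the per-point noise level sigma (the weighting)."""
    return (y - model(x, theta)) / sigma

# Illustrative model: y = a * exp(b * x) with heteroscedastic measurement noise.
model = lambda x, th: th[0] * np.exp(th[1] * x)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
true = np.array([2.0, -1.5])
sigma = 0.05 + 0.1 * rng.random(50)                  # assumed noise level per point
y = model(x, true) + sigma * rng.standard_normal(50)

fit = least_squares(weighted_residuals, x0=[1.0, -1.0], args=(model, x, y, sigma))
print(fit.x)   # estimates close to `true`, with the noisiest points downweighted
```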
7

Landon, Colin Donald. "Weighted particle variance reduction of Direct Simulation Monte Carlo for the Bhatnagar-Gross-Krook collision operator". Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61882.

Full text
Abstract
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 67-69).
Direct Simulation Monte Carlo (DSMC), the prevalent stochastic particle method for high-speed rarefied gas flows, simulates the Boltzmann equation using distributions of representative particles. Although very efficient in producing samples of the distribution function, the slow convergence associated with statistical sampling makes DSMC simulation of low-signal situations problematic. In this thesis, we present a control-variate-based approach to obtain a variance-reduced DSMC method that dramatically enhances statistical convergence for low-signal problems. Here we focus on the Bhatnagar-Gross-Krook (BGK) approximation, which, as we show, exhibits special stability properties. The BGK collision operator, an approximation common in a variety of fields involving particle-mediated transport, drives the system towards a local equilibrium at a prescribed relaxation rate. Variance reduction is achieved by formulating desired (non-equilibrium) simulation results in terms of the difference between a non-equilibrium and a correlated equilibrium simulation. Subtracting the two simulations results in substantial variance reduction, because the two simulations are correlated. Correlation is achieved using likelihood weights which relate the relative probability of occurrence of an equilibrium particle compared to a non-equilibrium particle. The BGK collision operator lends itself naturally to the development of unbiased, stable weight evaluation rules. Our variance-reduced solutions show good agreement with simple analytical solutions and with solutions obtained using a variance-reduced BGK-based particle method that does not resemble DSMC as strongly. A number of algorithmic options are explored and our final simulation method, (VR)2-BGK-DSMC, emerges as a simple and stable version of DSMC that can efficiently resolve arbitrarily low-signal flows.
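The underlying variance-reduction principle (estimate a small signal as the difference between the non-equilibrium simulation and a correlated equilibrium one with known mean) can be shown with a generic control-variate toy example; this only illustrates the principle and is in no way the DSMC/BGK algorithm of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu = 100_000, 0.01                 # a "low-signal" problem: mu is close to 0

x = rng.normal(mu, 1.0, n)            # non-equilibrium samples
g = x**3 + x                          # observable of interest, E[g] = mu**3 + 4*mu

x_eq = x - mu                         # perfectly correlated "equilibrium" samples
h = x_eq**3 + x_eq                    # same observable, with known mean E[h] = 0

plain = g.mean()                      # crude Monte Carlo: large statistical noise
reduced = (g - h).mean()              # unbiased, but with far smaller variance
print(plain, reduced)
```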
8

Xu, Zhouyi. "Stochastic Modeling and Simulation of Gene Networks". Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/645.

Full text
Abstract
Recent research in experimental and computational biology has revealed the necessity of using stochastic modeling and simulation to investigate the functionality and dynamics of gene networks. However, sophisticated stochastic modeling techniques and efficient stochastic simulation algorithms (SSAs) for analyzing and simulating gene networks are still lacking. Therefore, the objective of this research is to design highly efficient and accurate SSAs, to develop stochastic models for certain real gene networks and to apply stochastic simulation to investigate such gene networks. To achieve this objective, we developed several novel efficient and accurate SSAs. We also proposed two stochastic models for the circadian system of Drosophila and simulated the dynamics of the system. The K-leap method constrains the total number of reactions in one leap to a properly chosen number, thereby improving simulation accuracy. Since the exact SSA is a special case of the K-leap method when K=1, the K-leap method can naturally change from the exact SSA to an approximate leap method during simulation if necessary. The hybrid tau/K-leap and the modified K-leap methods are particularly suitable for simulating gene networks where certain reactant molecular species have a small number of molecules. Although the existing tau-leap methods can significantly speed up stochastic simulation of certain gene networks, the mean of the number of firings of each reaction channel is not equal to the true mean. Therefore, all existing tau-leap methods produce biased results, which limits simulation accuracy and speed. Our unbiased tau-leap methods remove the bias in simulation results that exists in all current leap SSAs and therefore significantly improve simulation accuracy without sacrificing speed. In order to efficiently estimate the probability of rare events in gene networks, we applied the importance sampling technique to the next reaction method (NRM) of the SSA and developed a weighted NRM (wNRM). We further developed a systematic method for selecting the values of importance sampling parameters. Applying our parameter selection method to the wSSA and the wNRM, we obtain an improved wSSA (iwSSA) and an improved wNRM (iwNRM), which can provide substantial improvement over the wSSA in terms of simulation efficiency and accuracy. We also developed a detailed and a reduced stochastic model for circadian rhythm in Drosophila and employed our SSA to simulate circadian oscillations. Our simulations showed that both models could produce sustained oscillations and that the oscillation is robust to noise, in the sense that there is very little variability in the oscillation period although there are significant random fluctuations in oscillation peaks. Moreover, although average time delays are essential to the simulation of oscillation, random changes in time delays within a certain range around the fixed average time delay cause little variability in the oscillation period. Our simulation results also showed that both models are robust to parameter variations and that the oscillation can be entrained by light/dark cycles.
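For context, the exact SSA that the K-leap and weighted variants described above build on is Gillespie's direct method; the sketch below is a generic textbook version, not the author's algorithms.

```python
import numpy as np

def gillespie_direct(x0, stoich, propensity, t_max, seed=0):
    """Exact stochastic simulation of a reaction network (Gillespie's direct method).

    x0         : initial copy numbers, shape (n_species,)
    stoich     : state-change vectors, shape (n_reactions, n_species)
    propensity : function mapping a state to the vector of reaction propensities
    """
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                                   # no reaction can fire any more
        t += rng.exponential(1.0 / a0)              # waiting time to the next event
        j = rng.choice(len(a), p=a / a0)            # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Illustrative birth-death process: 0 -> S at rate 10, S -> 0 at rate 0.5 per molecule.
stoich = np.array([[1], [-1]])
traj = gillespie_direct([0], stoich, lambda x: np.array([10.0, 0.5 * x[0]]), t_max=50.0)
```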
9

Klann, Dirk. "The Role of Information Technology in the Airport Business: A Retail-Weighted Resource Management Approach for Capacity-Constrained Airports". Thesis, Cranfield University, 2009. http://hdl.handle.net/1826/4474.

Full text
Abstract
Much research has been undertaken to gain insight into the business alignment of IT. This alignment basically aims to improve a firm's performance through improved harmonization of the business function and the IT function within the firm. The thesis discusses previous approaches and constructs an overall framework which a potential approach needs to fit in. Operating in a highly regulated industry, airports have little room left to increase revenues. However, the retailing business has proven to be an area that may contribute towards higher income for airport operators. Consequently, airport management should focus on supporting this business segment. Nevertheless, it needs to be taken into account that smooth airport operations are a precondition for a successful retailing business at an airport. Applying the concept of information intensity, the processes of gate allocation and airport retailing have been examined to appraise the potential that may be realized upon (improved) synchronization of the two. It has been found that the lever is largest in the planning phase (i.e. prior to operations), and thus support by means of information technology (for information distribution and improved planning) may help to enable an improved overall retail performance. In order to determine potential variables that might influence the output, a process decomposition has been conducted along with the development of an appropriate information model. The derived research model has been tested in different scenarios. For this purpose an adequate gate allocation algorithm has been developed and implemented in a purpose-written piece of software. To calibrate the model, actual data (several hundred thousand data items from Frankfurt Airport) from two flight plan seasons have been used. Key findings: the results show that under the conditions described it seems feasible to increase retail sales in the range of 9% to 21%. The most influential factors (besides the constraining rule set and a retail area's specific performance) proved to be a flight's minimum and maximum time at a gate as well as its buffer time at the gate. However, as some of the preconditions may not be accepted by airport management or national regulators, the results may be taken as an indication of the cost incurred if the suggested approach is not adopted. The transferability to other airport business models and the limitations of the research approach are discussed at the end, along with suggestions for future areas of research.
10

Pant, Mohan Dev. "Simulating Univariate and Multivariate Burr Type III and Type XII Distributions Through the Method of L-Moments". OpenSIUC, 2011. https://opensiuc.lib.siu.edu/dissertations/401.

Full text
Abstract
The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., skew and kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and heavy-tailed non-normal distributions. Conventional moment-based estimators (i.e., the mean, variance, skew, and kurtosis) are traditionally used to characterize the distribution of a random variable or in the context of fitting data. However, conventional moment-based estimators can (a) be substantially biased, (b) have high variance, or (c) be influenced by outliers. In view of these concerns, a characterization of the Burr Type III and Type XII distributions through the method of L-moments is introduced. Specifically, systems of equations are derived for determining the shape parameters associated with user-specified L-moment ratios (e.g., L-skew and L-kurtosis). A procedure is also developed for the purpose of generating non-normal Burr Type III and Type XII distributions with arbitrary L-correlation matrices. Numerical examples are provided to demonstrate that L-moment-based Burr distributions are superior to their conventional moment-based counterparts in the context of estimation, distribution fitting, and robustness to outliers. Monte Carlo simulation results are provided to demonstrate that L-moment-based estimators are nearly unbiased, have relatively small variance, and are robust in the presence of outliers for any sample size. Simulation results are also provided to show that the methodology used for generating correlated non-normal Burr Type III and Type XII distributions is valid and efficient. Specifically, Monte Carlo simulation results are provided to show that the empirical values of L-correlations among simulated Burr Type III (and Type XII) distributions are in close agreement with the specified L-correlation matrices.
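For readers unfamiliar with L-moments, the first four sample L-moments can be obtained from probability-weighted moments as sketched below; this is a generic estimator given for illustration (the Pareto sample is arbitrary), not the systems of equations derived in the dissertation.

```python
import numpy as np

def sample_l_moments(data):
    """First four sample L-moments via probability-weighted moments (needs n >= 4).

    Returns (l1, l2, L-skew, L-kurtosis).
    """
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0                      # L-location
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2

# Example on a heavy-tailed sample, where L-moment ratios remain well behaved:
print(sample_l_moments(np.random.default_rng(1).pareto(3.0, 5000)))
```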
11

Can, Mutan Oya. "Comparison Of Regression Techniques Via Monte Carlo Simulation". Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12605175/index.pdf.

Full text
Abstract
The ordinary least squares (OLS) method is one of the most widely used for modelling the functional relationship between variables. However, this estimation procedure relies on some assumptions, and the violation of these assumptions may lead to non-robust estimates. In this study, the simple linear regression model is investigated for conditions in which the distribution of the error terms is Generalised Logistic. Some robust and nonparametric methods such as modified maximum likelihood (MML), least absolute deviations (LAD), Winsorized least squares, least trimmed squares (LTS), Theil and weighted Theil are compared via computer simulation. In order to evaluate estimator performance, the mean, variance, bias, mean square error (MSE) and relative mean square error (RMSE) are computed.
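As a flavour of this kind of comparison, the toy Monte Carlo below contrasts the OLS and Theil (Theil-Sen) slope estimators under skewed, long-tailed errors; the error model and sample sizes are invented for illustration and do not reproduce the simulation design of the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, n_rep, true_slope = 30, 2000, 2.0
ols_slopes, theil_slopes = [], []

for _ in range(n_rep):
    x = rng.uniform(0.0, 10.0, n)
    # skewed, long-tailed errors as a stand-in for a non-normal error distribution
    e = rng.standard_t(df=3, size=n) + rng.exponential(1.0, n)
    y = 1.0 + true_slope * x + e
    ols_slopes.append(np.polyfit(x, y, 1)[0])
    theil_slopes.append(stats.theilslopes(y, x)[0])

for name, est in [("OLS", ols_slopes), ("Theil", theil_slopes)]:
    est = np.array(est)
    print(name, "bias =", est.mean() - true_slope, "MSE =", ((est - true_slope) ** 2).mean())
```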
12

Nagahara, Shizue. "Studies on Functional Magnetic Resonance Imaging with Higher Spatial and Temporal Resolutions". 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188540.

Full text
13

Ruiz, Fernández Guillermo. "3D reconstruction for plastic surgery simulation based on statistical shape models". Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/667049.

Full text
Abstract
This thesis has been carried out at Crisalix in collaboration with the Universitat Pompeu Fabra within the program of Doctorats Industrials. Crisalix has the mission of enhancing the communication between professionals of plastic surgery and patients by providing a solution to the most common question during the surgery planning process: "How will I look after the surgery?". The solution proposed by Crisalix is based on 3D imaging technology. This technology generates a 3D reconstruction that accurately represents the area of the patient that is going to be operated on. This is followed by the possibility of creating multiple simulations of the plastic procedure, which results in the representation of the possible outcomes of the surgery. This thesis presents a framework capable of reconstructing 3D shapes of faces and breasts of plastic surgery patients from 2D images and 3D scans. The 3D reconstruction of an object is a challenging problem with many inherent ambiguities. Statistical model based methods are a powerful approach to overcome some of these ambiguities. We follow the intuition of maximizing the use of available prior information by introducing it into statistical model based methods to enhance their properties. First, we explore Active Shape Models (ASM), which are a well known method to perform 2D shape alignment. However, it is challenging to maintain prior information (e.g. a small set of given landmarks) unchanged once the statistical model constraints are applied. We propose a new weighted regularized projection into the parameter space which allows us to obtain shapes that at the same time fulfill the imposed shape constraints and are plausible according to the statistical model. Second, we extend this methodology to 3D Morphable Models (3DMM), which are a widespread method to perform 3D reconstruction. However, existing methods present some limitations. Some of them are based on computationally expensive non-linear optimizations that can get stuck in local minima. Another limitation is that not all the methods provide enough resolution to accurately represent the anatomical details needed for this application. Given the medical use of the application, the accuracy and robustness of the method are important factors to take into consideration. We show how 3DMM initialization and 3DMM fitting can be improved using our weighted regularized projection. Finally, we present a framework capable of reconstructing 3D shapes of plastic surgery patients from two possible inputs: 2D images and 3D scans. Our method is used in different stages of the 3D reconstruction pipeline: shape alignment, 3DMM initialization and 3DMM fitting. The developed methods have been integrated into the production environment of Crisalix, proving their validity.
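One plausible reading of a "weighted regularized projection" into the statistical model's parameter space is a ridge-regularised, weighted least-squares projection onto a PCA shape basis; the formulation below is a hedged sketch with illustrative notation and may differ from the exact scheme in the thesis.

```latex
% x = observed shape, \bar{x} = mean shape, P = matrix of shape modes,
% W = diagonal per-landmark weight matrix (e.g. higher weight on given landmarks),
% \lambda = regularisation strength; all notation here is illustrative.
b^{*} \;=\; \arg\min_{b}\; \left\| W^{1/2}\left( x - \bar{x} - P b \right) \right\|^{2}
          + \lambda \left\| b \right\|^{2}
      \;=\; \left( P^{\top} W P + \lambda I \right)^{-1} P^{\top} W \left( x - \bar{x} \right)
```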
14

PACIFICO, CLAUDIA. "Comparison of propensity score based methods for estimating marginal hazard ratios with composite unweighted and weighted endpoints: simulation study and application to hepatocellular carcinoma". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2021. http://hdl.handle.net/10281/306601.

Full text
Abstract
Introduction: My research activity aims to use the data from the HERCOLES study, a retrospective study on hepatocarcinoma, as an application example for the comparison of statistical methods for estimating the marginal effect of a certain treatment on standard (unweighted) survival endpoints and weighted composite endpoints. This last approach, unexplored to date, is motivated by the need to take into account the different clinical relevance of cause-specific events. In particular, death is considered the worst event, but greater relevance is also given to local recurrence compared to non-local recurrence. To evaluate the statistical performance of these methods, two simulation protocols were developed. Methods: To remove or reduce the effect of confounders (characteristics of the subject and other baseline factors that determine systematic differences between treatment groups) in order to quantify a marginal effect, it is necessary to use appropriate statistical methods based on the propensity score (PS): the probability that a subject is assigned to a treatment conditional on the covariates measured at baseline. In my thesis I considered some of the PS-based methods available in the literature (Austin 2013): PS as a covariate with spline transformation; PS as a categorical covariate stratified with respect to quantiles; PS matching; and inverse probability weighting (IPW). The marginal effect on the unweighted composite endpoint is measured in terms of the marginal hazard ratio (HR) estimated using a Cox model. As regards the weighted composite endpoint, the estimator of the treatment effect is the non-parametric estimator of the ratio between cumulative hazards proposed by Ozga and Rauch (2019). Simulation protocol: The data generation mechanism is similar for both simulation studies and follows that used by Austin (2013). Specifically, for the unweighted endpoint (disease-free survival), I simulated three scenarios considering three values for the marginal HR: HR=1 (scenario a); HR=1.5 (scenario b); and HR=2 (scenario c). In each scenario I simulated 10,000 datasets of 1,000 subjects each, and for the estimation of the PS I generated 12 confounders. The simulation study for the weighted endpoint uses the same scenarios (a, b, c) combined with three sets of weights for the two single endpoints: (w1,w2)=(1,1); (w1,w2)=(1,0.5); (w1,w2)=(1,0.8). In each scenario I simulated 1,000 datasets of 1,000 subjects each, and for the estimation of the PS I generated 3 confounders. Furthermore, I considered only the two methods regarded in the literature as the most robust: IPW and PS matching (Austin 2016). Results: The results for the unweighted composite endpoint confirm what is already known in the literature: IPW is the most robust PS-based method, followed by PS matching. The innovative aspect of my thesis concerns the implementation of simulation studies for evaluating the performance of PS-based methods in estimating the marginal effect of a certain treatment with respect to a weighted composite survival endpoint: IPW is confirmed as the most accurate and precise method.
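As a generic sketch of the IPW step discussed above (a logistic propensity model followed by inverse-probability-of-treatment weights), the code below uses hypothetical column names; the weighted Cox / composite-endpoint estimators themselves are not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_weights(df, treatment, confounders):
    """Inverse-probability-of-treatment weights from a logistic propensity model.

    df          : pandas DataFrame with one row per subject
    treatment   : name of the binary treatment column (0/1)
    confounders : list of baseline covariate column names
    """
    ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df[treatment])
    ps = ps_model.predict_proba(df[confounders])[:, 1]      # estimated propensity score
    z = df[treatment].to_numpy()
    return np.where(z == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# df["ipw"] = ipw_weights(df, "treated", ["age", "sex", "stage"])   # illustrative names
# A marginal hazard ratio can then be estimated from a Cox model fitted with these
# weights and robust standard errors (e.g. lifelines' CoxPHFitter with weights_col="ipw").
```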
15

Fraga, Guilherme Crivelli. "Análise da influência das propriedades radiativas de um meio participante na interação turbulência-radiação em um escoamento interno não reativo". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/142495.

Full text
Abstract
Turbulence-radiation interaction (TRI) results from the highly non-linear coupling between fluctuations of radiation intensity and fluctuations of temperature and chemical composition of the medium, and its relevance in a number of high-temperature problems, especially when chemical reactions are involved, has been demonstrated experimentally, theoretically, and numerically. In the present study, TRI is analyzed in a channel flow of a non-reactive participating gas for different turbulence intensities of the flow at the inlet and considering two distinct species for the medium composition (carbon dioxide and water vapor). The central objective is to evaluate how the inclusion or not of the spectral variation of the radiative properties of a participating gas in the radiative transfer calculations affects the turbulence-radiation interaction. With this purpose, numerical simulations are performed using the Fortran-based computational fluid dynamics code Fire Dynamics Simulator, which employs the finite volume method to solve a form of the fundamental equations (i.e., the mass, momentum and energy balances and the equation of state) appropriate for low Mach number flows, through an explicit second-order (both in time and in space) core algorithm. Turbulence is modeled by the large eddy simulation (LES) approach, using the dynamic Smagorinsky model to close the subgrid-scale terms; for the thermal radiation part of the problem, the finite volume method is used for the discretization of the radiative transfer equation, and the gray gas and weighted-sum-of-gray-gases (WSGG) models are implemented as a way to omit and to consider the spectral dependence of the radiative properties, respectively. The TRI magnitude in the problem is evaluated by differences between the time-averaged heat fluxes at the wall (convective and radiative) and the time-averaged radiative heat source calculated accounting for and neglecting the turbulence-radiation interaction effects. In general, TRI had little importance in all the considered cases, a conclusion that agrees with results of previous studies. When using the WSGG model, the contributions of the phenomenon were greater than with the gray gas hypothesis, demonstrating that the inclusion of the spectral variation in the solution of the radiative problem has an impact on the TRI effects. Furthermore, this work presents a discussion, partly unprecedented in the context of the turbulence-radiation interaction, about the different methodologies that can be used for TRI analysis. Finally, a correction factor is proposed for the time-averaged radiative heat source in the WSGG model, which is then validated by its implementation in the simulated cases. In future studies, a sensitivity analysis on the terms that compose this factor can lead to a better understanding of how fluctuations of temperature correlate with the turbulence-radiation interaction phenomenon.
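For orientation, the weighted-sum-of-gray-gases (WSGG) model referred to above is usually written in the standard form below; the specific coefficients and number of gray gases used in the dissertation are not reproduced here.

```latex
% Total emissivity as a weighted sum of N gray gases with pressure-based
% absorption coefficients \kappa_i and temperature-dependent weights a_i(T);
% a_0 is the weight of the transparent window.
\varepsilon(T, pL) = \sum_{i=1}^{N} a_i(T)\,\bigl[1 - e^{-\kappa_i\, p L}\bigr],
\qquad \sum_{i=0}^{N} a_i(T) = 1
```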
16

Luo, Hao. "Some Aspects on Confirmatory Factor Analysis of Ordinal Variables and Generating Non-normal Data". Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-149423.

Full text
Abstract
This thesis, which consists of five papers, is concerned with various aspects of confirmatory factor analysis (CFA) of ordinal variables and the generation of non-normal data. The first paper studies the performance of different estimation methods used in CFA when ordinal data are encountered. To take ordinality into account, four estimation methods, i.e., maximum likelihood (ML), unweighted least squares, diagonally weighted least squares, and weighted least squares (WLS), are used in combination with polychoric correlations. The effects of model size and number of categories on the parameter estimates, their standard errors, and the common chi-square measure of fit are examined when the models are both correct and misspecified. The second paper focuses on the appropriate estimator of the polychoric correlation when fitting a CFA model. A non-parametric polychoric correlation coefficient based on the discrete version of Spearman's rank correlation is proposed to contend with the situation of non-normal underlying distributions. The simulation study shows the benefits of using the non-parametric polychoric correlation under conditions of non-normality. The third paper raises the issue of simultaneous factor analysis. We study the effect of pooling multi-group data on the estimation of factor loadings. Given the same factor loadings but different factor means and correlations, we investigate how much information is lost by pooling the groups together and estimating only the combined data set using the WLS method. The parameter estimates and their standard errors are compared with results obtained by multi-group analysis using ML. The fourth paper uses a Monte Carlo simulation to assess the reliability of Fleishman's power method under various conditions of skewness, kurtosis, and sample size. Based on the generated non-normal samples, the power of D'Agostino's (1986) normality test is studied. The fifth paper extends the evaluation of algorithms to the generation of multivariate non-normal data. Apart from the requirement of generating reliable skewness and kurtosis, the generated data also need to possess the desired correlation matrices. Four algorithms are investigated in terms of simplicity, generality, and reliability of the technique.
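For context, Fleishman's power method evaluated in the fourth paper transforms a standard normal variate through a cubic polynomial; the sketch below omits the step of solving the coefficients for a target skewness and kurtosis, as well as the multivariate extension studied in the thesis.

```python
import numpy as np

def fleishman_sample(n, coeffs, seed=0):
    """Draw n values with Fleishman's power method: Y = a + b*Z + c*Z**2 + d*Z**3.

    coeffs = (a, b, c, d) must already be solved for the desired skewness and
    kurtosis (that solving step is not shown here).
    """
    a, b, c, d = coeffs
    z = np.random.default_rng(seed).standard_normal(n)
    return a + b * z + c * z**2 + d * z**3
```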
17

Ofe, Hosea y Peter Okah. "Value at Risk: A Standard Tool in Measuring Risk : A Quantitative Study on Stock Portfolio". Thesis, Umeå universitet, Handelshögskolan vid Umeå universitet, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-45303.

Full text
Abstract
The role of risk management has gained momentum in recent years, most notably after the recent financial crisis. This thesis uses a quantitative approach to evaluate the theory of value at risk (VaR), which is considered a benchmark for measuring financial risk. The thesis makes use of both parametric and non-parametric approaches to evaluate the effectiveness of VaR as a standard tool for measuring the risk of a stock portfolio. This study uses the normal distribution, the Student t-distribution, historical simulation and the exponentially weighted moving average at the 95% and 99% confidence levels on the returns of Sony Ericsson, the three-month Swedish Treasury bill (STB3M) and Nordea Bank. The evaluations of the VaR models are based on the Kupiec (1995) test. From a general perspective, the results of the study indicate that VaR as a proxy of risk measurement has some imprecision in its estimates. However, this imprecision is not the same for all the approaches. The results indicate that models which assume normality of the return distribution perform worse at both confidence levels than models which assume fatter tails or have leptokurtic characteristics. Another interesting finding is that during periods of high volatility, such as the financial crisis of 2008, the imprecision of VaR estimates increases. For the parametric approaches, the t-distribution VaR estimates were accurate at the 95% confidence level, while the normal distribution approach produced inaccurate estimates at the 95% confidence level. However, both approaches were unable to provide accurate estimates at the 99% confidence level. For the non-parametric approaches, the exponentially weighted moving average outperformed the historical simulation approach at the 95% confidence level, while at the 99% confidence level both approaches tend to perform equally. The results of this study thus question the reliability of VaR as a standard tool for measuring the risk of a stock portfolio. They also suggest that more research should be done to improve the accuracy of VaR approaches, given that the role of risk management in today's business environment is greater than ever before. The study suggests that VaR should be complemented with other risk measures, such as extreme value theory and stress testing, and that more than one back-testing technique should be used to test the accuracy of VaR.
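As a generic sketch of the tools named above (historical-simulation VaR, an EWMA-based normal VaR, and the Kupiec proportion-of-failures back-test), the code below is illustrative only; the lambda = 0.94 decay factor is an assumption, not necessarily the value used by the authors.

```python
import numpy as np
from scipy import stats

def var_historical(returns, alpha=0.99):
    """Historical-simulation VaR: minus the empirical (1 - alpha) quantile of returns."""
    return -np.quantile(returns, 1.0 - alpha)

def var_ewma_normal(returns, alpha=0.99, lam=0.94):
    """Normal VaR with an exponentially weighted moving-average variance forecast."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return -stats.norm.ppf(1.0 - alpha) * np.sqrt(var)

def kupiec_pof(n_exceptions, n_obs, alpha=0.99):
    """Kupiec (1995) proportion-of-failures test (assumes 0 < n_exceptions < n_obs)."""
    p = 1.0 - alpha                      # expected exception rate
    x, t = n_exceptions, n_obs
    phat = x / t
    lr = -2.0 * ((t - x) * np.log(1.0 - p) + x * np.log(p)
                 - (t - x) * np.log(1.0 - phat) - x * np.log(phat))
    return lr, 1.0 - stats.chi2.cdf(lr, df=1)    # small p-value: reject correct coverage
```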
18

Clément, Jean-Baptiste. "Simulation numérique des écoulements en milieu poreux non-saturés par une méthode de Galerkine discontinue adaptative : application aux plages sableuses". Electronic Thesis or Diss., Toulon, 2021. http://www.theses.fr/2021TOUL0022.

Full text
Abstract
Flows in unsaturated porous media are modelled by Richards' equation, which is a degenerate parabolic nonlinear equation. Its limitations and the challenges raised by its numerical solution are laid out. Getting robust, accurate and cost-effective results is difficult, in particular because of moving sharp wetting fronts due to the nonlinear hydraulic properties. Richards' equation is discretized by a discontinuous Galerkin method in space and backward differentiation formulas in time. The resulting numerical scheme is conservative, high-order and very flexible. Thereby, complex boundary conditions such as seepage conditions or dynamic forcing are easily included. Moreover, an adaptive strategy is proposed. Adaptive time stepping makes nonlinear convergence robust, and a block-based adaptive mesh refinement is used to reach the required accuracy cost-effectively. A suitable a posteriori error indicator helps the mesh capture sharp wetting fronts, which are also better approximated by a discontinuity introduced in the solution thanks to a weighted discontinuous Galerkin method. The approach is checked through various test cases and a 2D benchmark. Numerical simulations are compared with laboratory experiments of water table recharge/drainage and a large-scale wetting experiment following reservoir impoundment of the multi-material La Verne dam. This demanding case shows the potential of the strategy developed in this thesis. Finally, applications are carried out to simulate groundwater flows under the swash zone of sandy beaches in comparison with experimental observations.
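For reference, the Richards' equation referred to above is commonly stated in the following mixed form (a standard statement, not quoted from the thesis):

```latex
% theta = volumetric water content, h = pressure head, K = hydraulic conductivity,
% z = vertical coordinate (gravity term).
\frac{\partial \theta(h)}{\partial t} \;-\; \nabla \cdot \bigl( K(h)\, \nabla (h + z) \bigr) \;=\; 0
```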
19

Daviaud, Bérangère. "Méthodes formelles pour les systèmes réactifs, applications au live coding". Electronic Thesis or Diss., Angers, 2024. http://www.theses.fr/2024ANGE0032.

Full text
Abstract
The formalism of discrete event systems and reactive systems provides an effective abstract framework for representing and studying a wide range of systems. In this thesis, we leverage this formalism to model a live coding score whose interpretation is conditioned by the occurrence of specific events. This approach led us to investigate formal methods for discrete event systems that enable their modeling, analysis, and the design of appropriate control strategies. This study resulted in several contributions, particularly regarding the expressiveness of weighted automata, the formal verification of temporal properties, and the existence of weighted simulation. The final part of this dissertation introduces the formalism of the interactive score, as well as the Troop Interactive library, developed to make interactive score writing and the realization of interactive sound performances based on live coding practices more accessible.
20

Liu, Chunde. "Creation of hot summer years and evaluation of overheating risk at a high spatial resolution under a changing climate". Thesis, University of Bath, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.725405.

Full text
Abstract
It is believed that the extremely hot European summer of 2003, in which tens of thousands died in buildings, will become the norm by the 2040s, and hence there is an urgent need to accurately assess the risk that buildings pose. Thermal simulations based on warmer-than-typical years will be key to this. Unfortunately, the existing warmer-than-typical years, such as probabilistic Design Summer Years (pDSYs), are not robust measures due to their simple selection method, and can even be cooler than typical years. This study developed two new summer reference years: one (pHSY-1) is suitable for assessing the occurrence and severity of overheating, while the other (pHSY-2) is appropriate for evaluating thermal stress. Both have been proven to be more robust than the pDSYs. In addition, this study investigated the spatial variation in overheating driven by variability in building characteristics and the local environment. This variation had been ignored by previous studies, as most of them either created thermal models using building archetypes with little or no concern for the influence of local shading, or assumed little variation in climate across a landscape. For the first time, approximately a thousand more accurate thermal models were created for a UK city based on remote measurement, including building characteristics and their local shading. By producing overheating and mortality maps, this study found that spatial variation in the risk of overheating was considerably higher, due to the variability of vernacular forms, contexts and climates, than previously thought, and that heat-related mortality will triple by the 2050s if no building or human thermal adaptations are made. Such maps would be useful to governments when making cost-effective adaptation strategies against a warming climate.
21

Massire, Aurélien. "Non-selective Refocusing Pulse Design in Parallel Transmission for Magnetic Resonance Imaging of the Human Brain at Ultra High Field". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112180/document.

Full text
Abstract
In Magnetic Resonance Imaging (MRI), increasing the static magnetic field strength in theory provides a higher signal-to-noise ratio, thereby improving overall image quality. The purpose of ultra-high-field MRI is to achieve a spatial resolution high enough to distinguish structures so fine that they are currently impossible to view non-invasively. However, at such field strengths, the wavelength of the electromagnetic waves sent to flip the water proton spins is of the same order of magnitude as the scanned object. Interference phenomena are then observed, which result in inhomogeneity of the radiofrequency (RF) field within the object. These generate signal and/or contrast artifacts in MR images, making their exploitation difficult, if not impossible, in certain areas of the body. It is therefore crucial to provide solutions that mitigate the non-uniformity of the spin excitation; failing this, very-high-field imaging systems will not reach their full potential. For relevant high-field clinical diagnosis, it is therefore necessary to create RF pulses that homogenize the excitation of all spins (here, of the human brain), optimized for each individual to be imaged. For this, an 8-channel parallel transmission (pTX) system was installed on our 7 Tesla scanner. While most clinical MRI systems use only a single transmission channel, the pTX extension allows different RF pulse shapes to be played simultaneously on all channels. The resulting sum of the interference must then be optimized to reduce the non-uniformity typically seen. The objective of this thesis is to synthesize this type of tailored RF pulse using parallel transmission. These pulses have the additional constraint of complying with the international limits on radiofrequency exposure, which induces a temperature rise in the tissues. To that end, many electromagnetic and temperature simulations were carried out at the start of this thesis, in order to assess the relationship between the recommended RF exposure limits and the temperature rise actually predicted in tissues. This thesis focuses specifically on the design of the refocusing RF pulses used in non-selective, spin-echo-based MRI sequences. Initially, a single RF pulse was generated for a simple application: the reversal of spin dephasing in the transverse plane, as part of a classic spin-echo sequence. Next, sequences with very long refocusing echo trains applied to in vivo imaging are considered. In all cases, the mathematical operator acting on the magnetization, rather than its final state as is done conventionally, is optimized. The gain for high-field imaging is clearly visible, as the required mathematical operations (that is to say, the rotations of the spins) are performed with much greater fidelity than with state-of-the-art methods. For this, the pulse design combines a k-space-based spin excitation method, the kT-points, with an optimization algorithm based on optimal control, Gradient Ascent Pulse Engineering (GRAPE). The design is relatively fast thanks to analytical calculations rather than finite-difference methods. The inclusion of a large number of parameters requires the use of GPUs (Graphics Processing Units) to achieve computation times compatible with clinical examinations.
This RF pulse design method was successfully validated experimentally on the NeuroSpin 7 Tesla scanner, with a cohort of healthy volunteers. An imaging protocol was developed to assess the image quality improvement obtained with these RF pulses compared to the typically used non-optimized RF pulses. All methodological developments made during this thesis have contributed to improving the performance of ultra-high-field MRI at NeuroSpin, while increasing the number of MRI sequences compatible with parallel transmission.
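To make the optimal-control idea above concrete, the following is a deliberately simplified sketch, not the thesis's implementation: a single-channel pulse acting on a single spin is tuned by plain gradient ascent toward a 180-degree refocusing rotation, using a finite-difference gradient instead of the analytic GRAPE gradients, and ignoring parallel transmission, kT-points and SAR constraints. All numerical values (step count, dwell time, step size) are illustrative assumptions.

```python
import numpy as np

N_STEPS, DT = 50, 1e-5            # pulse samples and dwell time (s); illustrative values
GAMMA = 2 * np.pi * 42.58e6       # proton gyromagnetic ratio (rad/s/T)

def rotation_x(angle):
    """3x3 rotation matrix about the x axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def total_operator(b1x):
    """Net rotation produced by a piecewise-constant, x-channel-only pulse."""
    U = np.eye(3)
    for b1 in b1x:
        U = rotation_x(GAMMA * b1 * DT) @ U
    return U

def fidelity(b1x, target):
    """Normalized overlap between achieved and target rotation operators (1 = perfect)."""
    return np.trace(target.T @ total_operator(b1x)) / 3.0

target = rotation_x(np.pi)                 # ideal 180-degree refocusing rotation
b1 = np.full(N_STEPS, 1e-6)                # initial guess: constant 1 uT amplitude
step, eps = 2e-9, 1e-9
for _ in range(200):                       # crude gradient ascent with numerical gradients
    grad = np.array([(fidelity(b1 + eps * np.eye(N_STEPS)[k], target)
                      - fidelity(b1, target)) / eps
                     for k in range(N_STEPS)])
    b1 = b1 + step * grad
print("final fidelity:", round(float(fidelity(b1, target)), 4))
```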
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Parmar, Rajbir Singh. "Simulation of weight gain and feed consumption of turkeys". Diss., Virginia Polytechnic Institute and State University, 1989. http://hdl.handle.net/10919/54257.

Texto completo
Resumen
Like most agricultural production systems, effective decision making in turkey production systems requires the prediction of future status of the system and evaluation of alternative management policies. A simulation model of a turkey production system was developed to predict values of flock performance indicators of significant economic importance, namely body weight and feed consumption. Existing weather simulation models were combined and modified in order to develop a model that predicted daily dry-bulb temperature and humidity ratio outside the turkey house. The weather simulation model was validated using twenty years of daily observed weather data from Roanoke, Virginia. Thermal environment inside the turkey house was predicted from simulated outdoor weather using energy and mass balance equations. House environment prediction part of the model was validated using observed inside and outside temperature data collected at a turkey farm in Virginia. A discrete event simulation model was developed to simulate the effects of house thermal environment, feed energy, sex, and age on weight gain and feed consumption of growing turkeys. The model was validated using temperature, body weight, and feed consumption data collected at a turkey farm in Virginia. The observed average bird weights at marketing age were within 95% confidence intervals of the predicted values. However, the model underpredicted energy consumption values. The sensitivity of the model to variations in R-value, ventilation rate, and feed energy concentration was evaluated. The model was more sensitive to feed energy concentration.
Ph. D.
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Silva, Wesley Bertoli da. "Distribuição de Poisson bivariada aplicada à previsão de resultados esportivos". Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/4586.

Texto completo
Resumen
Financiadora de Estudos e Projetos
The modeling of paired count data is a topic that has been frequently discussed in several lines of research. In particular, we can cite bivariate counts, as in the analysis of sports scores. Accordingly, in this work we present the bivariate Poisson distribution for modeling positively correlated scores. The possible independence between the counts is also addressed through the double Poisson model, which arises as a special case of the bivariate Poisson model. The main characteristics and properties of these models are presented, and a simulation study is conducted to evaluate the behavior of the estimates for different sample sizes. Considering the possibility of modeling the parameters through the insertion of predictor variables, we present the structure of the bivariate Poisson regression model as the general case, as well as the structure of an effects model for application to sports data. In particular, in this work we consider applications to data from the 2012 Brazilian Championship Serie A, in which the effects are estimated with the double Poisson and bivariate Poisson models. Once the fits are obtained, the probabilities of score occurrences are estimated, and from these we obtain forecasts for the matches of interest. In order to obtain more accurate forecasts, we present the weighted likelihood method, which makes it possible to quantify the relevance of the data according to the time at which they were observed.
A modelagem de dados provenientes de contagens pareadas é um tópico que vem sendo frequentemente abordado em diversos segmentos de pesquisa. Em particular, podemos citar os casos em que as contagens de interesse são bivariadas, como por exemplo na análise de placares esportivos. Em virtude disso, neste trabalho apresentamos a distribuição Poisson bivariada para os casos em que as contagens de interesse são positivamente correlacionadas. A possível independência entre as contagens também é abordada por meio do modelo Poisson duplo, que surge como caso particular do modelo Poisson bivariado. As principais características e propriedades desses modelos são apresentadas e um estudo de simulação é realizado, visando avaliar o comportamento das estimativas para diferentes tamanhos amostrais. Considerando a possibilidade de se modelar os parâmetros por meio da inserção de variáveis preditoras, apresentamos a estrutura do modelo de regressão Poisson bivariado como caso geral, bem como a estrutura de um modelo de efeitos para aplicação a dados esportivos. Particularmente, neste trabalho vamos considerar aplicações aos dados da Série A do Campeonato Brasileiro de 2012, na qual os efeitos serão estimados por meio dos modelos Poisson duplo e Poisson bivariado. Uma vez obtidos os ajustes, estimam-se as probabilidades de ocorrência dos placares e, a partir destas, obtemos previsões para as partidas de interesse. Com o intuito de se obter previsões mais acuradas para as partidas, apresentamos o método da verossimilhança ponderada, a partir do qual seria possível quantificar a relevância dos dados em função do tempo em que estes foram observados.
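As a concrete illustration of the model described above, the following is a minimal sketch (not the author's code) of the bivariate Poisson probability mass function, of match-outcome forecasts derived from it, and of a time-weighted log-likelihood in the spirit of the weighted likelihood method; the parameter values and the exponential decay rate xi are illustrative assumptions.

```python
from math import comb, exp, factorial, log

def bivariate_poisson_pmf(x, y, lam1, lam2, lam3):
    """P(X = x, Y = y) with X = X1 + X3, Y = X2 + X3, Xi ~ Poisson(lam_i) independent."""
    base = exp(-(lam1 + lam2 + lam3)) * lam1**x / factorial(x) * lam2**y / factorial(y)
    s = sum(comb(x, k) * comb(y, k) * factorial(k) * (lam3 / (lam1 * lam2))**k
            for k in range(min(x, y) + 1))
    return base * s

def outcome_probabilities(lam1, lam2, lam3, max_goals=10):
    """Aggregate score probabilities into home win / draw / away win forecasts."""
    home = draw = away = 0.0
    for x in range(max_goals + 1):
        for y in range(max_goals + 1):
            p = bivariate_poisson_pmf(x, y, lam1, lam2, lam3)
            home, draw, away = home + p * (x > y), draw + p * (x == y), away + p * (x < y)
    return home, draw, away

def weighted_loglik(matches, lam1, lam2, lam3, xi=0.01):
    """Time-weighted log-likelihood; matches = [(home_goals, away_goals, age_in_days), ...]."""
    return sum(exp(-xi * age) * log(bivariate_poisson_pmf(hg, ag, lam1, lam2, lam3))
               for hg, ag, age in matches)

print(outcome_probabilities(1.4, 1.1, 0.2))          # (home win, draw, away win)
print(weighted_loglik([(2, 1, 10), (0, 0, 40), (1, 3, 90)], 1.4, 1.1, 0.2))
```

Setting lam3 = 0 recovers the double Poisson (independence) case mentioned in the abstract.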
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Watanabe, Alexandre Hiroshi. "Comparações de populações discretas". Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-11062013-095657/.

Texto completo
Resumen
Um dos principais problemas em testes de hipóteses para a homogeneidade de curvas de sobrevivência ocorre quando as taxas de falha (ou funções de intensidade) não são proporcionais. Apesar de o teste de Log-rank ser o teste não paramétrico mais utilizado para se comparar duas ou mais populações sujeitas a dados censurados, este teste apresenta duas restrições. Primeiro, toda a teoria assintótica envolvida com o teste de Log-rank tem como hipótese o fato de as populações envolvidas terem distribuições contínuas ou no máximo mistas. Segundo, o teste de Log-rank não apresenta bom comportamento quando as funções intensidade cruzam. O ponto inicial para a análise consiste em assumir que os dados são contínuos e, neste caso, processos Gaussianos apropriados podem ser utilizados para testar a hipótese de homogeneidade. Aqui, citamos o teste de Renyi e o de Cramér-von Mises para dados contínuos (CCVM), ver Klein e Moeschberger (1997) [15]. Apesar de estes testes não paramétricos apresentarem bons resultados para dados contínuos, eles podem ter problemas para dados discretos ou arredondados. Neste trabalho, fazemos um estudo de simulação da estatística de Cramér-von Mises (CVM) proposta por Leão e Ohashi [16], que nos permite detectar taxas de falha não proporcionais (cruzamento das taxas de falha) sujeitas a censuras arbitrárias para dados discretos ou arredondados. Propomos também uma modificação no teste de Log-rank clássico para dados dispostos em uma tabela de contingência. Ao aplicarmos as estatísticas propostas neste trabalho a dados discretos ou arredondados, o teste desenvolvido apresenta uma função poder melhor do que a dos testes usuais.
One of the main problems in hypothesis testing for the homogeneity of survival curves occurs when the failure rates (or intensity functions) are not proportional. Although the log-rank test is the nonparametric test most commonly used to compare two or more populations subject to censored data, it has two limitations. First, all the asymptotic theory behind the log-rank test assumes that the populations involved have continuous, or at most mixed, distributions. Second, the log-rank test does not perform well when the intensity functions cross. The starting point for the analysis is to assume that the data are continuous, in which case suitable Gaussian processes can be used to test the hypothesis of homogeneity; here we cite the Renyi and Cramér-von Mises tests for continuous data (CCVM), see Klein and Moeschberger (1997) [15]. Although these nonparametric tests give good results for continuous data, they can run into trouble with discrete or rounded data. In this work, we perform a simulation study of the Cramér-von Mises (CVM) statistic proposed by Leão and Ohashi [16], which allows us to detect nonproportional (crossing) failure rates under arbitrary censoring for discrete or rounded data. We also propose a modification of the classic log-rank test for data arranged in a contingency table. When the statistics proposed in this work are applied to discrete or rounded data, the developed test shows a better power function than the usual tests.
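For illustration, the following sketch computes the standard log-rank statistic directly from grouped (contingency-table style) survival data; it is offered as an assumption about the kind of discrete-data setting discussed above, not as the modified statistic proposed in the thesis.

```python
import numpy as np
from scipy.stats import chi2

def logrank_from_table(table):
    """table: iterable of (n1, d1, n2, d2) per time point, where n = at risk, d = events."""
    obs_minus_exp, var = 0.0, 0.0
    for n1, d1, n2, d2 in table:
        n, d = n1 + n2, d1 + d2
        if n < 2 or d == 0:
            continue
        e1 = n1 * d / n                                  # expected events in group 1
        v1 = n1 * n2 * d * (n - d) / (n**2 * (n - 1))    # hypergeometric variance
        obs_minus_exp += d1 - e1
        var += v1
    stat = obs_minus_exp**2 / var
    return stat, chi2.sf(stat, df=1)                     # chi-square test with 1 df

# Toy grouped data: (at risk group 1, events group 1, at risk group 2, events group 2).
table = [(50, 3, 50, 1), (45, 2, 48, 4), (40, 5, 42, 2), (33, 1, 38, 3)]
print(logrank_from_table(table))
```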
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Lindberg, Mattias y Peter Guban. "Auxiliary variables a weight against nonresponse bias : A simulation study". Thesis, Stockholms universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-142977.

Texto completo
Resumen
Today's surveys face a growing problem of increasing nonresponse. The rise in nonresponse rates creates a need for better and more effective ways to reduce nonresponse bias. There are three major orientations in today's research on nonresponse: one examines social factors, a second studies different data collection methods, and a third investigates the use of weights to adjust estimators for nonresponse. We would like to contribute to the third orientation by using simulations to evaluate estimators that use auxiliary variables to adjust the weights and thereby balance survey nonresponse. For the simulation we use an artificial population of 35,455 individuals from the Representativity Indicators for Survey Quality project. We model three nonresponse mechanisms (MCAR, MAR and MNAR), with three different coefficients of determination between our study variable and the auxiliary variables, under three response rates, resulting in 63 simulation scenarios. Each scenario is replicated 1,000 times. We outline our findings and results for each estimator in all scenarios with the help of bias measures.
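The following is a minimal, self-contained sketch of the weighting idea being evaluated: a response-propensity model built from an auxiliary variable is used to adjust the weights of respondents under a MAR mechanism. The population, model and variable names are invented for the example and do not reproduce the thesis's 63 scenarios.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N = 35455
x = rng.normal(size=N)                          # auxiliary variable, known for everyone
y = 2.0 + 1.5 * x + rng.normal(size=N)          # study variable, observed only for respondents
p_resp = 1 / (1 + np.exp(-(0.2 + 1.0 * x)))     # MAR response mechanism acting through x
responded = rng.random(N) < p_resp

# Estimate response propensities from the auxiliary variable and build adjusted weights.
model = LogisticRegression().fit(x.reshape(-1, 1), responded.astype(int))
phat = model.predict_proba(x.reshape(-1, 1))[:, 1]
w = 1.0 / phat[responded]                       # base design weight assumed equal to 1

naive = y[responded].mean()
adjusted = np.sum(w * y[responded]) / np.sum(w)
print(f"true mean {y.mean():.3f}  naive {naive:.3f}  propensity-adjusted {adjusted:.3f}")
```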
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Rui, Yikang. "Urban Growth Modeling Based on Land-use Changes and Road Network Expansion". Doctoral thesis, KTH, Geodesi och geoinformatik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122182.

Texto completo
Resumen
A city is considered a complex system. It consists of numerous interactive sub-systems and is affected by diverse factors including governmental land policies, population growth, transportation infrastructure, and market behavior. Land use and transportation systems are considered the two most important subsystems determining urban form and structure in the long term. Meanwhile, urban growth is one of the most important topics in urban studies, and its main driving forces are population growth and transportation development. Modeling and simulation are believed to be powerful tools to explore the mechanisms of urban evolution and provide planning support in growth management. The overall objective of the thesis is to analyze and model urban growth based on the simulation of land-use changes and the modeling of road network expansion. Since most previous urban growth models apply fixed transport networks, the evolution of road networks was modeled in particular. Besides, urban growth modeling is an interdisciplinary field, so this thesis made a substantial effort to integrate knowledge and methods from other scientific and technical areas to advance geographical information science, especially the aspects of network analysis and modeling. A multi-agent system was applied to model urban growth in Toronto, where population growth is considered the main driving factor of urban growth. Agents were adopted to simulate different types of interactive individuals who promote urban expansion. The multi-agent model with spatio-temporal allocation criteria was shown to be effective in simulation. Then, an urban growth model for long-term simulation was developed by integrating land-use development with procedural road network modeling. The dynamic idealized traffic flow estimated by the space syntax metric was used not only for selecting major roads, but also for calculating accessibility in the land-use simulation. The model was applied to the city centre of Stockholm and confirmed the reciprocal influence between land use and the street network during long-term growth. To further study network growth modeling, a novel weighted network model, involving nonlinear growth and neighboring connections, was built from the perspective of complex networks. Both mathematical analysis and numerical simulation were examined in the evolution process, and the effects of neighboring connections were particularly investigated to study the preferential attachment mechanisms in the evolution. Since a road network is a weighted planar graph, a growth model for urban street networks was subsequently developed. It succeeded in reproducing diverse patterns, and each pattern was examined by a series of measures. The similarity between the properties of the derived patterns and empirical studies implies that there is a universal growth mechanism in the evolution of urban morphology. To better understand the complicated relationship between land use and road networks, centrality indices from different aspects were fully analyzed in a case study over Stockholm. The correlation coefficients between different land-use types and road network centralities suggest that various centrality indices, reflecting human activities in different ways, can capture land development and consequently influence urban structure. The strength of this thesis lies in its interdisciplinary approaches to analyzing and modeling urban growth.
The integration of 'bottom-up' land-use simulation and a road network growth model in urban growth simulation is the major contribution. The road network growth model in terms of complex network science is another contribution, advancing spatial network modeling within the field of GIScience. The works in this thesis range from a novel theoretical weighted network model to particular models of land use, urban street networks and hybrid urban growth, and to specific applications and statistical analysis in real cases. These models help to improve our understanding of urban growth phenomena and urban morphological evolution through long-term simulations. The simulation results can further support urban planning and growth management. The study of hybrid models integrating methods and techniques from multidisciplinary fields has attracted a lot of attention and still requires constant effort in the near future.
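As an illustration of the kind of weighted network growth mentioned above, the following sketch implements a generic strength-driven preferential attachment process with edge-weight reinforcement (in the spirit of Barrat-Barthelemy-Vespignani style models); it is not the thesis's model, and the nonlinear growth and neighboring-connection rules are only loosely mimicked. All parameters are illustrative.

```python
import random
from collections import defaultdict

def grow_weighted_network(n_nodes=500, m=2, w0=1.0, delta=0.5, seed=42):
    random.seed(seed)
    weights = defaultdict(float)      # undirected edge (i, j) with i < j -> weight
    strength = defaultdict(float)     # node -> sum of incident edge weights

    def add_weight(i, j, w):
        weights[(min(i, j), max(i, j))] += w
        strength[i] += w
        strength[j] += w

    add_weight(0, 1, w0)              # seed network: a single weighted edge
    for new in range(2, n_nodes):
        nodes = list(strength.keys())
        probs = [strength[v] for v in nodes]
        targets = set()
        while len(targets) < min(m, len(nodes)):      # strength-preferential choice
            targets.add(random.choices(nodes, weights=probs, k=1)[0])
        for t in targets:
            s_t = strength[t]
            # Redistribute an extra weight delta over the target's existing edges,
            # proportionally to their current weights, before attaching the new node.
            for (a, b) in [e for e in weights if t in e]:
                add_weight(a, b, delta * weights[(a, b)] / s_t)
            add_weight(new, t, w0)
    return weights, strength

weights, strength = grow_weighted_network()
print("nodes:", len(strength), "edges:", len(weights),
      "max strength:", round(max(strength.values()), 2))
```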

Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Medhekar, Vinay Shantaram. "Modeling and simulation of oxidative degradation of Ultra-High Molecular Weight Polyethylene (UHMWPE)". Link to electronic thesis, 2001. http://www.wpi.edu/Pubs/ETD/Available/etd-0828101-135959.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Gassama, Malamine. "Estimation du risque attribuable et de la fraction préventive dans les études de cohorte". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLV131/document.

Texto completo
Resumen
Le risque attribuable (RA) mesure la proportion de cas de maladie qui peuvent être attribués à une exposition au niveau de la population. Plusieurs définitions et méthodes d'estimation du RA ont été proposées pour des données de survie. En utilisant des simulations, nous comparons quatre méthodes d'estimation du RA dans le contexte de l'analyse de survie : deux méthodes non paramétriques basées sur l'estimateur de Kaplan-Meier, une méthode semi-paramétrique basée sur le modèle de Cox à risques proportionnels et une méthode paramétrique basée sur un modèle à risques proportionnels avec un risque de base constant par morceaux. Nos travaux suggèrent d'utiliser les approches semi-paramétrique et paramétrique pour l'estimation du RA lorsque l'hypothèse des risques proportionnels est vérifiée. Nous appliquons nos méthodes aux données de la cohorte E3N pour estimer la proportion de cas de cancer du sein invasif attribuables à l'utilisation de traitements hormonaux de la ménopause (THM). Nous estimons qu'environ 9 % des cas de cancer du sein sont attribuables à l'utilisation des THM à l'inclusion. Dans le cas d'une exposition protectrice, une alternative au RA est la fraction préventive (FP) qui mesure la proportion de cas de maladie évités. Cette mesure n'a pas été considérée dans le contexte de l'analyse de survie. Nous proposons une définition de la FP dans ce contexte et des méthodes d'estimation en utilisant des approches semi-paramétrique et paramétrique avec une extension permettant de prendre en compte les risques concurrents. L'application aux données de la cohorte des Trois Cités (3C) estime qu'environ 9 % de cas d'accident vasculaire cérébral peuvent être évités chez les personnes âgées par l'utilisation des hypolipémiants. Notre étude montre que la FP peut être utilisée pour évaluer l'impact des médicaments bénéfiques dans les études de cohorte tout en tenant compte des facteurs de confusion potentiels et des risques concurrents
The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating the AR defined in terms of survival functions: two nonparametric methods based on the Kaplan-Meier estimator, one semiparametric method based on Cox's model, and one parametric method based on the piecewise constant hazards model. Our results suggest using the semiparametric or parametric approaches to estimate the AR if the proportional hazards assumption appears appropriate. These methods were applied to data from the E3N women's cohort to estimate the AR of breast cancer due to menopausal hormone therapy (MHT). We showed that about 9% of breast cancer cases were attributable to MHT use at baseline. In the case of a protective exposure, an alternative to the AR is the prevented fraction (PF), which measures the proportion of disease cases that could be avoided in the presence of a protective exposure in the population. The definition and estimation of the PF had not previously been considered for cohort studies in the survival analysis context. We defined the PF in cohort studies with survival data and proposed two estimation methods: a semiparametric method based on Cox's proportional hazards model and a parametric method based on a piecewise constant hazards model, with an extension to competing risks. Using data from the Three-City (3C) cohort study, we found that approximately 9% of stroke cases could be avoided by the use of lipid-lowering drugs (statins or fibrates) in the elderly population. Our study shows that the PF can be estimated to evaluate the impact of beneficial drugs in observational cohort studies while taking potential confounding factors and competing risks into account.
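To fix ideas, the following sketch evaluates one common survival-based definition of the attributable risk, AR(t) = 1 - (1 - S0(t)) / (1 - S(t)), and notes the corresponding prevented fraction, using a toy piecewise constant hazards model; the thesis's semiparametric estimators, confounder adjustment and competing-risks extension are not reproduced, and the hazard values are assumptions.

```python
import numpy as np

def survival_piecewise(hazards, cuts, t):
    """S(t) for piecewise constant hazards; cuts are the interval boundaries."""
    edges = np.concatenate(([0.0], cuts, [np.inf]))
    cum = 0.0
    for h, lo, hi in zip(hazards, edges[:-1], edges[1:]):
        cum += h * max(0.0, min(t, hi) - lo)
    return np.exp(-cum)

# Toy piecewise hazards (per year): the exposed group has a higher hazard.
cuts = np.array([2.0, 5.0])
h_unexposed = np.array([0.010, 0.015, 0.020])
h_exposed = np.array([0.018, 0.025, 0.032])
p_exposed = 0.3                                   # prevalence of exposure in the population

t = 10.0
S_marginal = (p_exposed * survival_piecewise(h_exposed, cuts, t)
              + (1 - p_exposed) * survival_piecewise(h_unexposed, cuts, t))
S0 = survival_piecewise(h_unexposed, cuts, t)     # counterfactual: everyone unexposed

AR = 1 - (1 - S0) / (1 - S_marginal)
print(f"AR({t:.0f} y) = {AR:.3f}")

# For a protective exposure the prevented fraction compares the observed risk with the
# risk had no one been exposed: PF = 1 - (1 - S_marginal) / (1 - S0).
```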
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Karewar, Shivraj. "Atomistic Simulations of Deformation Mechanisms in Ultra-Light Weight Mg-Li Alloys". Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc801888/.

Texto completo
Resumen
Mg alloys have spurred renewed academic and industrial interest because of their ultra-light weight and high specific strength. Hexagonal close packed Mg has low deformability and a high plastic anisotropy between basal and non-basal slip systems at room temperature. Alloying with Li and other elements is believed to counter this deficiency by activating non-basal slip through a reduction of its nucleation stress. In this work I study how Li addition affects deformation mechanisms in Mg using atomistic simulations. In the first part, I create a reliable and transferable concentration-dependent embedded atom method (CD-EAM) potential for my molecular dynamics study of deformation. This potential describes the Mg-Li phase diagram, accurately capturing the phase stability as a function of Li concentration and temperature. It also reproduces the heat of mixing, lattice parameters, and bulk moduli of the alloy as a function of Li concentration. Most importantly, the CD-EAM potential reproduces the variation of the stacking fault energy for basal, prismatic, and pyramidal slip systems, which influences the deformation mechanisms as a function of Li concentration. The success of the CD-EAM Mg-Li potential in reproducing these different properties, as compared to literature data, shows its reliability and transferability. Next, I use this newly created potential to study the effect of Li addition on deformation mechanisms in Mg-Li nanocrystalline (NC) alloys. Mg-Li NC alloys show basal slip, pyramidal type-I slip, tension twinning, and two compression-twinning deformation modes. Li addition reduces the plastic anisotropy between basal and non-basal slip systems by modifying the energetics of Mg-Li alloys, which causes solid solution softening. The inverse relationship between strength and ductility therefore suggests a concomitant increase in alloy ductility. A comparison of the NC results with single crystal deformation results helps to understand the qualitative and quantitative effect of Li addition in Mg on the nucleation stress and fault energies of each deformation mode. The nucleation stress and fault energies of basal dislocations and compression twins in the single crystal Mg-Li alloy increase, while those for pyramidal dislocations and tension twinning decrease. This variation in the respective values explains the reduction in plastic anisotropy and the increase in ductility of Mg-Li alloys.
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

DeMarco, James P. Jr. "Mechanical characterization and numerical simulation of a light-weight aluminum A359 metal-matrix composite". Master's thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4933.

Texto completo
Resumen
Aluminum metal-matrix composites (MMCs) are well positioned to replace steel in numerous manufactured structural components, due to their high strength-to-weight and stiffness ratios. For example, research is currently being conducted into the use of such materials in the construction of tank entry doors, which are currently made of steel and are dangerously heavy for military personnel to lift and close. However, the manufacture of aluminum MMCs is inefficient in many cases due to the loss of material through edge cracking during the hot rolling process, which is applied to reduce thick billets of as-cast material to usable sheets. In the current work, mechanical characterization and numerical modeling of as-cast aluminum A359-SiCp-30% are employed to determine the properties of the composite and identify their dependence on strain rate and temperature. Tensile and torsion tests were performed at a variety of strain rates and temperatures. Data obtained from the tensile tests were used to calibrate the parameters of a material model for the composite. The material model was implemented in the ANSYS finite element software suite, and simulations were performed to test the ability of the model to capture the mechanical response of the composite under simulated tension and torsion tests. A temperature- and strain rate-dependent damage model extended the constitutive model to capture the dependence of material failure on testing or service conditions. Several trends in the mechanical response were identified through analysis of the dependence of the experimentally obtained material properties on temperature and strain rate. The numerical model was found to adequately capture the strain rate and temperature dependence of the stress-strain curves in most cases. Ductility modeling allowed prediction of the stress and strain conditions that would lead to rupture, as well as identification of the areas of a solid model that are most likely to fail under a given set of environmental and load conditions.
ID: 030423478; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (M.S.M.E.)--University of Central Florida, 2011.; Includes bibliographical references (p. 113-118).
M.S.
Masters
Mechanical, Materials, and Aerospace Engineering
Engineering and Computer Science
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Edlund, Per-Olov. "Preliminary estimation of transfer function weights : a two-step regression approach". Doctoral thesis, Stockholm : Economic Research Institute, Stockholm School of Economics [Ekonomiska forskningsinstitutet vid Handelshögsk.] (EFI), 1989. http://www.hhs.se/efi/summary/291.htm.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Al-Nsour, Rawan. "MOLECULAR DYNAMICS SIMULATIONS OF PURE POLYTETRAFLUOROETHYLENE NEAR GLASSY TRANSITION TEMPERATURE FOR DIFFERENT MOLECULAR WEIGHTS". VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3845.

Texto completo
Resumen
Fluoropolymers are employed in countless end-user applications across several industries. One such fluoropolymer is polytetrafluoroethylene. This research is concerned with studying and understanding the thermal behavior of polytetrafluoroethylene. Such understanding is critical for predicting its behavior in diverse service environments as the polymer ages, and for allowing the bottom-up design of improved polymers for specific applications. While a plethora of experiments have investigated the thermal properties of polytetrafluoroethylene, examining these properties using molecular dynamics simulations remains in its infancy. In particular, the current body of molecular dynamics research on polytetrafluoroethylene has primarily focused on studying its phases, its physical nature, and its helical conformational structure. The present study is the first molecular dynamics simulation study of polytetrafluoroethylene behavior near the glassy transition temperature. Specifically, the current research utilizes molecular dynamics simulations to achieve the following objectives: (a) model and predict the polytetrafluoroethylene glassy transition temperature at different molecular weights, (b) examine the impact of the glassy transition temperature on the volume-temperature and thermal properties, (c) study the influence of molecular weight on the polytetrafluoroethylene melt and glassy states, and (d) determine the governing forces at the molecular level that control the polytetrafluoroethylene glassy transition temperature. Achieving these objectives requires performing four major tasks. Motivated by the scarcity of polytetrafluoroethylene force field research, the first task aims to generate and test polytetrafluoroethylene force fields. The parameters were produced based on the Optimized Potentials for Liquid Simulations All Atom model. The intramolecular parameters were generated using the automated frequency matching method, while the torsional terms were fitted using a nonlinear least squares algorithm. The intermolecular partial atomic charges were obtained using the Northwest Computational Chemistry (NWChem) software and fitted using the restrained electrostatic potential at the MP2/6-31G* level of theory. The final set of parameters was tested by calculating the polytetrafluoroethylene density using molecular dynamics simulations. The second task involves building the polytetrafluoroethylene amorphous structure using molecular dynamics with periodic boundary conditions for cells at different molecular weights. We use the amorphous structure in the molecular dynamics simulations consistent with research evidence showing that polymer properties such as the specific volume change as the polymer passes through the glassy transition when it is in the amorphous phase, whereas no variation occurs when the polymer passes through the glassy transition in the crystalline structure. The third task includes testing the polytetrafluoroethylene melt phase properties: density, specific heat, boiling point, and enthalpy of vaporization. In the fourth and final task, we performed molecular dynamics simulations using the NAMD (NAnoscale Molecular Dynamics) program. This task involves the polymer relaxation process to predict the polytetrafluoroethylene mechanical behavior around the glassy transition temperature. Properties that are affected by this transition, such as density, heat capacity, volumetric thermal expansion, specific volume, and bulk modulus, were examined, and the simulated results were in good agreement with experimental findings.
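As a small illustration of how a glassy transition temperature can be read off simulation output, the following sketch fits straight lines to the low- and high-temperature branches of a specific-volume-versus-temperature curve and intersects them; the data are synthetic and the procedure is a generic assumption, not the thesis's exact workflow.

```python
import numpy as np

def tg_from_bilinear_fit(T, v, T_split):
    """Fit v(T) below and above T_split and return the intersection temperature."""
    lo, hi = T < T_split, T >= T_split
    a1, b1 = np.polyfit(T[lo], v[lo], 1)     # glassy branch: v = a1*T + b1
    a2, b2 = np.polyfit(T[hi], v[hi], 1)     # melt branch:   v = a2*T + b2
    return (b2 - b1) / (a1 - a2)

# Synthetic specific-volume data with a change of slope near 400 K, plus noise.
rng = np.random.default_rng(0)
T = np.arange(250.0, 551.0, 10.0)
true_tg = 400.0
v = np.where(T < true_tg,
             0.45 + 2.0e-4 * (T - true_tg),
             0.45 + 6.0e-4 * (T - true_tg)) + rng.normal(0, 2e-4, T.size)

print(f"estimated Tg = {tg_from_bilinear_fit(T, v, T_split=true_tg):.1f} K")
```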
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Laughlin, Trevor William. "A parametric and physics-based approach to structural weight estimation of the hybrid wing body aircraft". Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45829.

Texto completo
Resumen
Estimating the structural weight of a Hybrid Wing Body (HWB) aircraft during conceptual design has proven to be a significant challenge due to its unconventional configuration. Aircraft structural weight estimation is critical during the early phases of design because inaccurate estimations could result in costly design changes or jeopardize the mission requirements and thus degrade the concept's overall viability. The tools and methods typically employed for this task are inadequate since they are derived from historical data generated by decades of tube-and-wing style construction. In addition to the limited applicability of these empirical models, the conceptual design phase requires that any new tools and methods be flexible enough to enable design space exploration without consuming a significant amount of time and computational resources. This thesis addresses these challenges by developing a parametric and physics-based modeling and simulation (M&S) environment for the purpose of HWB structural weight estimation. The tools in the M&S environment are selected based on their ability to represent the unique HWB geometry and model the physical phenomena present in the centerbody section. The new M&S environment is used to identify key design parameters that significantly contribute to the variability of the HWB centerbody structural weight and also used to generate surrogate models. These surrogate models can augment traditional aircraft sizing routines and provide improved structural weight estimations.
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Ramakrishnan, Tyagi. "Asymmetric Unilateral Transfemoral Prosthetic Simulator". Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5111.

Texto completo
Resumen
Unilateral transfemoral amputees face physical changes resulting from amputation, which include reduced force generation at the knee and ankle, reduced control of the leg, and different mass properties relative to their intact leg. The physical change in the prosthetic leg leads to gait asymmetries that include spatial, temporal, or force differences. This altered gait can lead to an increase in energy consumption and pain due to compensating forces and torques. The asymmetric prosthesis demonstrated in this research aims to find a balance between the different types of asymmetries, to provide a gait that is more symmetric and to make walking easier overall for an amputee. Previous research has shown that a passive dynamic walker (PDW) with an altered knee location can exhibit a symmetric step length. An asymmetric prosthetic simulator was developed to emulate this PDW with an altered knee location. The prosthetic simulator designed for this research had adjustable knee settings simulating different knee locations, and it was tested on able-bodied participants with no gait impairments. The kinetic and kinematic data were obtained using a VICON motion capture system and force plates. This research analyzed the kinematic and kinetic data for different knee locations (high, medium, and low) and for normal walking. The data were analyzed to find the asymmetries in step length, step time, and ground reaction forces between the different knee settings and normal walking. The study showed that there is symmetry in step lengths for all the cases in overground walking. The knee at the lowest setting was the closest to emulating a normal symmetric step length. The swing times for overground walking showed that the healthy leg swings at almost the same rate in every trial, while the leg with the prosthetic simulator is either symmetric with the healthy leg or has a higher swing time. Step lengths on the treadmill showed a similar pattern, and the step length at the low knee setting was the closest to that of normal walking. The swing times on the treadmill did not show a significant trend. Kinetic data from the treadmill study showed force symmetry between the low setting and normal walking. In conclusion, these results show that a low knee setting in an asymmetric prosthesis may bring about spatial and temporal symmetry in amputee gait. This research is important in demonstrating that asymmetries in amputee gait can be mitigated using a prosthesis with a knee location dissimilar to that of the intact leg. Tradeoffs have to be made to achieve symmetric step lengths, swing times, or reaction forces. A comprehensive study with more subjects has to be conducted in order to have a larger sample size and obtain statistically significant data. There is also an opportunity to expand this research to observe a wider range of kinetic and kinematic data for the asymmetric prosthesis.
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Robinson, Marc J. "Simulation of the vacuum assisted resin transfer molding (VARTM) process and the development of light-weight composite bridging". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3336692.

Texto completo
Resumen
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed January 9, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 482-492).
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Auvray, Alexis. "Contributions à l'amélioration de la performance des conditions aux limites approchées pour des problèmes de couche mince en domaines non réguliers". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEC018/document.

Texto completo
Resumen
Les problèmes de transmission avec couche mince sont délicats à approcher numériquement, en raison de la nécessité de construire des maillages à l’échelle de la couche mince. Il est courant d’éviter ces difficultés en usant de problèmes avec conditions aux limites approchées — dites d’impédance. Si l’approximation des problèmes de transmission par des problèmes d’impédance s’avère performante dans le cas de domaines réguliers, elle l’est beaucoup moins lorsque ceux-ci comportent des coins ou arêtes. L’objet de cette thèse est de proposer de nouvelles conditions d’impédance, plus performantes, afin de corriger cette perte de performance. Pour cela, les développements asymptotiques des différents problèmes-modèles sont construits et étudiés afin de localiser avec précision l’origine de la perte, en lien avec les profils singuliers associés aux coins et arêtes. De nouvelles conditions d’impédance sont construites, de type Robin multi-échelle ou Venctel. D’abord étudiées en dimension 2, elles sont ensuite généralisées à certaines situations en dimension 3. Des simulations viennent confirmer l’efficience des méthodes théoriques
Transmission problems with a thin layer are delicate to approximate numerically because of the need to build meshes at the scale of the thin layer. It is common to avoid these difficulties by using problems with approximate boundary conditions, also called impedance conditions. Whereas the approximation of transmission problems by impedance problems is successful in the case of smooth domains, the situation is less satisfactory in the presence of corners and edges. The goal of this thesis is to propose new, more efficient impedance conditions in order to correct this loss of performance. For that purpose, the asymptotic expansions of the various model problems are built and studied to locate exactly the origin of the loss, in connection with the singular profiles associated with corners and edges. New impedance conditions are built, of multi-scale Robin or Venctel type. First studied in dimension 2, they are then generalized to certain situations in dimension 3. Simulations confirm the efficiency of the theoretical methods.
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Colliri, Tiago Santos. "Avaliação de preços de ações: proposta de um índice baseado nos preços históricos ponderados pelo volume, por meio do uso de modelagem computacional". Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/100/100132/tde-07072013-015903/.

Texto completo
Resumen
A importância de se considerar os volumes na análise dos movimentos de preços de ações pode ser considerada uma prática bastante aceita na área financeira. No entanto, quando se olha para a produção científica realizada neste campo, ainda não é possível encontrar um modelo unificado que inclua os volumes e as variações de preços para fins de análise de preços de ações. Neste trabalho é apresentado um modelo computacional que pode preencher esta lacuna, propondo um novo índice para analisar o preço das ações com base em seus históricos de preços e volumes negociados. O objetivo do modelo é o de estimar as atuais proporções do volume total de papéis negociados no mercado de uma ação (free float) distribuídos de acordo com os seus respectivos preços passados de compra. Para atingir esse objetivo, foi feito uso da modelagem dinâmica financeira aplicada a dados reais da bolsa de valores de São Paulo (Bovespa) e também a dados simulados por meio de um modelo de livro de ordens (order book). O valor do índice varia de acordo com a diferença entre a atual porcentagem do total de papéis existentes no mercado que foram comprados no passado a um preço maior do que o preço atual da ação e a sua respectiva contrapartida, que seria a atual porcentagem de papéis existentes no mercado que foram comprados no passado a um preço menor do que o preço atual da ação. Apesar de o modelo poder ser considerado matematicamente bastante simples, o mesmo foi capaz de melhorar significativamente a performance financeira de agentes operando com dados do mercado real e com dados simulados, o que contribui para demonstrar a sua racionalidade e a sua aplicabilidade. Baseados nos resultados obtidos, e também na lógica bastante intuitiva que está por trás deste modelo, acredita-se que o índice aqui proposto pode ser bastante útil na tarefa de ajudar os investidores a definir intervalos ideais para compra e venda de ações no mercado financeiro.
The importance of considering volumes when analyzing stock price movements is a well-accepted practice in the financial area. However, when we look at the scientific production in this field, we still cannot find a unified model that includes volume and price variations for stock price assessment purposes. In this work we present a computational model that could fill this gap, proposing a new index to evaluate stock prices based on their historical prices and traded volumes. The aim of the model is to estimate the current proportions of the total volume of shares available in the market (the free float) of a stock, distributed according to their respective purchase prices in the past. To do so, we made use of dynamic financial modeling applied to real data from the Sao Paulo Stock Exchange (Bovespa) and to data simulated through an order book model. The value of the index varies with the difference between the current proportion of shares traded in the past at a price above the current price of the stock and its counterpart, the proportion of shares traded in the past at a price below the current price. Although the model is mathematically very simple, it significantly improved the financial performance of agents operating with real market data and with simulated data, which demonstrates its rationale and applicability. Based on the results obtained, and on the intuitive logic behind the model, we believe the index proposed here can be very useful in helping investors determine ideal price ranges for buying and selling stocks in the financial market.
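The following is a minimal sketch of one possible reading of the proposed index (an assumption, not the author's code): it estimates which fraction of the free float was last bought above versus below the current price and takes the difference. The trade data, the sign convention and the bookkeeping are illustrative.

```python
import numpy as np

def volume_weighted_position_index(prices, volumes, free_float, current_price):
    """Positive values: more of the float was bought below the current price (holders in profit)."""
    prices, volumes = np.asarray(prices, float), np.asarray(volumes, float)
    held = min(volumes.sum(), free_float)             # cap accumulated holdings at the free float
    scale = held / volumes.sum()
    bought_below = volumes[prices < current_price].sum() * scale / free_float
    bought_above = volumes[prices > current_price].sum() * scale / free_float
    return bought_below - bought_above

# Toy trade history: (price, volume) pairs, e.g. daily aggregates.
prices = [9.8, 10.1, 10.4, 10.0, 9.7, 10.6]
volumes = [1.2e6, 0.8e6, 1.5e6, 0.9e6, 1.1e6, 0.7e6]
print(volume_weighted_position_index(prices, volumes, free_float=8e6, current_price=10.2))
```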
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Heimbigner, Stephen Matthew. "Implications in Using Monte Carlo Simulation in Predicting Cardiovascular Risk Factors among Overweight Children and Adolescents". Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/iph_theses/11.

Texto completo
Resumen
The prevalence of overweight and obesity among children and adolescents has increased considerably over the last few decades. As a result, increasing numbers of American children are developing multiple risk factors for cardiovascular disease, type II diabetes, hyperinsulinemia, hypertension, dyslipidemia and hepatic steatosis. This thesis examines the use of Monte Carlo computer simulation for understanding risk factors associated with childhood overweight. A computer model is presented for predicting cardiovascular risk factors among overweight children and adolescents based on BMI levels. The computer model utilizes probabilities from the 1999 Bogalusa Heart Study authored by David S. Freedman, William H. Dietz, Sathanur R. Srinivasan and Gerald S. Berenson. The thesis examines strengths, weaknesses and opportunities associated with the developed model. Utilizing this approach, organizations can insert their own probabilities and customized algorithms for predicting future events.
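As an illustration of the simulation approach, the following sketch runs a simple Monte Carlo model that draws, for each child, the presence of a cardiovascular risk factor from a BMI-category-dependent probability; the probabilities are hypothetical placeholders, not values from the Bogalusa Heart Study.

```python
import numpy as np

# Assumed probabilities of an elevated risk factor by BMI category (illustrative only).
P_RISK = {"normal": 0.05, "overweight": 0.15, "obese": 0.35}

def simulate_cohort(bmi_categories, n_runs=10_000, seed=0):
    rng = np.random.default_rng(seed)
    probs = np.array([P_RISK[c] for c in bmi_categories])
    # Each run draws one Bernoulli outcome per child; the row sum is the number affected.
    counts = (rng.random((n_runs, probs.size)) < probs).sum(axis=1)
    return counts

cohort = ["normal"] * 60 + ["overweight"] * 25 + ["obese"] * 15
counts = simulate_cohort(cohort)
print(f"mean affected: {counts.mean():.1f}, 95% interval: "
      f"({np.percentile(counts, 2.5):.0f}, {np.percentile(counts, 97.5):.0f})")
```

Organizations could replace P_RISK and the cohort composition with their own probabilities and population, as suggested in the abstract.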
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Huang, Bing. "Understanding Operating Speed Variation of Multilane Highways with New Access Density Definition and Simulation Outputs". Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4079.

Texto completo
Resumen
Traffic speed is generally considered a core issue in roadway safety. Previous studies show that faster travel is not necessarily associated with an increased risk of being involved in a crash. When vehicles travel at the same speed in the same direction (even at high speeds, as on interstates), they do not pass one another and cannot collide as long as they maintain the same speed. Conversely, the frequency of crashes increases when vehicles travel at different speeds. There is no doubt that the greater the speed variation, the greater the number of interactions among vehicles, resulting in higher crash potential. This research tries to identify all major factors associated with speed variation on multilane highways, including roadway access density, which is considered the most obvious contributing factor. In addition, other factors are considered, such as the configuration of speed limits, characteristics of traffic volume, roadway geometrics, driver behavior, and environmental factors. A microscopic traffic simulation method based on TSIS (Traffic Software Integrated System) is used to develop mathematical models that quantify the impacts of all possible factors on speed variation.
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Anderson, Abby Hodel A. Scottedward. "Design, testing, and simulation of a low-cost, light-weight, low-g IMU for the navigation of an indoor blimp". Auburn, Ala., 2006. http://repo.lib.auburn.edu/2006%20Spring/master's/ANDERSON_ABBY_43.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Shah, Manan Kanti. "Material Characterization and Forming of Light Weight Alloys at Elevated Temperature". The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1306939665.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

CHAKKALAKKAL, JOSEPH JUNIOR. "Design of a weight optimized casted ADI component using topology and shape optimization". Thesis, KTH, Maskin- och processteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-236518.

Texto completo
Resumen
Structural optimization techniques are widely used in the product development process in modern industry to generate optimal designs with only as much material as is needed for the component to serve its purpose. Conventional design processes usually generate overdesigned components with excess material and weight, which in turn increases the lifetime cost of machines, both in terms of material wastage and cost of use. The thesis "Design of a weight optimized casted ADI component using topology and shape optimization" deals with redesigning a component from a welded steel plate structure into a castable design for reduced manufacturing cost and weight. The component, a "Drill Steel Support" mounted in front of the drilling boom of a face drilling machine, is redesigned during this work. The main objective of the thesis is to provide an alternative, lower-weight design that can be mounted on the existing machine layout without any changes to the mounting interfaces. This report covers in detail the procedure followed to attain the weight reduction of the drill steel support and presents the results and the methodology, which is based on both topology and shape optimization.
Strukturoptimering används ofta i produktutvecklingsprocessen i modern industri för att ta fram optimala konstruktioner med minsta möjliga materialåtgång för komponenten. Konventionella konstruktionsmetoder genererar vanligtvis överdimensionerade komponenter med överflödigt material och vikt. Detta ökar i sin tur livstidskostnaderna för maskiner både i termer av materialavfall och användning. Avhandlingen "Konstruktion av viktoptimerad gjuten ADI-komponent" behandlar omkonstruktionen av en komponent från en svetsad stålplåtstruktur till en gjutbar konstruktion med minskad tillverkningskostnad och vikt. Komponenten “Borrstöd” monterad i framkant av bommen på en ortdrivningsmaskin är omkonstruerad under detta arbete. Huvudsyftet med avhandlingen är ta fram en alternativ konstruktion med lägre vikt och som kan monteras på befintlig maskinlayout utan någon ändring i monteringsgränssnittet. Denna avhandling innehåller en detaljerad beskrivning av förfarandet för att uppnå viktminskningen av "borrstödet" och presenterar resultaten samt metodiken som baseras på både topologi- och parameter- optimering.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Jakobi, Christoph. "Entwicklung und Evaluation eines Gewichtsfenstergenerators für das Strahlungstransportprogramm AMOS". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234133.

Texto completo
Resumen
Effizienzsteigernde Methoden haben die Aufgabe, die Rechenzeit von Monte Carlo Simulationen zur Lösung von Strahlungstransportproblemen zu verringern. Dazu gehören weitergehende Quell- oder Geometrievereinfachungen und die Gewichtsfenstertechnik als wichtigstes varianzreduzierendes Verfahren, entwickelt in den 1950er Jahren. Die Schwierigkeit besteht bis heute in der Berechnung geeigneter Gewichtsfenster. In dieser Arbeit wird ein orts- und energieabhängiger Gewichtsfenstergenerator basierend auf dem vorwärts-adjungierten Generator von T.E. BOOTH und J.S. HENDRICKS für das Strahlungstransportprogramm AMOS entwickelt und implementiert. Dieser ist in der Lage, die Gewichtsfenster sowohl iterativ zu berechnen und automatisch zu setzen als auch, deren Energieeinteilung selbstständig anzupassen. Die Arbeitsweise wird anhand des Problems der tiefen Durchdringung von Photonenstrahlung demonstriert, wobei die Effizienz um mehrere Größenordnungen gesteigert werden kann. Energieabhängige Gewichtsfenster sorgen günstigstenfalls für eine weitere Verringerung der Rechenzeit um etwa eine Größenordnung. Für eine praxisbezogene Problemstellung, die Bestrahlung eines Personendosimeters, kann die Effizienz hingegen bestenfalls vervierfacht werden. Quell- und Geometrieveränderungen sind gleichwertig. Energieabhängige Fenster zeigen keine praxisrelevante Effizienzsteigerung
The purpose of efficiency-increasing methods is to reduce the computing time required to solve radiation transport problems using Monte Carlo techniques. Besides additional geometry manipulation and source biasing, this includes in particular the weight window technique, the most important variance reduction method, developed in the 1950s. To date, the difficulty of this technique lies in the calculation of appropriate weight windows. In this work a generator for spatially and energy dependent weight windows, based on the forward-adjoint generator by T.E. BOOTH and J.S. HENDRICKS, is developed and implemented in the radiation transport program AMOS. With this generator the weight windows are calculated iteratively and set automatically. Furthermore, the generator is able to autonomously adapt the energy segmentation. Its functioning is demonstrated by means of the deep penetration problem for photon radiation, where the efficiency can be increased by several orders of magnitude. With energy dependent weight windows the computing time is decreased by approximately one further order of magnitude. For a practice-oriented problem, the irradiation of a dosimeter for individual monitoring, the efficiency is only improved by a factor of four at best; source biasing and geometry manipulation result in an equivalent improvement, and the use of energy dependent weight windows proved to be of no practical relevance in this case.
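For context, the following sketch shows the generic weight-window logic that such a generator feeds: particles below the window undergo Russian roulette, particles above it are split. It is a textbook illustration, not the AMOS generator, and the window bounds and survival weight are assumptions; in practice the bounds depend on the particle's position and energy.

```python
import random
from dataclasses import dataclass, replace

@dataclass
class Particle:
    weight: float
    # position, energy, direction, ... omitted in this sketch

def apply_weight_window(p, w_low, w_high, survival_weight=None):
    """Return the list of particles that continue after the weight-window check."""
    if survival_weight is None:
        survival_weight = 0.5 * (w_low + w_high)
    if p.weight < w_low:                               # Russian roulette (preserves expected weight)
        if random.random() < p.weight / survival_weight:
            return [replace(p, weight=survival_weight)]
        return []                                      # particle is killed
    if p.weight > w_high:                              # splitting (preserves total weight)
        n = min(int(p.weight / w_high) + 1, 10)        # cap the number of fragments
        return [replace(p, weight=p.weight / n) for _ in range(n)]
    return [p]                                         # inside the window: unchanged

random.seed(3)
for w in (0.02, 0.5, 7.0):
    out = apply_weight_window(Particle(weight=w), w_low=0.25, w_high=1.0)
    print(w, "->", [round(q.weight, 3) for q in out])
```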
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Pires, dos Santos Rebecca. "The Application of Artificial Neural Networks for Prioritization of Independent Variables of a Discrete Event Simulation Model in a Manufacturing Environment". BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6431.

Texto completo
Resumen
The high complexity of modern businesses requires managers to rely on accurate and up-to-date information. Over the years, many tools have been created to support decision makers, such as discrete event simulation and artificial neural networks. Both tools have been applied to improve business performance; however, most of the time they are used separately. This research aims to interpret artificial neural network models applied to the data generated by a simulation model and to determine which inputs have the most impact on the output of a business, which allows prioritization of the variables for maximized system performance. A connection weight approach is used to interpret the artificial neural network models. The research methodology consisted of three main steps: 1) creation of an accurate simulation model, 2) application of artificial neural network models to the output data of the simulation model, and 3) interpretation of the artificial neural network models using the connection weight approach. In order to test this methodology, a study was performed in the raw material receiving process of a manufacturing facility, aiming to determine which variables most impact the total time a truck stays in the system waiting to unload its materials. Through the research it was possible to observe that artificial neural network models can be useful in making good predictions about the systems they model. Moreover, through the connection weight approach, the artificial neural network models were interpreted and helped determine the variables that have the greatest impact on the modeled system. As future research, it would be interesting to use this methodology with other data mining algorithms and understand which techniques have the greatest capability of determining the most meaningful variables of a model. It would also be relevant to use this methodology as a resource not only to prioritize, but also to optimize, a simulation model.
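The following is a minimal sketch of the connection weight idea for a single-hidden-layer network (in the spirit of Olden and Jackson): the importance of an input is the sum over hidden units of the product of its input-to-hidden weight and the hidden-to-output weight. The weights and input names below are hypothetical, not those of the trained model in the study.

```python
import numpy as np

def connection_weight_importance(W_ih, w_ho):
    """W_ih: (n_inputs, n_hidden) weights; w_ho: (n_hidden,) hidden-to-output weights."""
    return W_ih @ w_ho                       # signed importance per input variable

# Hypothetical trained weights for 4 inputs (e.g. arrival rate, dock count, crew size,
# truck mix) and 3 hidden units.
W_ih = np.array([[ 0.8, -0.4,  0.6],
                 [-0.2,  0.9, -0.1],
                 [ 0.3, -0.7,  0.5],
                 [ 0.1,  0.2, -0.3]])
w_ho = np.array([0.9, -0.6, 0.4])

importance = connection_weight_importance(W_ih, w_ho)
ranking = np.argsort(-np.abs(importance))    # prioritize by absolute importance
for idx in ranking:
    print(f"input {idx}: importance {importance[idx]:+.2f}")
```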
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Esquincalha, Agnaldo da Conceição. "Estimação de parâmetros de sinais gerados por sistemas lineares invariantes no tempo". Universidade do Estado do Rio de Janeiro, 2009. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=1238.

Texto completo
Resumen
Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro
This dissertation presents a study on the recovery of signals modeled by weighted sums of complex exponentials. To this end, elementary concepts of signal and system theory are introduced, in particular linear time-invariant (LTI) systems, which can be represented mathematically by differential equations or difference equations for analog or digital signals, respectively. The solutions of such equations are weighted sums of complex exponentials, which establishes the relationship between LTI systems and the model under study. Two combinations of methods are then used to recover the signal parameters: Prony's method with least squares, and Kung's method with least squares, where the Prony and Kung methods recover the exponents of the exponentials and the least-squares step recovers the linear coefficients of the model. Finally, five signal-recovery simulations are performed, the last being an application to water quality models.
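As a rough sketch of the first method combination described above (Prony's method to estimate the exponents, followed by linear least squares for the coefficients), assuming a noise-free, uniformly sampled signal and a known model order p:

```python
import numpy as np

def prony(x, p):
    """Recover p complex exponentials from a uniformly sampled signal x.

    Classic Prony step plus linear least squares (sketch only; no noise
    handling or model-order selection). Returns (poles z, coefficients c)
    such that x[n] ~ sum_k c[k] * z[k]**n.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # 1) Linear prediction: x[n] = -(a_1 x[n-1] + ... + a_p x[n-p]) for n >= p.
    A = np.column_stack([x[p - m:N - m] for m in range(1, p + 1)])
    a = np.linalg.lstsq(A, -x[p:N], rcond=None)[0]
    # 2) The poles are the roots of the prediction-error polynomial.
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) Linear coefficients by least squares on the Vandermonde matrix V[n, k] = z_k**n.
    V = np.vander(z, N, increasing=True).T
    c = np.linalg.lstsq(V, x, rcond=None)[0]
    return z, c
```

In the Kung (subspace) variant mentioned in the abstract, step 1 and 2 are replaced by an SVD of a Hankel matrix of the samples, while the final least-squares step for the linear coefficients is the same.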
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Olsson, Jörgen. "Low Frequency Impact Sound in Timber Buildings : Simulations and Measurements". Licentiate thesis, Linnaeus University, Sweden; SP Technical Research Institute of Sweden, Sweden, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-58068.

Texto completo
Resumen
An increased share of construction with timber is one possible way of achieving more sustainable and energy-efficient building life cycles, mainly because wood is a renewable material and buildings require a large amount of resources. Timber buildings taller than two storeys were prohibited in Europe until the 1990s due to fire regulations; in 1994 this prohibition was removed in Sweden. Some of the early multi-storey timber buildings attracted more complaints about impact sound than concrete buildings with the same measured impact sound class rating. Later research has shown that the frequency range used for rating has not extended low enough to include all the sound characteristics that matter for the subjective perception of impact sound in lightweight timber buildings; the AkuLite project showed that the range has to be extended down to 20 Hz to give a good quality of rating. This low frequency range of interest creates a need for knowledge of the sound field distribution, of how best to measure the sound, of how to predict sound transmission levels and of how to correlate numerical predictions with measurements. Here, the goal is to improve the knowledge and methodology concerning measurements and predictions of low frequency impact sound in lightweight timber buildings. Impact sound fields are determined by grid measurements in rooms within timber buildings with different joist floor designs. The measurements are used to increase the understanding of impact sound and to benchmark different field measurement methods. By estimating transfer functions, from impact forces to vibrations and then to sound pressures in receiving rooms, from vibrational test data, the experimental results can be better correlated with numerical simulations. A number of excitation devices are compared experimentally to evaluate the characteristics of the resulting test data. Further, comparisons between a timber-based hybrid joist floor and a modern concrete floor are made using FE models to evaluate how stiffness and surface mass parameters affect impact sound transfer and radiation. The measurements of sound fields show that lightweight timber floors in small rooms tend to have their highest sound levels in the low frequency region, where the modes are well separated, and that the highest levels can even occur below the frequency of the first room mode of the air. In rooms excited from the floor above, the highest levels tend to occur at floor level and in the floor corners when the excitation is made in the middle of the room above. Due to nonlinearities, the excitation level may affect the transfer function at low frequencies, as shown in an experimental study. Simulations show that the surface mass and bending stiffness of floor systems are important for the amount of sound radiated. By applying a transfer function methodology, measuring the excitation forces as well as the responses, the correlation analyses between measurements and simulations can be improved.
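One standard way to estimate such force-to-response transfer functions from measured data is the H1 estimator, i.e. the cross-spectrum between the impact force and the response divided by the force auto-spectrum. A minimal sketch is given below, assuming synchronously sampled signals; this is a generic estimator, not the thesis's specific processing chain.

```python
import numpy as np
from scipy.signal import csd, welch

def h1_estimate(force, response, fs, nperseg=4096):
    """H1 frequency-response estimate from a measured force and response signal.

    force, response: 1-D arrays sampled at rate fs (Hz).
    Returns the frequency axis and the complex transfer function H1(f).
    """
    f, S_xy = csd(force, response, fs=fs, nperseg=nperseg)  # force-response cross-spectrum
    _, S_xx = welch(force, fs=fs, nperseg=nperseg)          # force auto-spectrum
    return f, S_xy / S_xx
```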
ProWood
Silent Timber Build
Urban Tranquility
BioInnovation FBBB
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Staffan, Paul. "Design of an ultra-wideband microstrip antenna array with low size, weight and power". Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1578437280799995.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Davies, G. J. "Towards an agent-based model for risk-based regulation". Thesis, Cranfield University, 2010. http://dspace.lib.cranfield.ac.uk/handle/1826/5662.

Texto completo
Resumen
Risk-based regulation has grown rapidly as a component of government decision making, and the need for an established evidence-based framework for decisions about risk has become the new mantra. However, the process of brokering scientific evidence is poorly understood, and there is a need to improve the transparency of this brokering process and of the decisions made. This thesis attempts to achieve this by using agent-based simulation to model the influence that power structures and participating personalities have on the brokering of evidence, and thereby on the confidence-building exercise that characterises risk-based regulation. As a prerequisite to adopting agent-based techniques for simulating decisions under uncertainty, this thesis provides a critical review of the influence that power structure and personality have on the brokering of scientific evidence that informs risk decisions. Three case studies, each representing a different perspective on risk-based regulation, are presented: nuclear waste disposal, the disposal of avian-influenza-infected animal carcases and the reduction of dietary salt intake. Semi-structured interviews were conducted with an expert from each case study, and the logical sequence in which decisions were made was mapped out and used to inform the development of an agent-based simulation model. The model is designed to capture the character of the brokering process by transparently setting out how evidence is transmitted from the provider of evidence to the final decision maker. It comprises two agents, a recipient and a provider of evidence, and draws upon a historic knowledge base, permitting the user to vary components of the interacting agents and of the decision-making procedure and demonstrating the influence that power structure and personality have on agent receptivity and on the confidence attached to different lines of evidence. This is a novel step forward because it goes beyond the scope of current risk management frameworks, for example by permitting the user to explore the influence that participants have in weighing and strengthening different lines of evidence and the impact this has on the final decision outcome.
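Purely as a toy illustration of the two-agent structure described above (a provider and a recipient of evidence), the weighting of lines of evidence by receptivity might be sketched as follows. The parameter names (openness, deference_to_power, credibility) are invented for illustration and are not taken from the thesis's model.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    credibility: float          # standing of the evidence provider, 0..1 (invented)

@dataclass
class Recipient:
    openness: float             # personality trait, 0..1 (invented)
    deference_to_power: float   # weight given to the power structure, 0..1 (invented)

def receptivity(recipient: Recipient, provider: Provider) -> float:
    """Toy receptivity: a blend of personality and deference to the provider's standing."""
    return (recipient.openness + recipient.deference_to_power * provider.credibility) / 2

def confidence_in_decision(lines_of_evidence, recipient, provider):
    """Weight each line of evidence (strength in 0..1) by receptivity and average,
    mimicking the confidence-building step described above."""
    r = receptivity(recipient, provider)
    return sum(r * s for s in lines_of_evidence) / len(lines_of_evidence)

# Example: a deferential recipient facing a high-status provider.
print(confidence_in_decision([0.6, 0.8, 0.4],
                             Recipient(openness=0.5, deference_to_power=0.9),
                             Provider(credibility=0.8)))
```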
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Castro, Jaime. "Influence of random formation on paper mechanics : experiments and theory". Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/7016.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Li, Qi. "Acoustic noise emitted from overhead line conductors". Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/acoustic-noise-emitted-from-overhead-line-conductors(90a5c23c-a7fc-4230-bbab-16b8737b2af2).html.

Texto completo
Resumen
The development of new types of conductors and the increase in voltage levels have driven the need for research on evaluating overhead line acoustic noise. The surface potential gradient of a conductor is a critical design parameter for planning overhead lines, as it determines the level of corona loss (CL), radio interference (RI) and audible noise (AN). The majority of existing models for surface gradient calculation are based on analytical methods, which restricts their application to simple surface geometries. This thesis proposes a novel method that combines analytical and numerical procedures to predict the surface gradient; stranding shape, proximity of the tower, protrusions and bundle arrangements are considered within this model. One of UK National Grid's transmission line configurations has been selected as an example to compare the results of the different methods. The different stranding shapes are a key variable in determining dry surface fields. The dynamic behaviour of water droplets subject to AC electric fields is investigated by experiment and finite element modelling, considering the motion of a water droplet on the surface of a metallic sphere. To understand the consequences of vibration, an FEA model is introduced to study the dynamics of a single droplet in terms of the phase shift between the vibration and the exciting voltage, and the evolution of the electric field over the whole vibration cycle is investigated. The profile of the electric field and the characteristics of the mechanical vibration are evaluated. Surprisingly, the phase shift between these characteristics means that the maximum field occurs when the droplet is in a flattened profile rather than when it is 'pointed'. Research on audible noise emitted from overhead line conductors is reviewed, and a unique experimental setup employing a semi-anechoic chamber and a corona cage is described. Acoustically, this facility isolates undesirable background noise and provides a free-field test space inside the anechoic chamber; electrically, the corona cage simulates a 3 m section of 400 kV overhead line conductors by achieving the equivalent surface gradient. UV imaging, acoustic measurements and a partial discharge detection system are employed as instrumentation. The acoustic and electrical performance is demonstrated through a series of experiments, the results are discussed, and the mechanisms for acoustic noise are considered. A strategy for evaluating the noise emission level of overhead line conductors is developed, and comments are made on predicting acoustic noise from overhead lines. The technical achievements of this thesis are threefold. First, an FEA model is developed to calculate the surface electric field of overhead line conductors; this has been demonstrated as an efficient tool for power utilities, especially for dry conditions. Second, the droplet vibration study describes droplet behaviour under rain conditions, such as the phase shift between the voltage and the vibration magnitude, the ejection phenomena and the electric field enhancement due to the change in droplet shape. Third, a standardized procedure is developed for assessing the noise emission level and the characteristics of noise emissions for various types of existing conductors in National Grid.
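For orientation, the classical analytical starting point that numerical models such as the thesis's FEA approach refine is the image-charge approximation for a single smooth cylindrical conductor above a ground plane. The sketch below (not the thesis's model) only indicates the order of magnitude of the surface gradient; stranding, bundles, towers and protrusions all modify this value.

```python
import math

def surface_gradient_single_conductor(voltage_kv, radius_m, height_m):
    """Average surface field of a single smooth cylindrical conductor above ground.

    Image-charge approximation, valid for height >> radius:
    E = U / (r * ln(2h / r)). Returns the gradient in kV/cm.
    """
    e_v_per_m = (voltage_kv * 1e3) / (radius_m * math.log(2 * height_m / radius_m))
    return e_v_per_m / 1e5  # convert V/m to kV/cm

# Rough example: a 1.5 cm radius conductor at 12 m height energised at about
# 230 kV phase-to-ground (a 400 kV line) gives roughly 21 kV/cm; bundling the
# conductors is what brings the gradient down in practice.
print(surface_gradient_single_conductor(230, 0.015, 12.0))
```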
Los estilos APA, Harvard, Vancouver, ISO, etc.
