Theses on the topic "Model correction"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Model correction".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Bäckström, Fredrik and Anders Ivarsson. "Meta-Model Guided Error Correction for UML Models". Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8746.
Modeling is a complex process which is quite hard to do in a structured and controlled way. Many companies provide a set of guidelines for model structure, naming conventions and other modeling rules. Using meta-models to describe these guidelines makes it possible to check whether a UML model follows the guidelines or not. Providing this error checking of UML models is only one step on the way to making modeling software an even more valuable and powerful tool.
Moreover, by providing correction suggestions and automatic correction of these errors, we try to give the modeler as much help as possible in creating correct UML models. Since the area of model correction based on meta-models has not been researched earlier, we have taken an explorative approach.
The aim of the project is to create an extension of the program MetaModelAgent, by Objektfabriken, which is a meta-modeling plug-in for IBM Rational Software Architect. The thesis shows that error correction of UML models based on meta-models is a possible way to provide automatic checking of modeling guidelines. The developed prototype is able to give correction suggestions and automatic correction for many types of errors that can occur in a model.
The results imply that meta-model guided error correction techniques should be further researched and developed to enhance the functionality of existing modeling software.
Kokkola, N. "A double-error correction computational model of learning". Thesis, City, University of London, 2017. http://openaccess.city.ac.uk/18838/.
Bulygina, Nataliya. "Model Structure Estimation and Correction Through Data Assimilation". Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/195345.
Hu, Zhongbo. "Atmospheric artifacts correction for InSAR using empirical model and numerical weather prediction models". Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/668264.
InSAR techniques have demonstrated an unprecedented capability for monitoring large-scale ground deformation with centimetre or even millimetre accuracy. However, several factors affect the reliability and precision of their applications. Among them, atmospheric artifacts caused by spatial and temporal variations in the state of the atmosphere often add noise to the interferograms. Mitigating atmospheric artifacts therefore remains one of the greatest challenges facing the InSAR community. State-of-the-art research has shown that atmospheric artifacts can be partially compensated with empirical models, temporal-spatial filtering in InSAR time series, GPS zenith path delay, and numerical weather prediction models. In this thesis we first develop a covariance-weighted linear empirical model correction method. Second, a realistic LOS-direction integration approach based on global reanalysis data is employed and compared exhaustively with the conventional method, which integrates along the zenith direction. Finally, the realistic integration method is applied to data from a local WRF numerical forecast model, and detailed comparisons between different global reanalysis datasets and the local WRF model are evaluated. Regarding correction methods based on empirical models, many publications have studied the correction of the stratified tropospheric phase delay by assuming a linear model between it and the topography. Most of these studies, however, have not considered the effect of turbulent atmospheric artifacts when fitting the linear model to the data. This thesis presents an improved technique that minimizes the influence of the turbulent atmosphere on the model fit.
In the proposed algorithm, the model is fitted to the phase differences between pixels instead of the unwrapped phase of each pixel. In addition, the phase differences are weighted according to their APS covariance, estimated from an empirical variogram, to reduce the impact on the model fit of pixel pairs affected by significant atmospheric turbulence. The performance of the proposed method has been validated with simulated and real Sentinel-1 SAR data over the island of Tenerife, Spain. Regarding methods that use meteorological observations to mitigate the APS, a realistic and accurate computation strategy using global atmospheric reanalysis data has been implemented. With this approach, the realistic LOS path between the satellite and the monitored points is considered, instead of being converted from the zenith path delay. Compared with the zenith-delay-based method, its main advantage is that it avoids errors caused by anisotropic atmospheric behaviour. The accurate integration method is validated with Sentinel-1 data at three test sites: the island of Tenerife, Spain; Almería, Spain; and the island of Crete, Greece. Compared with the conventional zenith method, the realistic integration method shows a clear improvement.
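In its simplest form, the covariance-weighted fitting of the stratified delay described in this abstract reduces to weighted least squares on pixel-pair phase differences. The sketch below is illustrative only; the single-slope model, function name, and weighting scheme are assumptions, not the thesis's actual implementation:

```python
import numpy as np

def fit_stratified_delay(dphi, dh, weights):
    """Weighted least-squares estimate of the slope k in dphi_ij ~ k * dh_ij,
    fitted on pixel-pair differences of phase (dphi) and height (dh).
    Pairs with large turbulent-APS covariance should receive small weights."""
    w = np.asarray(weights, dtype=float)
    return np.sum(w * dh * dphi) / np.sum(w * dh * dh)

# synthetic check: recover a known phase/topography slope
rng = np.random.default_rng(0)
dh = rng.uniform(-500.0, 500.0, 2000)           # height differences (m)
dphi = 0.02 * dh + rng.normal(0.0, 0.1, 2000)   # phase differences (rad)
k = fit_stratified_delay(dphi, dh, np.ones_like(dh))
```

Once a slope k is estimated, the stratified component k·h can be subtracted from each pixel's phase before further processing.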
Pointoin, Barry William. "Model-based randoms correction for 3D positron emission tomography". Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31046.
Faculty of Science; Department of Physics and Astronomy; Graduate.
Molin, Simon. "House Price Dynamics in Sweden : Vector error-correction model". Thesis, Umeå universitet, Nationalekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-172367.
Zechman, Emily Michelle. "Improving Predictability of Simulation Models using Evolutionary Computation-Based Methods for Model Error Correction". NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-08082005-105133/.
Kurachi, Masafumi, Robert Shrock and Koichi Yamawaki. "Z boson propagator correction in technicolor theories with extended technicolor effects included". American Physical Society, 2007. http://hdl.handle.net/2237/11301.
Maurer, Dustin. "Comparison of background correction in tiling arrays and a spatial model". Kansas State University, 2011. http://hdl.handle.net/2097/12130.
Texto completoDepartment of Statistics
Susan J. Brown
Haiyan Wang
DNA hybridization microarray technologies have made it possible to gain an unbiased perspective of whole-genome transcriptional activity on a scale that is growing rapidly. However, because biologically irrelevant bias is introduced by the experimental process and the machinery involved, correction methods are needed to restore the data to its true, biologically meaningful state. It is therefore important that the algorithms developed to remove such technical biases are accurate and robust. This report explores the concept of background correction in microarrays using a real data set of five replicates of whole-genome tiling arrays hybridized with genetic material from Tribolium castaneum. It reviews the literature surrounding such correction techniques and explores some of the more traditional methods by implementing them on the data set. Finally, it introduces an alternative approach, implements it, and compares it to the traditional approaches for the correction of such errors.
Leach, Mark Daniel. "A discrete, stochastic model and correction method for bacterial source tracking". Online access for everyone, 2007. http://www.dissertations.wsu.edu/Thesis/Spring2007/m_leach_050207.pdf.
Granholm, George Richard 1976. "Near-real time atmospheric density model correction using space catalog data". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/44899.
Texto completoIncludes bibliographical references (p. 179-184).
Several theories have been presented regarding the creation of a neutral density model that is corrected or calibrated in near-real time using data from space catalogs. These theories are usually limited to a small number of frequently tracked "calibration satellites" about which information such as mass and cross-sectional area is known very accurately. This work, however, attempts to validate a methodology by which drag information from all available low-altitude space objects is used to update any given density model on a comprehensive basis. The basic update and prediction algorithms and a technique to estimate true ballistic factors are derived in detail. A full simulation capability is independently verified. The process is initially demonstrated using simulated range, azimuth, and elevation observations so that issues such as the required number and types of calibration satellites, density of observations, and susceptibility to atmospheric conditions can be examined. Methods of forecasting the density correction models are also validated under different atmospheric conditions.
by George Richard Granholm.
S.M.
Mendoza, Juan Pablo. "Regions of Inaccurate Modeling for Robot Anomaly Detection and Model Correction". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1059.
Watkins, Yijing Zhang. "Image Compression and Channel Error Correction using Neurally-Inspired Network Models". OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1529.
Cirineo, Tony and Bob Troublefield. "STANDARD INTEROPERABLE DATALINK SYSTEM, ENGINEERING DEVELOPMENT MODEL". International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608398.
This paper describes an Engineering Development Model (EDM) for the Standard Interoperable Datalink System (SIDS). This EDM represents an attempt to design and build a programmable system that can be used to test and evaluate various aspects of a modern digital datalink. First, commercial wireless components and standards that could be used to construct the SIDS datalink were investigated. This investigation led to the construction of an engineering development model, which presently consists of wire-wrap and prototype circuits implementing many aspects of a modern digital datalink.
Silber, Frank. "Makroökonometrische Anpassungsanalyse im Vector-Error-Correction-Model (VECM) : Untersuchungen an ausgewählten Arbeitsmärkten /". Frankfurt am Main: Lang, 2003. http://www.gbv.de/dms/zbw/362076561.pdf.
Flouri, Dimitra. "Tracer-kinetic model-driven motion correction with application to renal DCE-MRI". Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/16485/.
Rosa, Cristian. "Vérification des performances et de la correction des systèmes distribués". Thesis, Nancy 1, 2011. http://www.theses.fr/2011NAN10113/document.
Distributed systems are in the mainstream of information technology. It has become standard to rely on multiple distributed units to improve the performance of an application, help tolerate component failures, or handle problems too large to fit in a single processing unit. The design of algorithms adapted to the distributed context is particularly difficult due to the asynchrony and nondeterminism that characterize distributed systems. Simulation offers the ability to study the performance of distributed applications without the complexity and cost of real execution platforms. Model checking, on the other hand, makes it possible to assess the correctness of such systems in a fully automatic manner. In this thesis, we explore the idea of integrating a model checker with a simulator for distributed systems in a single framework, gaining both performance and correctness assessment capabilities. To deal with the state explosion problem, we present a dynamic partial order reduction (DPOR) algorithm that performs the exploration based on a reduced set of networking primitives, which makes it possible to verify programs written for any of the communication APIs offered by the simulator. This is only possible after the development of a full formal specification of the semantics of these networking primitives, which allows reasoning about the independence of communication actions as required by the DPOR algorithm. We show through experimental results that our approach is capable of dealing with nontrivial, unmodified C programs written for the SimGrid simulator. Moreover, we propose a solution to the problem of scalability for CPU-bound simulations, envisioning the simulation of peer-to-peer applications with millions of participating nodes. Contrary to classical parallelization approaches, we propose parallelizing some internal steps of the simulation while keeping the whole process sequential.
We present a complexity analysis of the simulation algorithm and compare it to the classical sequential algorithm to obtain a criterion that describes in which situations a speed-up can be expected. An important result is the observation of the relation between the precision of the models used to simulate the hardware resources and the potential degree of parallelization attainable with this approach. We present several case studies that benefit from the parallel simulation, and we show the results of a simulation at unprecedented scale of the Chord peer-to-peer protocol with two million nodes executed on a single machine.
Ercolani, Marco G. "Price uncertainty, investment and consumption". Thesis, University of Essex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265023.
Giroud, Xavier. "A Markov-Switching Equilibrium Correction Model for Intraday Futures and Stock Index Returns". St. Gallen, 2004. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/99630345001/$FILE/99630345001.pdf.
Lindgren, Jonathan. "Modeling credit risk for an SME loan portfolio: An Error Correction Model approach". Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-136176.
Since the global financial crisis of 2008, several major regulations have been implemented to ensure that banks follow sound risk management. Among these are the Basel II Accords, which implement capital requirements for credit risk. The core measures of credit risk evaluation are the Probability of Default and the Loss Given Default. The Basel II Advanced Internal Ratings-Based approach allows banks to model these measures for individual portfolios and make their own evaluations. This thesis, in compliance with the Advanced Internal Ratings-Based approach, evaluates the use of an Error Correction Model, a model proven to be strong in stress testing, when modeling the Probability of Default. Furthermore, a Loss Given Default function is implemented that ties Probability of Default and Loss Given Default to systematic risk. The Error Correction Model is implemented on an SME portfolio from one of the "big four" banks in Sweden. The model is evaluated and stress tested with the European Banking Authority's 2016 stress test scenario and analyzed, with promising results.
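For readers unfamiliar with the model class, an Error Correction Model of the kind this abstract refers to is commonly estimated with the two-step Engle-Granger procedure. The sketch below is a generic illustration on simulated data, not the thesis's actual Probability of Default specification:

```python
import numpy as np

def fit_ecm(y, x):
    """Two-step Engle-Granger error correction model (illustrative).

    Step 1: long-run (cointegrating) regression y_t = a + b*x_t by OLS.
    Step 2: regress dy_t on dx_t and the lagged equilibrium error."""
    X = np.column_stack([np.ones_like(x), x])
    beta_lr, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_lr                      # deviation from equilibrium
    dy, dx = np.diff(y), np.diff(x)
    Z = np.column_stack([np.ones_like(dx), dx, resid[:-1]])
    beta_sr, *_ = np.linalg.lstsq(Z, dy, rcond=None)
    return beta_lr, beta_sr                      # beta_sr[2]: adjustment speed

# simulate a cointegrated pair: y = 2 + 0.5*x + AR(1) disturbance
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=3000))
u = np.zeros(3000)
for t in range(1, 3000):
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = 2.0 + 0.5 * x + u
beta_lr, beta_sr = fit_ecm(y, x)
```

A negative adjustment coefficient (`beta_sr[2]`) indicates that deviations from the long-run relation are pulled back toward equilibrium, which is what makes the model class attractive for stress testing.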
Santana, Amarilio Luiz de. "Forecasts for collection of VAT in Ceará: a model analysis with error correction". Universidade Federal do Ceará, 2009. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=4340.
This research aims to offer the managers of the State of Ceará an additional tool for forecasting the monthly collection of the Tax on the Circulation of Goods and Services (ICMS), through an econometric model that is consistent and has good predictive power. To that end, error correction models (ECM) were used, with the cointegrating vector estimated by DOLS (Dynamic Ordinary Least Squares). The forecasts generated by the research confirm the ECM's forecasting capability, given its small margin of error. In addition, comparisons were made with the forecasts produced by SEFAZ-CE and with those of Rocha Neto (2008), based on ARIMA models; the model employed here proved more accurate than both the method used by the Secretariat of Finance and the ARIMA models for forecasting monthly ICMS collections.
Mazzocco, Philip James. "Moderators of the effects of mental imagery on persuasion: the cognitive resources model and the imagery correction model". Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1127050519.
Title from first page of PDF file. Document formatted into pages; contains xvi, 251 p.; also includes graphics. Includes bibliographical references (p. 157-174). Available online via OhioLINK's ETD Center.
Balucan, Phillip James 1977. "Model reduction of a set of elastic, nested gimbals by component mode selection criteria and static correction modes". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/17520.
Includes bibliographical references (p. 112-113).
Model reduction techniques provide a computationally inexpensive method for solving elastic dynamic problems with complex structures. The elastic nested gimbal problem requires model reduction techniques as a means to reduce the dynamic equations. This is done using two methods: one technique employs mode ranking criteria to select the modes that influence the dynamics of the problem the most; the second involves the use of static correction modes along with vibration modes to simulate the dynamics of this nested gimbal model. A model of the structure is described in terms of a lumped-parameter finite element model. This mathematical model of the physical system serves as the basis for developing model reduction techniques for the nested gimbal problem. A truth model based on given initial conditions is used to assess the accuracy of the reduced model. A number of model reduction theories are described and applied to the gimbal simulation. The equations for the mode ranking techniques and the static and vibration mode analysis are developed, as well as a quantitative error measure. Comparisons are made with the truth model using the mode ranking criteria based on the momentum coefficients and the frequency cutoff criteria. Test cases are also run using static correction modes with vibration modes, and static correction modes with the vibration modes ranked by momentum coefficients. The use of various static modes is discussed during the implementation of the static correction mode method. Applying the model reduction theories to a set of elastic, nested gimbals, the mode ranking criteria provide better results, based on the error measure, than the frequency cutoff criteria when the simulation is run using fewer than twenty-five modes. Using static modes along with ranked modes to represent the elastic dynamics of the problem does not provide better results than using the unranked vibration modes with the static modes.
Modeling the dynamics using static correction modes with the unranked vibration modes provides the best results while using the least number of modes. It is advantageous to take into account the given conditions applied to the system when reducing the model of a complex dynamic problem.
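The static correction idea in this abstract can be sketched for a generic lumped-parameter system: compute the exact static response to a load pattern and subtract the part already captured by the retained vibration modes. This is a generic illustration using SciPy's generalized eigensolver, not the thesis's gimbal model:

```python
import numpy as np
from scipy.linalg import eigh, solve

def static_correction_mode(K, M, f, n_kept):
    """Residual static correction mode for a truncated modal basis.

    K, M: stiffness and mass matrices; f: load pattern; n_kept: number of
    retained vibration modes. Returns the static content they miss."""
    w2, phi = eigh(K, M)                   # mass-normalized modes, ascending
    u_full = solve(K, f)                   # exact static response K^-1 f
    kept = phi[:, :n_kept]
    u_modal = kept @ ((kept.T @ f) / w2[:n_kept])
    return u_full - u_modal

# small 6-DOF check: keeping every mode leaves no static residual
rng = np.random.default_rng(2)
A, B = rng.normal(size=(6, 6)), rng.normal(size=(6, 6))
K = A @ A.T + 6.0 * np.eye(6)              # symmetric positive definite
M = B @ B.T + 6.0 * np.eye(6)
f = rng.normal(size=6)
r_none = static_correction_mode(K, M, f, 6)   # ~ zero vector
r_two = static_correction_mode(K, M, f, 2)    # nonzero correction
```

Appending the residual vector to the truncated modal basis restores the exact static response to the load pattern, which is the benefit the abstract reports.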
by Phillip James Balucan.
S.M.
Lee, Shiyoung. "Effects of Input Power Factor Correction on Variable Speed Drive Systems". Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/26493.
Ph. D.
Kim, Y. S. and R. Eng. "Estimation of Tec and Range of EMP Source Using an Improved Ionospheric Correction Model". International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/611957.
An improved ionospheric delay correction model for a transionospheric electromagnetic pulse (EMP) is used for estimating the total-electron-content (TEC) profile of the path and accurate ranging of the EMP source. For a known pair of time of arrival (TOA) measurements at two frequency channels, the ionospheric TEC information is estimated using a simple numerical technique. This TEC information is then used for computing ionospheric group delay and pulse broadening effect correction to determine the free space range. The model prediction is compared with the experimental test results. The study results show that the model predictions are in good agreement with the test results.
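The physics behind such dual-frequency estimation is standard: to first order, the ionospheric group delay at frequency f is 40.3·TEC/(c·f²), so a pair of TOAs at two frequencies determines both the TEC and the free-space range. A minimal sketch of that arithmetic (not the paper's actual numerical technique, which also handles pulse broadening):

```python
C = 299_792_458.0   # speed of light (m/s)
K40 = 40.3          # first-order ionospheric constant (m^3/s^2)

def tec_and_range(t1, t2, f1, f2):
    """Solve for TEC (electrons/m^2) and free-space range (m) from TOAs
    t1, t2 (s) measured at carrier frequencies f1, f2 (Hz)."""
    tec = C * (t1 - t2) / (K40 * (1.0 / f1**2 - 1.0 / f2**2))
    free_range = C * t1 - K40 * tec / f1**2   # strip dispersive delay at f1
    return tec, free_range

# round-trip check with synthetic TOAs for a 1000 km path
R, TEC = 1.0e6, 1.0e17
f1, f2 = 150e6, 400e6
t1 = R / C + K40 * TEC / (C * f1**2)
t2 = R / C + K40 * TEC / (C * f2**2)
tec_est, range_est = tec_and_range(t1, t2, f1, f2)
```

Because the delay scales as 1/f², the lower channel carries most of the dispersive information; widely separated frequencies make the estimate better conditioned.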
Yerrabolu, Pavan. "Correction model based ANN modeling approach for the estimation of Radon concentrations in Ohio". University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1341604941.
Tunehed, Per. "Is the Swedish housing market overvalued? : An analysis using a Vector error correction model". Thesis, Umeå universitet, Nationalekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185129.
Switanek, Matthew B., Peter A. Troch, Christopher L. Castro, Armin Leuprecht, Hsin-I. Chang, Rajarshi Mukherjee and Eleonora M. C. Demaria. "Scaled distribution mapping: a bias correction method that preserves raw climate model projected changes". COPERNICUS GESELLSCHAFT MBH, 2017. http://hdl.handle.net/10150/624439.
Wegener, Duane Theodore. "The flexible correction model : using naive theories of bias to correct assessments of targets". Connect to resource, 1994. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1234615264.
Johansson, Nils. "Estimation of fatigue life by using a cyclic plasticity model and multiaxial notch correction". Thesis, Linköpings universitet, Mekanik och hållfasthetslära, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158095.
Mohapatra, Sucheta. "Development and quantitative assessment of a beam hardening correction model for preclinical micro-CT". Thesis, University of Iowa, 2012. https://ir.uiowa.edu/etd/3500.
Texto completoRajam, G. "The UK food chain : restructuring, strategies and price transmission". Thesis, University of Nottingham, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243617.
Mena, Andrade Ramiro Francisco. "Fast simulation : assisted shape correction after machining". Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671497.
Distortions after the machining of large aluminium parts are a recurrent problem for the aeronautical industry. These deviations from the design geometry are due to the presence of residual stresses, which develop along the manufacturing chain, especially after the quench heat treatment. To restore the nominal geometry, a series of highly manual and time-consuming reshaping operations is required. This research work focuses on developing effective numerical simulation tools to assist operators in bending straightening, one of the most common reshaping operations. To this end, a finite element simulation model representative of the manufacturing chain, including quenching, machining and reshaping, is developed, which makes it possible to predict residual stresses and distortions in thick-walled aluminium forgings. The model is validated against experimental data found in the literature. The concept of reshaping diagrams is then introduced, a tool for selecting a near-optimal bending load that minimizes distortion. It is shown that the reshaping diagram does not need to account for the residual stress field, since its only effect is to shift the diagram horizontally by a certain offset. The overall behaviour, including a real three-dimensional residual stress field in a forging, can therefore be recovered by shifting the residual-stress-free reshaping diagram by the appropriate offset. Finally, a strategy is proposed to identify this offset on the fly during the reshaping operation, using simple force-displacement measurements.
The use of new numerical techniques, especially model order reduction (MOR), is then explored with a twofold purpose: i) to speed up the computation of the reshaping diagrams; and ii) to account for several process parameters, such as the initial distortion or the reshaping configuration. To this end, we rely on the Sparse Subspace Learning (SSL) method, a non-intrusive MOR method that reconstructs the solution space directly from the finite element model outputs. With the parametric solution at hand, the optimal reshaping configuration that minimizes distortion can be found in real time, before launching the actual reshaping operation. Finally, first steps are proposed towards extending this methodology, which combines reshaping diagrams and MOR methods, to a multi-stage setting in which several shape correction operations are performed sequentially.
Numerical simulation is recognized as the third branch of science. Since the seminal finite element work of Turner et al. (1956), impressive improvements have been made in both software and hardware, so that today there is hardly an engineering device (in the broad sense of the term) that is not captured by some simulation. However, just as there is a gap between theory and practice, there remain problems at the industrial level where numerical simulation has not yet replaced craft-based approaches. Distortion mitigation in large aluminium forgings is one example. Post-machining distortion is an open problem that affects every large thick-walled aluminium forging assembled on an aircraft. This distortion originates from residual stresses (RS) developed along the manufacturing chain, in particular after quench heat treatment. When machining takes place, it causes an internal redistribution, because the previous equilibrium state is broken by the material removal. In theory, if the RS of a part were known in advance, a suitable machining sequence could be planned with the aim of mitigating or counteracting a distorted geometry. This strategy is already implemented for parts machined from rolled plates, where the RS can be considered constant in the longitudinal direction. For forgings, however, the RS depend on the geometry, and a complex three-dimensional stress field is therefore present. Significant research efforts are being made to predict the RS of forgings numerically, but the deterministic nature of numerical simulations still fails to capture the variable behaviour of the distortions.
For now, current research treats the distortion problem as something to be avoided from the start of the manufacturing chain, that is, by concentrating efforts on the earlier stages, or at the latest during machining. In this thesis, we chose to follow the opposite direction: how to proceed and manage the distortion once it has appeared. To accomplish this task, the problem was first studied with the classical finite element method (FEM), and subsequently by applying a non-intrusive model order reduction (ROM) technique called Sparse Subspace Learning (SSL). The content of this thesis is structured as follows. Chapter 1 introduces the distortion problem, followed by the definition and interconnection of the three main actors: residual stresses, distortion and straightening. Straightening techniques are then reviewed, and finally the challenges and prospects of straightening simulation are examined. Chapter 2 presents two numerical models devoted to determining residual stresses after quenching and plastic bending, heat treatment being considered the main source of residual stresses in aluminium forgings. It is within this framework (residual stresses of mostly thermal origin) that the straightening operation selected in this thesis is studied. After their validation, the two models are applied in the following chapters, where they constitute the reference solution of the problem. In Chapter 3, reshaping diagrams are presented as a tool to assist the bending straightening operation. In addition, the residual-stress-free assumption is presented as an alternative way to study the straightening problem.
This approach uses the distorted geometry as the main input, allowing the straightening step to be simulated by considering the part free of residual stresses. Chapter 4 presents a multiparametric study of bending straightening using the SSL. The straightening diagrams are generalized for a previously defined set of parameters. Since straightening is an iterative and sequential procedure, Chapter 5 details the simulation of two consecutive bending straightening operations. Different straightening strategies are studied, and a methodology is provided to address the open straightening problem more systematically.
Högström, Martin. "Wind Climate Estimates - Validation of Modelled Wind Climate and Normal Year Correction". Thesis, Uppsala University, Air and Water Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-8023.
Long time average wind conditions at potential wind turbine sites are of great importance when deciding if an investment will be economically safe. Wind climate estimates such as these are traditionally done with in situ measurements for a number of months. During recent years, a wind climate database has been developed at the Department of Earth Sciences, Meteorology at Uppsala University. The database is based on model runs with the higher order closure mesoscale MIUU-model in combination with long term statistics of the geostrophic wind, and is now used as a complement to in situ measurements, hence speeding up the process of turbine siting. With this background, a study has been made investigating how well actual power productions during the years 2004-2006 from 21 Swedish wind turbines correlate with theoretically derived power productions for the corresponding sites.
When comparing theoretically derived power production based on long-term statistics with measurements from a shorter time period, a correction is necessary to make relevant comparisons. This normal year correction is a main focus, and a number of wind energy indices used for this purpose are evaluated: two publicly available indices (the Swedish and the Danish Wind Index) and one derived theoretically from physical relationships and NCEP/NCAR reanalysis data (the Geostrophic Wind Index). Initial testing suggests that the three indices in some cases give very different results, so further investigation is necessary. An evaluation of the Geostrophic Wind Index is made using in situ measurements.
When correcting measurement periods limited in time to a long-term average, a larger statistical dispersion is expected for shorter measurement periods, decreasing as the periods grow longer. To investigate this assumption, a 7-year wind speed measurement dataset was corrected with the Geostrophic Wind Index, simulating a number of hypothetical measurement periods of various lengths. When normal year correcting a measurement period of a given length, the statistical dispersion decreases significantly during the first 10 months; a reduction to about half the initial dispersion can be seen after just 5 months of measurements.
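The normal year correction described above scales production measured over a limited period by a wind energy index. A minimal sketch of that idea follows; the function name and the convention that an index of 100 denotes a climatologically normal period are illustrative assumptions, not taken from the thesis.

```python
def normal_year_correct(period_production, period_index, normal_index=100.0):
    """Scale production measured over a limited period to a long-term
    ('normal year') estimate using a wind energy index.

    period_production : energy produced during the measurement period
    period_index      : wind energy index for the same period
                        (100 = a climatologically normal period)
    normal_index      : index value of a normal year
    """
    if period_index <= 0:
        raise ValueError("index must be positive")
    return period_production * normal_index / period_index

# A windy period (index 120) is scaled down toward the long-term mean:
estimate = normal_year_correct(period_production=1200.0, period_index=120.0)
print(estimate)  # 1000.0
```

The same ratio applied to a calm period (index below 100) scales the measured production upward instead.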
Results show that the theoretical normal year corrected power production is in general around 15-20% lower than expected. A probable explanation for the larger part of this bias is serious problems with the reported time-not-in-operation for wind turbines in official power production statistics, which makes it impossible to compare actual power production with theoretically derived production without more detailed information. The theoretically derived Geostrophic Wind Index correlates well with measurements; however, the theoretically expected cubic relationship with wind speed seems to account for the total energy of the wind, an amount that cannot be absorbed by the turbines when wind speeds are far above normal.
Liu, Wenjie. "Estimation and bias correction of the magnitude of an abrupt level shift". Thesis, Linköpings universitet, Statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-84618.
Gretton, Jeremy David. "Perceived Breadth of Bias as a Determinant of Bias Correction". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1499097376679535.
Brandt, Oskar and Rickard Persson. "The relationship between stock price, book value and residual income: A panel error correction approach". Thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-254344.
Buendía, Rubén. "Hook Effect on Electrical Bioimpedance Spectroscopy Measurements. Analysis, Compensation and Correction". Thesis, Högskolan i Borås, Institutionen Ingenjörshögskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-19565.
Palanisamy, Bakkiyalakshmi. "Evaluation of SWAT model - subdaily runoff prediction in Texas watersheds". Texas A&M University, 2003. http://hdl.handle.net/1969.1/5921.
Gqozo, Pamela. "Impact of oil price on tourism in South Africa: an error correction model (ECM) analysis". Thesis, University of Fort Hare, 2013. http://hdl.handle.net/10353/d1017941.
Nastansky, Andreas, Alexander Mehnert and Hans Gerhard Strohe. "A vector error correction model for the relationship between public debt and inflation in Germany". Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/5024/.
Texto completoChang, Che-Yu y 張哲豫. "Semi-Automatic Skin Color Model Correction". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/74745746512891794969.
National Taiwan Ocean University
Department of Computer Science and Engineering
104
In this thesis, a novel semi-automatic skin model correction method is presented. The method improves the accuracy of skin segmentation under different lighting conditions. We treat color temperature as the light color and use the complexion of the user's hand as a sample to estimate the color temperature of the environment, which is then used to correct the skin color model. We also provide test results showing that our method improves the accuracy of skin segmentation and outperforms fully automatic methods.
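The abstract describes correcting a skin color model from a sampled hand region. One simple way to realize such an illuminant correction is a diagonal (von Kries style) channel scaling; the sketch below is an illustration of that idea under stated assumptions, not the thesis's actual method, and the reference skin tone is invented.

```python
# Assumed canonical skin tone under neutral lighting (illustrative only).
REFERENCE_SKIN_RGB = (200.0, 150.0, 120.0)

def channel_gains(sample_rgb, reference_rgb=REFERENCE_SKIN_RGB):
    """Per-channel gains mapping the sampled hand tone to the reference."""
    return tuple(ref / s for ref, s in zip(reference_rgb, sample_rgb))

def correct_pixel(rgb, gains):
    """Scale a pixel so the fixed skin color model can be used unchanged."""
    return tuple(min(255.0, c * g) for c, g in zip(rgb, gains))

# A bluish illuminant shifts the sampled hand toward blue; the gains
# pull pixels back toward the canonical skin locus before classification.
gains = channel_gains((180.0, 140.0, 130.0))
corrected = correct_pixel((180.0, 140.0, 130.0), gains)
```

By construction the sampled hand color maps exactly onto the reference tone, and every other pixel is shifted by the same per-channel factors.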
WANG, ZHONG-DING and 王鐘頂. "Finite element model identification and correction using experimental modal data". Thesis, 1992. http://ndltd.ncl.edu.tw/handle/92428091860947499705.
Texto completoHsu, Chao-Chih y 許超智. "A Fuzzy CMAC Model for Color Correction". Thesis, 1995. http://ndltd.ncl.edu.tw/handle/12882947652501509848.
National Cheng Kung University
Institute of Information and Electronic Engineering
83
Albus's Cerebellar Model Articulation Controller (CMAC) has been used in many practical areas with considerable success and is capable of learning nonlinear functions extremely quickly due to the local nature of its weight updating. In addition, the higher-order CMAC model proposed by Stephen and David adopts B-spline receptive field functions and a more general addressing scheme for weight retrieval, which can learn both functions and function derivatives. In this thesis, we present a three-layered fuzzy CMAC network that takes bell-shaped membership functions as the receptive field functions and uses the centroid of area (COA) approach as the defuzzification interface. The learning algorithm is based on the maximum gradient method. For situations with insufficient and irregularly distributed training patterns, we propose a sampling method based on an interpolation scheme to generate proper training patterns. The proposed fuzzy CMAC model is basically a table look-up model in which fuzzy weights are stored and manipulated using fuzzy set theory. The model adaptively adjusts the weights according to sample data to approximate nonlinear continuous functions. Finally, we run experiments, including general function approximation and color correction, to verify the proposed model.
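The table look-up structure and local weight updating that make CMAC learning fast can be sketched in a few lines. The toy below uses binary rectangular receptive fields and LMS updates rather than the bell-shaped fuzzy memberships and COA defuzzification of the thesis; all names and constants are illustrative.

```python
import math

class CMAC:
    """Tiny 1-D CMAC: each input activates `g` consecutive weights, and
    the output is their sum, so training one point only touches a small
    local region of the weight table."""

    def __init__(self, n_weights=64, generalization=8, x_min=0.0, x_max=1.0):
        self.w = [0.0] * n_weights
        self.g = generalization
        self.n = n_weights
        self.x_min, self.x_max = x_min, x_max

    def _active(self, x):
        # Quantize the input, then activate g consecutive weights.
        q = int((x - self.x_min) / (self.x_max - self.x_min) * (self.n - self.g))
        q = max(0, min(self.n - self.g, q))
        return range(q, q + self.g)

    def predict(self, x):
        return sum(self.w[i] for i in self._active(x))

    def train(self, x, target, lr=0.5):
        err = target - self.predict(x)
        for i in self._active(x):
            self.w[i] += lr * err / self.g   # distribute the correction locally

# Learn one period of a sine wave from 50 samples:
cmac = CMAC()
f = lambda x: math.sin(2 * math.pi * x)
for _ in range(200):
    for k in range(50):
        x = k / 49.0
        cmac.train(x, f(x))
err_at_peak = abs(cmac.predict(0.25) - f(0.25))
```

Because only `g` of the 64 weights change per update, learning is fast and local, which is the property the fuzzy CMAC of the thesis builds on.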
Shih, Cheng-Ting and 施政廷. "Metal artifact correction using model-based images". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/gs2qa4.
Central Taiwan University of Science and Technology
Institute of Radiological Science
97
Computed tomography (CT) provides a variety of diagnostic information that is beneficial and convenient for clinical medicine today. However, high-density metal implants in CT scans induce metal artifacts and compromise image quality. In this study, we propose a model-based metal artifact correction method. First, we build a model image using the k-means clustering technique and remove errors within the clustering results using local statistics of spatial information and image inpainting. The difference between the original image and the model image is then calculated, and the projection data of the original and model images are combined by a weighting factor estimated from an exponential weighting function. Finally, the corrected image is reconstructed using the filtered back-projection method. Four case images, from scans of a cylindrical water phantom, a pelvis, an oral cavity and a hip joint, were used to test the correction ability of the algorithm. All of the correction results show that the algorithm effectively removes the metal artifacts arising from metal objects and significantly improves image continuity and uniformity. Furthermore, the surface-rendering and volume-rendering results of the oral and pelvic CT images were also recovered. We conclude that the metal artifact correction method proposed in this study is useful for reducing metal artifacts.
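The projection-domain blending step (combining original and model projection data with an exponentially decaying weight) might look like the following sketch on a single detector row; the distance-to-metal weighting and the decay constant are assumptions, not the thesis's exact formulation.

```python
import math

def blend_projections(p_orig, p_model, metal_mask, k=2.0):
    """Blend one detector row of original and model projection data.
    On the metal trace (mask = 1) the model data replaces the original;
    away from it the model's weight decays exponentially with distance,
    so uncorrupted measurements are preserved."""
    metal_idx = [j for j, m in enumerate(metal_mask) if m]
    out = []
    for i, (po, pm) in enumerate(zip(p_orig, p_model)):
        if not metal_idx:
            out.append(po)
            continue
        d = min(abs(i - j) for j in metal_idx)   # distance to metal trace
        w = math.exp(-k * d)                     # 1 on the trace, -> 0 away
        out.append(w * pm + (1 - w) * po)
    return out

# A spike at index 4 (metal-corrupted sample) is replaced by the smooth
# model value, while samples far from the metal stay as measured:
p_orig = [2.0, 2.0, 2.0, 2.0, 10.0, 2.0, 2.0, 2.0, 2.0]
p_model = [2.0] * 9
mask = [0, 0, 0, 0, 1, 0, 0, 0, 0]
corrected = blend_projections(p_orig, p_model, mask)
```

In a full implementation this blending would run over every sinogram row before filtered back-projection reconstructs the corrected image.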
Yang, Ru-Yi and 楊儒易. "Hotspot Guided Model-based Optical Proximity Correction". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/77796771308638049052.
Chung Yuan Christian University
Institute of Information Engineering
99
In recent years, the semiconductor manufacturing process has made great progress. To avoid lithography hotspots and enhance integrated circuit (IC) yield, model-based optical proximity correction (model-based OPC) can be used to improve image fidelity and printability. The most vexing problem is the time-consuming optical simulation in model-based OPC, so a tradeoff must be made between execution time and the accuracy of the OPC procedure. This thesis proposes a model-based OPC flow roughly divided into three major parts: first, a fast lithography simulation technique to obtain the mask aerial image efficiently; second, a scanning method that covers the whole mask design with a partitioning technique; third, a self-defined hotspot cost for each partition region that controls the convergence of the model-based OPC feedback system, combined with control factors that adjust solution quality. With these approaches and a well-designed data structure, the procedure reduces the calculation time of model-based OPC and effectively improves mask fidelity and printability. The experimental results show that our model-based OPC obtains a high-resolution solution and completes within the convergence criteria we set.
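The feedback loop at the heart of model-based OPC (simulate the printed image, compare to the target, bias the mask, repeat) can be illustrated with a 1-D toy in which the aerial image of a line is a Gaussian-blurred rectangle; every constant below is an illustrative assumption, not a calibrated lithography model.

```python
import math

def printed_width(mask_width, blur=30.0, thresh=0.5):
    """Toy aerial-image model: a line of width `mask_width` imaged through
    Gaussian blur prints where the intensity exceeds `thresh`.  The blurred
    profile of a unit rectangle is a difference of Gaussian CDFs."""
    def intensity(x):
        Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
        return (Phi((mask_width / 2 - x) / blur)
                + Phi((mask_width / 2 + x) / blur) - 1.0)
    # Bisect for the edge position where intensity crosses the threshold.
    lo, hi = 0.0, mask_width / 2 + 5 * blur
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if intensity(mid) > thresh else (lo, mid)
    return 2 * lo

def opc_bias(target_width, iters=20, gain=1.0):
    """Model-based OPC loop: repeatedly simulate, then bias the mask by
    the remaining error until the printed width matches the target."""
    mask = target_width
    for _ in range(iters):
        err = target_width - printed_width(mask)
        mask += gain * err        # feedback correction
    return mask

biased_mask = opc_bias(100.0)   # slightly larger than 100: blur shrinks lines
```

Production OPC replaces this scalar loop with per-segment edge biasing over a full 2-D mask, but the simulate/compare/correct structure is the same.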
Chang, Yun-Hua and 張芸華. "The Attitude Correction Model on High Technology Products". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/35418853930164420160.
National Tsing Hua University
Institute of Technology Management
93
People occasionally attempt to correct their initial perceptions or judgments because of potentially biasing factors. For example, when people read newspapers or watch television, it is not unusual for a given endorser to endorse many different products or brands in the same period of time. When such multiple endorsement occurs, people may judge the advertisement differently from when the endorser endorses one and only one product or brand. The advertising literature has emphasized endorser attributes, attitude toward the advertisement and corresponding strategies for practitioners, rather than the information processing that occurs when a given endorser is repeatedly exposed to audiences. On the other hand, even though the psychology literature has developed solid evidence explaining the mechanisms of attitude change, previous studies that examined attitude change and bias correction only used prompts to make subjects aware of the biasing factors, referred to in this study as "enforced correction". In reality, however, people are unlikely to encounter such prompts in advertisements; that is, people in a regular consumption setting are unlikely to engage in the "enforced correction" studied in social psychology. Instead, they are more likely to correct perceived biases by themselves, referred to here as the "self-activated correction" process. This study proposes that the correction patterns resulting from the self-activated correction process will be quite different from those resulting from enforced correction. A 2x2x2 (positive vs. neutral endorser / high vs. low involvement / enforced vs. self-activated correction process) experiment will be conducted to investigate the direction and degree of correction for perceived bias (i.e., multiple endorsement) in persuasion situations. Students at Sung-Shang Vocation School will be asked to express their attitudes about a product after being exposed to a magazine under high or low product involvement, and with or without an explicit correction instruction (i.e., enforced or self-activated correction). The advertisements will carry the same arguments for the product and feature either the same or different endorsers; that is, a subject reads a persuasive message from a famous (positive) or unfamiliar (neutral) endorser who endorses a particular product (a notebook computer or a non-notebook product) in the experiment booklet. This research aims to uncover the differences between the two correction processes.
Wang, Chun-Kun and 王俊昆. "Automatic Validation and Correction of OpenMP Tasking Model". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/35853108475132584005.
National Chung Cheng University
Institute of Information Engineering
99
Shared-memory multiprocessor architecture is becoming a mainstream trend in modern computer systems, and OpenMP (Open Multi-Processing) is one of the most important programming approaches for this architecture. OpenMP provides a simple and flexible interface for developing portable and scalable parallel applications in C, C++ and Fortran. In addition, the OpenMP tasking model introduced in the OpenMP 3.0 standard allows programmers to exploit the parallelism of irregular and dynamic structures in programs. According to the design philosophy of OpenMP, programmers are expected to analyze data dependencies, race conditions and deadlocks themselves, and to use the correct OpenMP directives and APIs to produce a conforming program. Handling OpenMP directives correctly becomes increasingly difficult and error-prone as programs grow more complex. In this thesis, we propose an algorithm for automatic validation and correction of the OpenMP tasking model. The algorithm has been implemented on top of the ROSE compiler infrastructure, and experimental results show that the proposed technique can successfully validate and correct the tested benchmark programs.
Noy, Dominic. "Parameter estimation of the linear phase correction model by mixed-effects models". Master's thesis, 2017. http://hdl.handle.net/1822/50021.
The control of human motor timing is captured by cognitive models that make assumptions about the underlying information processing mechanisms. A paradigm for its study is the Sensorimotor Synchronization (SMS) task, in which an individual is required to synchronize the movements of an effector, such as a finger, with the repetitive onsets of an oscillating external event. The Linear Phase Correction model (LPC) is a cognitive model that captures the asynchrony dynamics between the finger taps and the event onsets. It assumes cognitive processes that are modeled as independent random variables (perceptual delays, motor delays, timer intervals). Methods exist that estimate the model parameters from the asynchronies recorded in SMS tasks. However, while many natural situations involve only very short synchronization periods, the previous methods require long asynchrony sequences for unbiased estimation, and depending on the task, long records may be hard to obtain experimentally. Moreover, in typical SMS tasks, records are taken repeatedly to reduce biases, yet by averaging parameter estimates from multiple observations, the existing methods do not exploit all available information most appropriately. The present work therefore proposes a new parameter estimation approach that integrates multiple asynchrony sequences. Based on simulations from the LPC model, we first demonstrate that existing parameter estimation methods are prone to bias when the synchronization periods become shorter. Second, we present an extended Linear Model (eLM) that integrates multiple sequences within a single model and estimates the model parameters of short sequences with a clear reduction of bias. Finally, using Mixed-Effects Models (MEM), we show that parameters can also be retrieved robustly when there is between-sequence variability in their expected values.
Since such between-sequence variability is common in experimental and natural settings, we propose a method that increases the applicability of the LPC model. The method is able to reduce biases due, for example, to fatigue or attentional lapses, providing an experimental control that previous methods cannot.
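The LPC model's asynchrony dynamics are commonly written as A(k+1) = (1 - alpha) A(k) + T(k) - period + M(k+1) - M(k), with timer intervals T and motor delays M as independent random variables. A minimal simulation sketch follows; all parameter values are illustrative, not estimates from the thesis.

```python
import random

def simulate_lpc(n, alpha=0.5, timer_mean=500.0, timer_sd=10.0,
                 motor_sd=5.0, period=500.0, seed=1):
    """Simulate n asynchronies from the Linear Phase Correction model:
        A[k+1] = (1 - alpha) * A[k] + T[k] - period + M[k+1] - M[k]
    where T are timer intervals and M are motor delays, both modeled
    as independent normal random variables."""
    rng = random.Random(seed)
    asyncs = [0.0]                       # start in phase
    m_prev = rng.gauss(0.0, motor_sd)
    for _ in range(n - 1):
        t = rng.gauss(timer_mean, timer_sd)
        m = rng.gauss(0.0, motor_sd)
        asyncs.append((1 - alpha) * asyncs[-1] + t - period + m - m_prev)
        m_prev = m
    return asyncs

# With timer mean equal to the stimulus period, asynchronies fluctuate
# around zero; the phase-correction gain alpha keeps them bounded.
asyncs = simulate_lpc(200)
```

Fitting procedures like those discussed above would take sequences such as `asyncs` as input and try to recover `alpha`, `timer_sd` and `motor_sd`.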
Fundação para a Ciência e Tecnologia (FCT) - Project UID/MAT/00013/2013
Chen, Chuan-Yi and 陳川鎰. "Study of Correction Model for Correcting Residual Capacity of Lithium-Ion Battery by Current Compensation Method". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/38dc6w.
National Taiwan University of Science and Technology
Graduate Institute of Automation and Control
107
Nowadays, the increasingly wide application of lithium batteries demands more accurate estimation of remaining capacity. Coulomb counting is the method most commonly used in the market to estimate lithium battery capacity, but in practice it becomes increasingly inaccurate, and for this reason many studies on compensation methods have been conducted. A closer look, however, reveals widespread shortcomings, with two problems standing out: first, the compensation required after battery degradation is too complicated and time-consuming for ordinary users to perform; second, there is little discussion or research on the interaction between current and charge/discharge capacity, which ultimately enlarges the error of the capacity estimate. On this basis, simple and accessible charge/discharge data are used to construct a mathematical model of state of charge (SOC). The model is made general enough to apply to dissimilar batteries, so that a new battery can establish its own model of remaining capacity after several charge/discharge cycles. Moreover, each cell can keep correcting its model during use, improving the accuracy of the estimate. By reacting in advance and constantly adjusting the model, it can also match an aged battery, allowing every cell to maintain a customized mathematical model, and it lets users optimize charge and discharge parameters to slow down battery degradation. The study empirically verifies that the error over a complete discharge is within 2% and the error after 50% of discharge is about 3%. This mathematical model is applicable to manufacturers using lithium batteries, such as producers of batteries, cell phones and electric vehicles, and can be further developed into software, chargers and the like.
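Basic Coulomb counting, the baseline method the thesis improves on, simply integrates current over time. A sketch follows; the charge-efficiency compensation factor is an assumed illustration of the kind of correction the thesis studies, not its actual model.

```python
def coulomb_count(soc0, capacity_ah, samples, efficiency=0.99):
    """Basic Coulomb counting: integrate current over time.

    soc0        : initial state of charge in [0, 1]
    capacity_ah : nominal cell capacity in ampere-hours
    samples     : iterable of (current_A, dt_s); discharge current < 0
    efficiency  : assumed charge-efficiency compensation factor
    """
    soc = soc0
    for current, dt in samples:
        dq_ah = current * dt / 3600.0      # charge moved in this step (Ah)
        if current > 0:
            dq_ah *= efficiency            # losses while charging
        soc = max(0.0, min(1.0, soc + dq_ah / capacity_ah))
    return soc

# Discharging a fully charged 2 Ah cell at 1 A for one hour uses half
# of its capacity:
print(coulomb_count(1.0, 2.0, [(-1.0, 3600.0)]))  # 0.5
```

The drift that accumulates in this open-loop integration is exactly why the thesis builds a self-correcting SOC model on top of it.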