Dissertations / Theses on the topic "Model correction"

Follow this link to see other types of publications on the topic: Model correction.

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

Consult the top 50 dissertations / theses (master's and doctoral) for research on the topic "Model correction".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read the abstract (summary) of the work online, if it is available in the metadata.

Browse dissertations / theses from many areas of research and compile a correct bibliography.

1

Bäckström, Fredrik, and Anders Ivarsson. "Meta-Model Guided Error Correction for UML Models". Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8746.

Full text
Abstract:

Modeling is a complex process which is quite hard to do in a structured and controlled way. Many companies provide a set of guidelines for model structure, naming conventions and other modeling rules. Using meta-models to describe these guidelines makes it possible to check whether or not a UML model follows the guidelines. Providing this error checking of UML models is only one step on the way to making modeling software an even more valuable and powerful tool.

Moreover, by providing correction suggestions and automatic correction of these errors, we try to give the modeler as much help as possible in creating correct UML models. Since the area of model correction based on meta-models has not been researched earlier, we have taken an explorative approach.

The aim of the project is to create an extension of the program MetaModelAgent, by Objektfabriken, which is a meta-modeling plug-in for IBM Rational Software Architect. The thesis shows that error correction of UML models based on meta-models is a possible way to provide automatic checking of modeling guidelines. The developed prototype is able to give correction suggestions and automatic correction for many types of errors that can occur in a model.

The results imply that meta-model guided error correction techniques should be further researched and developed to enhance the functionality of existing modeling software.

APA, Harvard, Vancouver, ISO, and other styles
2

Kokkola, N. "A double-error correction computational model of learning". Thesis, City, University of London, 2017. http://openaccess.city.ac.uk/18838/.

Full text
Abstract:
In this thesis, the Double Error model, a general computational model of real-time learning, is presented. It builds upon previous real-time error-correction models and assumes that associative connections form not only between stimuli and reinforcers, but between all types of stimuli in a connectionist network. The stimulus representation uses temporally-distributed elements with memory traces, and a process of expectation-based attentional modulation for both reinforcers and non-reinforcing stimuli is introduced. A modified error-correction learning rule is proposed, which incorporates an error term for both the predicted and the predicting stimulus. The static asymptote of learning familiar from other models of learning is replaced by a similarity measure between the activities of said stimuli, resulting in more temporally correlated stimulus representations forming stronger associative links. Associative retrieval based on previously formed associative links results in the model predicting mediated learning and pre-exposure effects. As a general model of learning, it accounts for phenomena predicted by extant learning models. For instance, its usage of error-correction learning produces a natural account of cue-competition effects such as blocking and overshadowing. Its elemental framework, which incorporates overlapping sets of elements to represent stimuli, leads to it predicting non-linear discriminations including biconditional discriminations and negative patterning. The observation that adding a cue to an excitatory compound stimulus leads to a lower generalization decrement as compared to removing a cue from said compound also follows from this representational assumption. The model further makes a number of unique predictions. The apparent contradiction of mediated learning in backward blocking and mediated conditioning proceeding in opposite directions is predicted through the model’s dynamic asymptote. Latent inhibition is accounted for as occurring through both learning and selective attention. The selective attention of the model likewise produces emergent effects when instantiated in the real-time dynamics of the model, predicting that the relatively best predictor of an outcome can sustain the largest amount of attention when compared to poorer predictors of said outcome. The model is evaluated theoretically, through simulations of learning experiments, and mathematically to demonstrate its generality and formal validity. Further, a simplified version of the model is contrasted against other models on a simple artificial classification task, showcasing the power of the fully-connected nature of the model, as well as its second error term in enabling the model’s performance as a classifier. Finally, numerous avenues of future work have been explored. I have completed a proof-of-concept deep recurrent network extension of the model, instantiated with reference to machine learning theory, and applied the second error term of the model to modulating backpropagation through time of a vanilla RNN. Both the former and latter were applied to a natural language processing task.
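The rule summarized in this abstract is easier to see in code. Below is a deliberately simplified, hypothetical illustration of an error-correction update that uses two error terms, one for the predicted stimulus and one for the predicting stimulus, in a fully connected stimulus network. The update equation, learning rate and attention factors are assumptions for illustration only; they are not the Double Error model's actual equations.

```python
import numpy as np

def double_error_update(W, x, alphas, lr=0.05):
    """One toy update of an all-to-all associative weight matrix W.

    x      : current stimulus activations (reinforcers included).
    W      : W[i, j] is the association from stimulus i to stimulus j.
    alphas : per-stimulus attention/salience factors.
    Two error terms appear: one for the predicted stimulus j and one for
    the predicting stimulus i (illustrative only, not the thesis's rule).
    """
    pred = W.T @ x                 # what the network predicts for each stimulus
    err_out = x - pred             # error on the predicted stimuli
    err_in = x - W @ x             # error on the predicting (retrieved) stimuli
    n = len(x)
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i, j] += lr * alphas[i] * x[i] * err_out[j] * (1.0 + abs(err_in[i]))
    return W

# Tiny demo: three mutually present stimuli repeatedly paired.
W = np.zeros((3, 3))
alphas = np.ones(3)
for _ in range(50):
    W = double_error_update(W, np.array([1.0, 1.0, 1.0]), alphas)
print(np.round(W, 2))
```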
APA, Harvard, Vancouver, ISO, and other styles
3

Bulygina, Nataliya. "Model Structure Estimation and Correction Through Data Assimilation". Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/195345.

Full text
Abstract:
The main philosophy underlying this research is that a model should constitute a representation of both what we know and what we do not know about the structure and behavior of a system. In other words, it should summarize, as far as possible, both our degree of certainty and degree of uncertainty, so that it facilitates statements about prediction uncertainty arising from model structural uncertainty. Based on this philosophy, the following issues were explored in the dissertation:
- Identification of a hydrologic system model based on assumptions about the perceptual and conceptual model structure only, without strong additional assumptions about its mathematical structure
- Development of a novel data assimilation method for extraction of mathematical relationships between modeled variables using a Bayesian probabilistic framework, as an alternative to up-scaling of governing equations
- Evaluation of the uncertainty in predicted system response arising from three uncertainty types: uncertainty caused by initial conditions, uncertainty caused by inputs, and uncertainty caused by mathematical structure
- Merging of theory and data to identify a system, as an alternative to parameter calibration and state-updating approaches
- Possibility of correcting existing models and including descriptions of uncertainty about their mapping relationships using the proposed method
- Investigation of a simple hydrological conceptual mass balance model with two-dimensional input, one-dimensional state and two-dimensional output at watershed scale and different temporal scales using the method
APA, Harvard, Vancouver, ISO, and other styles
4

Hu, Zhongbo. "Atmospheric artifacts correction for InSAR using empirical model and numerical weather prediction models". Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/668264.

Full text
Abstract:
InSAR has proven its unprecedented ability and merits for monitoring ground deformation on a large scale with centimeter- to millimeter-scale accuracy. However, several factors affect the reliability and accuracy of its applications. Among them, atmospheric artifacts due to spatial and temporal variations of the atmosphere state often add noise to interferograms. Therefore, atmospheric artifact mitigation remains one of the biggest challenges to be addressed in the InSAR community. State-of-the-art research works have revealed that atmospheric artifacts can be partially compensated with empirical models, temporal-spatial filtering approaches in InSAR time series, pointwise GPS zenith path delay, and numerical weather prediction models. In this thesis, firstly, we further develop a covariance-weighted linear empirical model correction method. Secondly, a realistic LOS-direction integration approach based on global reanalysis data is employed and comprehensively compared with the conventional method that integrates along the zenith direction. Finally, the realistic integration method is applied to local WRF numerical forecast model data. Moreover, detailed comparisons between different global reanalysis data and the local WRF model are assessed. In terms of empirical model correction methods, many publications have studied correcting the stratified tropospheric phase delay by assuming a linear model between it and topography. However, most of these studies have not considered the effect of turbulent atmospheric artefacts when adjusting the linear model to the data. In this thesis, an improved technique that minimizes the influence of turbulent atmosphere in the model adjustment has been presented. In the proposed algorithm, the model is adjusted to the phase differences of pixels instead of using the unwrapped phase of each pixel. In addition, the different phase differences are weighted as a function of their APS covariance, estimated from an empirical variogram, to reduce in the model adjustment the impact of pixel pairs with significant turbulent atmosphere. The performance of the proposed method has been validated with both simulated and real Sentinel-1 SAR data on Tenerife island, Spain. Considering methods that use meteorological observations to mitigate the APS, an accurate realistic computing strategy utilizing global atmospheric reanalysis data has been implemented. With this approach, the realistic LOS path between the satellite and the monitored points is considered, rather than converting from the zenith path delay. Compared with the zenith-delay-based method, the biggest advantage is that it can avoid errors caused by anisotropic atmospheric behaviour. The accurate integration method is validated with Sentinel-1 data in three test sites: Tenerife island, Spain; Almeria, Spain; and Crete island, Greece. Compared to the conventional zenith method, the realistic integration method shows great improvement. A variety of global reanalysis data are available from different weather forecasting organizations, such as ERA-Interim, ERA5 and MERRA2. In this study, the realistic integration mitigation method is assessed on these different reanalysis data. The results show that these data can mitigate the APS to some extent in most cases. The assessment also demonstrates that ERA5 performs the best statistically, compared to the other global reanalysis data.
Moreover, as local numerical weather forecast models can predict high-spatial-resolution atmospheric parameters, they have the potential to achieve APS mitigation. In this thesis, the realistic integration method is also employed on the local WRF model data in the Tenerife and Almeria test sites. However, it turns out that the WRF model performs worse than the original global reanalysis data.
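As a point of reference for the covariance-weighted empirical correction summarized above (a linear phase-topography model adjusted on pixel-pair phase differences, with pairs down-weighted according to an assumed APS covariance), here is a minimal weighted least-squares sketch. The exponential covariance model, the random pair sampling and all variable names are illustrative assumptions, not the algorithm validated in the thesis.

```python
import numpy as np

def fit_linear_phase_topo(phase, height, xy, sill=1.0, rng_m=2000.0, n_pairs=5000, seed=0):
    """Fit phase ~ k * height using pixel-pair differences.

    Working on differences between pixel pairs removes the unknown constant
    offset; each pair is weighted with an assumed exponential APS covariance
    so that pairs likely dominated by turbulent atmosphere count less.
    """
    rs = np.random.default_rng(seed)
    i = rs.integers(0, len(phase), n_pairs)
    j = rs.integers(0, len(phase), n_pairs)
    keep = i != j
    i, j = i[keep], j[keep]

    dphi = phase[i] - phase[j]                      # phase differences
    dh = height[i] - height[j]                      # height differences
    dist = np.linalg.norm(xy[i] - xy[j], axis=1)    # pair separation

    # Variance of a difference under an exponential covariance C(d) = sill*exp(-d/range):
    # var(dphi_turb) = 2 * (sill - C(d))
    var_pair = 2.0 * sill * (1.0 - np.exp(-dist / rng_m))
    w = 1.0 / np.maximum(var_pair, 1e-6)

    # Weighted least squares for the single slope k (no intercept on differences)
    return np.sum(w * dh * dphi) / np.sum(w * dh * dh)

# Usage with synthetic data (purely illustrative):
n = 2000
xy = np.random.rand(n, 2) * 10000.0
height = np.random.rand(n) * 1500.0
phase = 0.004 * height + np.random.randn(n) * 0.3
print(fit_linear_phase_topo(phase, height, xy))
```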
APA, Harvard, Vancouver, ISO, and other styles
5

Pointoin, Barry William. "Model-based randoms correction for 3D positron emission tomography". Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31046.

Full text
Abstract:
Random coincidences (randoms) are frequently a major source of image degradation and quantitative inaccuracies in positron emission tomography. Randoms occurring in the true coincidence window are commonly corrected for by subtracting the randoms measured in a delayed coincidence window. This places additional processing demands on the tomograph and increases the noise in the corrected data. A less noisy randoms estimate may be obtained by measuring individual detector singles rates, but few tomographs have this capability. This work describes a new randoms correction method that uses the singles rates from an analytic calculation, rather than measurements. This is a logical and novel extension of the model-based methods presently used for scatter correction. The singles calculation uses a set of sample points randomly generated within the preliminary reconstructed radioactivity image. The contribution of the activity at each point to the singles rate in every detector is calculated using a single photon detection model, producing an estimate of the singles distribution. This is scaled to the measured global singles rate and used to calculate the randoms distribution, which is subtracted from the measured image data. This method was tested for a MicroPET R4 tomograph. Measured and calculated randoms distributions were compared using count profiles and quantitative figures of merit for a set of phantom and animal studies. Reconstructed images, corrected with measured and calculated randoms, were also analysed. The calculation reproduced the measured randoms rates to within ≤ 1.4% for all realistic studies. The calculated randoms distributions showed excellent agreement with the measured ones, except that the calculated sinograms were smooth. Images corrected with both methods showed no significant differences due to biases. However, in the situations tested, no significant difference in the noise level of the reconstructed images was detected, due to the low randoms fractions of the acquired data. The model-based method of randoms correction uses only the measured image data and the global singles rate to produce smooth and accurate randoms distributions and therefore has much lower demands on the tomograph than other techniques. It is also expected to contribute to noise reduction in situations involving high randoms fractions.
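For orientation, singles-based randoms estimation rests on the standard relation R_ij ≈ 2·τ·S_i·S_j between the randoms rate on the line joining detectors i and j, the coincidence window τ, and the detector singles rates S_i and S_j. The sketch below applies that relation to a hypothetical array of calculated per-detector singles rates and rescales them to a measured global singles rate, loosely in the spirit of the model-based calculation described above; it is not the thesis's implementation.

```python
import numpy as np

def randoms_from_singles(singles, tau, measured_global_singles=None):
    """Estimate the randoms rate for every detector pair from singles rates.

    singles : 1-D array of calculated per-detector singles rates (counts/s).
    tau     : coincidence time window in seconds (e.g. 6e-9 for a 12 ns window).
    If a measured global singles rate is given, the calculated singles are
    first scaled so that their sum matches it.
    """
    singles = np.asarray(singles, dtype=float)
    if measured_global_singles is not None:
        singles = singles * (measured_global_singles / singles.sum())
    # R_ij = 2 * tau * S_i * S_j for i != j
    R = 2.0 * tau * np.outer(singles, singles)
    np.fill_diagonal(R, 0.0)
    return R

# Illustrative numbers only:
S = np.full(64, 5.0e4)                  # 64 detectors, 50 kcps singles each
R = randoms_from_singles(S, tau=6e-9, measured_global_singles=3.3e6)
print(R.sum() / 2)                      # total randoms rate over unique pairs
```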
Faculty of Science; Department of Physics and Astronomy; Graduate
APA, Harvard, Vancouver, ISO, and other styles
6

Molin, Simon. "House Price Dynamics in Sweden : Vector error-correction model". Thesis, Umeå universitet, Nationalekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-172367.

Full text
Abstract:
Movements in house prices can have effects on individuals, financial markets, and the whole economy. After the rapid increase in house prices worldwide since the mid-1990s and after the financial crisis in 2008, many studies have investigated house price dynamics. Furthermore, real house prices in Sweden have increased by more than 200% since the mid-1990s up until today. This study takes a closer look at the fundamental determinants of house prices to investigate both the long- and short-run dynamics of Swedish house prices. The method used is a vector error-correction model, which exposes both the long- and short-run dynamics of house prices. The long-run results show that Swedish house prices are currently not overvalued. Furthermore, in the short run, the results suggest that house prices adjust towards their equilibrium level by 7.9% in each quarter.
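For readers who want to see what such an estimation could look like in practice, here is a minimal sketch using the VECM implementation in statsmodels. The file name and column names are placeholders, and in an actual analysis the lag order and cointegration rank would be chosen with the same care as in the thesis.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank, select_order

# Assumed quarterly DataFrame with (log) house prices and fundamentals;
# "sweden_housing.csv" and the column names are placeholders.
df = pd.read_csv("sweden_housing.csv", index_col=0, parse_dates=True)

lag = select_order(df, maxlags=4, deterministic="ci").aic       # lag order by AIC
rank = select_coint_rank(df, det_order=0, k_ar_diff=lag).rank   # Johansen trace test

model = VECM(df, k_ar_diff=lag, coint_rank=rank, deterministic="ci")
res = model.fit()

print(res.beta)    # long-run cointegrating relation(s)
print(res.alpha)   # adjustment coefficients: an alpha of -0.079 on house prices
                   # would mean roughly 7.9% of a deviation is corrected each quarter
```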
APA, Harvard, Vancouver, ISO, and other styles
7

Zechman, Emily Michelle. "Improving Predictability of Simulation Models using Evolutionary Computation-Based Methods for Model Error Correction". NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-08082005-105133/.

Full text
Abstract:
Simulation models are important tools for managing water resources systems. An optimization method coupled with a simulation model can be used to identify effective decisions to efficiently manage a system. The value of a model in decision-making is degraded when that model is not able to accurately predict system response for new management decisions. Typically, calibration is used to improve the predictability of models to match more closely the system observations. Calibration is limited as it can only correct parameter error in a model. Models may also contain structural errors that arise from mis-specification of model equations. This research develops and presents a new model error correction procedure (MECP) to improve the predictive capabilities of a simulation model. MECP is able to simultaneously correct parameter error and structural error through the identification of suitable parameter values and a function to correct misspecifications in model equations. An evolutionary computation (EC)-based implementation of MECP builds upon and extends existing evolutionary algorithms to simultaneously conduct numeric and symbolic searches for the parameter values and the function, respectively. Non-uniqueness is an inherent issue in such system identification problems. One approach for addressing non-uniqueness is through the generation of a set of alternative solutions. EC-based techniques to generate alternative solutions for numeric and symbolic search problems are not readily available. New EC-based methods to generate alternatives for numeric and symbolic search problems are developed and investigated in this research. The alternatives generation procedures are then coupled with the model error correction procedure to improve the predictive capability of simulation models and to address the non-uniqueness issue. The methods developed in this research are tested and demonstrated for an array of illustrative applications.
APA, Harvard, Vancouver, ISO, and other styles
8

Kurachi, Masafumi, Robert Shrock, and Koichi Yamawaki. "Z boson propagator correction in technicolor theories with extended technicolor effects included". American Physical Society, 2007. http://hdl.handle.net/2237/11301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Maurer, Dustin. "Comparison of background correction in tiling arrays and a spatial model". Kansas State University, 2011. http://hdl.handle.net/2097/12130.

Full text
Abstract:
Master of Science
Department of Statistics
Susan J. Brown
Haiyan Wang
DNA hybridization microarray technologies have made it possible to gain an unbiased perspective of whole-genome transcriptional activity at a scale that is increasing rapidly. However, due to biologically irrelevant bias introduced by the experimental process and the machinery involved, correction methods are needed to restore the data to its true, biologically meaningful state. It is therefore important that the algorithms developed to remove such technical biases are accurate and robust. This report explores the concept of background correction in microarrays by using a real data set of five replicates of whole-genome tiling arrays hybridized with genetic material from Tribolium castaneum. It reviews the literature surrounding such correction techniques and explores some of the more traditional methods through implementation on the data set. Finally, it introduces an alternative approach, implements it, and compares it to the traditional approaches for the correction of such errors.
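As a rough, hypothetical illustration of what a spatial background correction can look like on array data (not one of the specific methods compared in the report), the sketch below estimates a slowly varying background over the chip coordinates with a median filter and subtracts it from the probe intensities; the grid layout, filter size and synthetic data are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def spatial_background_correct(intensities, size=15):
    """Subtract a slowly varying spatial background from a 2-D array of
    log-intensities laid out in chip coordinates (rows x cols)."""
    background = median_filter(intensities, size=size, mode="nearest")
    return intensities - background, background

# Synthetic chip: true signal plus a smooth spatial gradient artifact
rng = np.random.default_rng(0)
signal = rng.normal(8.0, 1.0, (200, 200))
gradient = np.linspace(0.0, 2.0, 200)[:, None]       # artifact across the slide
observed = signal + gradient
corrected, bg = spatial_background_correct(observed)
print(observed.std(), corrected.std())                # spatial artifact largely removed
```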
APA, Harvard, Vancouver, ISO, and other styles
10

Leach, Mark Daniel. "A discrete, stochastic model and correction method for bacterial source tracking". Online access for everyone, 2007. http://www.dissertations.wsu.edu/Thesis/Spring2007/m_leach_050207.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Granholm, George Richard 1976. "Near-real time atmospheric density model correction using space catalog data". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/44899.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2000.
Includes bibliographical references (p. 179-184).
Several theories have been presented in regard to creating a neutral density model that is corrected or calibrated in near-real time using data from space catalogs. These theories are usually limited to a small number of frequently tracked "calibration satellites" about which information such as mass and cross-sectional area is known very accurately. This work, however, attempts to validate a methodology by which drag information from all available low-altitude space objects is used to update any given density model on a comprehensive basis. The basic update and prediction algorithms and a technique to estimate true ballistic factors are derived in detail. A full simulation capability is independently verified. The process is initially demonstrated using simulated range, azimuth, and elevation observations so that issues such as the required number and types of calibration satellites, density of observations, and susceptibility to atmospheric conditions can be examined. Methods of forecasting the density correction models are also validated under different atmospheric conditions.
by George Richard Granholm.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
12

Mendoza, Juan Pablo. "Regions of Inaccurate Modeling for Robot Anomaly Detection and Model Correction". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1059.

Full text
Abstract:
To make intelligent decisions, robots often use models of the stochastic effects of their actions on the world. Unfortunately, in complex environments, it is often infeasible to create models that are accurate in every plausible situation, which can lead to suboptimal performance. This thesis enables robots to reason about model inaccuracies to improve their performance. The thesis focuses on model inaccuracies that are subtle (i.e., they cannot be detected from a single observation) and context-dependent (i.e., they affect particular regions of the robot’s state-action space). Furthermore, this work enables robots to react to model inaccuracies from sparse execution data. Our approach consists of enabling robots to explicitly reason about parametric Regions of Inaccurate Modeling (RIMs) in their state-action space. We enable robots to detect these RIMs from sparse execution data, to correct their models given these detections, and to plan accounting for uncertainty with respect to these RIMs. To detect and correct RIMs, we first develop algorithms that work effectively online in low-dimensional domains. An execution monitor compares outcome predictions made by a stochastic nominal model to outcome observations gathered during execution. The results of these comparisons are then used to detect RIMs of state-action space in which outcome observations deviate statistically significantly from the nominal model. Our detection algorithm is based on an explicit search for the parametric region of state-action space that maximizes an anomaly measure; once the maximum anomaly region is found, a statistical test determines whether the outcomes deviate significantly from the model. To correct detected RIMs, our algorithms apply corrections on top of the nominal model, only in the detected RIMs, treating them as newly-discovered behavioral modes of the domain. To extend this approach to high-dimensional domains, we develop a search-based Feature Selection algorithm. Based on the assumption that RIMs are intrinsically low-dimensional, but embedded in a high-dimensional space, this best-first search starts from the zero-dimensional projection of all the execution data, and searches by adding the single most promising feature to the boundary of the search tree. Our low-dimensional algorithms can then be applied to the resulting low-dimensional space to find RIMs in the robot’s planning model. We also enable robots to make plans that account for their uncertainty about the accuracy of their models. To do this, we first enable robots to represent distributions over possible RIMs in their planning models. With this representation, robots can plan accounting for the probability that their models are inaccurate at particular points in state-action space. Using this approach, we enable robots to effectively trade off actions that are known to produce reward with those that refine their models, potentially leading to higher future reward. We evaluate our approach on various complex robot domains. Our approach enables the CoBot mobile service robots to autonomously detect inaccuracies in their motion models, despite their high-dimensional state-action space: the CoBots detect that they are not moving correctly in particular areas of the building, and that their wheels are starting to fail when making turns. Our approach enables the CMDragons soccer robots to improve their passing and shooting models online in the presence of opponents with unknown weaknesses and strengths.
Finally, our approach enables a NASA spacecraft landing simulator to detect subtle anomalies, unknown to us beforehand, in its streams of high-dimensional sensor-output and actuator-input data.
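A stripped-down, one-dimensional sketch of the core detection idea is given below: search over intervals of a single state feature for the region that maximizes an anomaly score between the nominal model's predicted success probability and the observed outcomes, then test the deviation for statistical significance. The grid search, the anomaly score and the binomial test (via scipy) are simplifications assumed for illustration, not the algorithms developed in the thesis.

```python
import numpy as np
from scipy.stats import binomtest

def find_max_anomaly_interval(x, outcomes, p_model, n_grid=20):
    """Search intervals [a, b] of feature x for the largest deviation between
    the nominal model's predicted success rate p_model and observed outcomes."""
    edges = np.linspace(x.min(), x.max(), n_grid)
    best = None
    for ai in range(len(edges) - 1):
        for bi in range(ai + 1, len(edges)):
            mask = (x >= edges[ai]) & (x <= edges[bi])
            n = int(mask.sum())
            if n < 10:                                   # require some support
                continue
            obs_rate = outcomes[mask].mean()
            anomaly = abs(obs_rate - p_model) * np.sqrt(n)   # simple z-like score
            if best is None or anomaly > best[0]:
                best = (anomaly, edges[ai], edges[bi], n, int(outcomes[mask].sum()))
    _, a, b, n, k = best
    pval = binomtest(k, n, p_model).pvalue               # significance of the region
    return (a, b), pval

# Synthetic example: the model claims 80% success everywhere,
# but success actually drops for x in (0.6, 0.8).
rng = np.random.default_rng(1)
x = rng.random(1000)
p_true = np.where((x > 0.6) & (x < 0.8), 0.4, 0.8)
outcomes = rng.random(1000) < p_true
region, p = find_max_anomaly_interval(x, outcomes, p_model=0.8)
print(region, p)
```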
APA, Harvard, Vancouver, ISO, and other styles
13

Watkins, Yijing Zhang. "Image Compression and Channel Error Correction using Neurally-Inspired Network Models". OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1529.

Full text
Abstract:
Every day, an enormous amount of information is stored, processed and transmitted digitally around the world. Neurally-inspired compression models have been rapidly developed and researched as a solution to image processing tasks and channel error correction control. This dissertation presents a deep neural network (DNN) for gray high-resolution image compression and a fault-tolerant transmission system with channel error-correction capabilities. A feed-forward DNN implemented with the Levenberg-Marquardt learning algorithm is proposed and implemented for image compression. I demonstrate experimentally that the DNN not only provides better quality reconstructed images but also requires less computational capacity as compared to DCT Zonal coding, DCT Threshold coding, Set Partitioning in Hierarchical Trees (SPIHT) and Gaussian Pyramid. An artificial neural network (ANN) with improved channel error-correction rate is also proposed. The experimental results indicate that the implemented artificial neural network provides a superior error-correction ability by transmitting binary images over the noisy channel using Hamming and Repeat-Accumulate coding. Meanwhile, the network’s storage requirement is 64 times less than that of Hamming coding and 62 times less than that of Repeat-Accumulate coding. Thumbnail images contain higher frequencies and much less redundancy, which makes them more difficult to compress compared to high-resolution images. Bottleneck autoencoders have been actively researched as a solution to image compression tasks. However, I observed that thumbnail images compressed at a 2:1 ratio through bottleneck autoencoders often exhibit subjectively low visual quality. In this dissertation, I compared bottleneck autoencoders with two sparse coding approaches. Either 50% of the pixels are randomly removed or every other pixel is removed, each achieving a 2:1 compression ratio. In the subsequent decompression step, a sparse inference algorithm is used to in-paint the missing pixel values. Compared to bottleneck autoencoders, I observed that sparse coding with a random dropout mask yields decompressed images that are superior based on subjective human perception yet inferior according to pixel-wise metrics of reconstruction quality, such as PSNR and SSIM. With a regular checkerboard mask, decompressed images were superior as assessed by both subjective and pixel-wise measures. I hypothesized that alternative feature-based measures of reconstruction quality would better support my subjective observations. To test this hypothesis, I fed thumbnail images processed using either a bottleneck autoencoder or sparse coding with either checkerboard or random masks into a Deep Convolutional Neural Network (DCNN) classifier. Consistent with my subjective observations, I discovered that sparse coding with checkerboard and random masks supports on average 2.7% and 1.6% higher classification accuracy and 18.06% and 3.74% lower feature perceptual loss compared to bottleneck autoencoders, implying that sparse coding preserves more feature-based information. The optic nerve transmits visual information to the brain as trains of discrete events, a low-power, low-bandwidth communication channel also exploited by silicon retina cameras. Extracting high-fidelity visual input from retinal event trains is thus a key challenge for both computational neuroscience and neuromorphic engineering.
Here, we investigate whether sparse coding can enable the reconstruction of high-fidelity images and video from retinal event trains. Our approach is analogous to compressive sensing, in which only a random subset of pixels are transmitted and the missing information is estimated via inference. We employed a variant of the Locally Competitive Algorithm to infer sparse representations from retinal event trains, using a dictionary of convolutional features optimized via stochastic gradient descent and trained in an unsupervised manner using a local Hebbian learning rule with momentum. Static images, drawn from the CIFAR10 dataset, were passed to the input layer of an anatomically realistic retinal model and encoded as arrays of output spike trains arising from separate layers of integrate-and-fire neurons representing ON and OFF retinal ganglion cells. The spikes from each model ganglion cell were summed over a 32 msec time window, yielding a noisy rate-coded image. Analogous to how the primary visual cortex is postulated to infer features from noisy spike trains in the optic nerve, we inferred a higher-fidelity sparse reconstruction from the noisy rate-coded image using a convolutional dictionary trained on the original CIFAR10 database. Using a similar approach, we analyzed the asynchronous event trains from a silicon retina camera produced by self-motion through a laboratory environment. By training a dictionary of convolutional spatiotemporal features for simultaneously reconstructing differences of video frames (recorded at 22 Hz and 5.56 Hz) as well as discrete events generated by the silicon retina (binned at 484 Hz and 278 Hz), we were able to estimate high-frame-rate video from a low-power, low-bandwidth silicon retina camera.
APA, Harvard, Vancouver, ISO, and other styles
14

Cirineo, Tony, and Bob Troublefield. "STANDARD INTEROPERABLE DATALINK SYSTEM, ENGINEERING DEVELOPMENT MODEL". International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608398.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
This paper describes an Engineering Development Model (EDM) for the Standard Interoperable Datalink System (SIDS). This EDM represents an attempt to design and build a programmable system that can be used to test and evaluate various aspects of a modern digital datalink. First, an investigation was started of commercial wireless components and standards that could be used to construct the SIDS datalink. This investigation led to the construction of an engineering development model. This model presently consists of wire-wrap and prototype circuits that implement many aspects of a modern digital datalink.
APA, Harvard, Vancouver, ISO, and other styles
15

Silber, Frank. "Makroökonometrische Anpassungsanalyse im Vector-Error-Correction-Model (VECM) : Untersuchungen an ausgewählten Arbeitsmärkten /". Frankfurt am Main: Lang, 2003. http://www.gbv.de/dms/zbw/362076561.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Flouri, Dimitra. "Tracer-kinetic model-driven motion correction with application to renal DCE-MRI". Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/16485/.

Full text
Abstract:
A major challenge of image registration in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is the image contrast variation caused by the passage of the contrast agent. Tracer-kinetic model-driven motion correction is an attractive solution for DCE-MRI, but previous studies only use the 3-parameter modified Tofts model. Firstly, a generalisation based on a 4-parameter, 2-compartment tracer-kinetic model is presented. A practical limitation of these models is the need for non-linear least-squares (NLLS) fitting. This is prohibitively slow for image-wide parameter estimation, and is biased by the choice of initial values. To overcome this limitation, a fast linear least-squares (LLS) method to fit the two-compartment exchange and filtration models (2CFM) to the data is introduced. Simulations of normal and pathological data were used to evaluate the calculation time, accuracy and precision of the LLS against the NLLS method. Results show that the LLS method leads to a significant reduction in calculation times. Secondly, a novel tracer-kinetic model-driven motion correction algorithm is introduced which uses a 4-parameter, 2-compartment model to tackle the problem of image registration in 2D renal DCE-MRI. The core architecture of the algorithm can be briefly described as follows: the 2CFM is linearly fitted pixel by pixel and the model fit is used as the target for registration; then a free-form deformation model is used for pairwise co-registration of source and target images at the same time point. Another challenge that has been addressed is the computational complexity of non-rigid registration algorithms, which is reduced by precomputing steps to remove redundant calculations. Results in 5 subjects and simulated phantoms show that the algorithm is computationally efficient and improves the alignment of the data. The proposed registration algorithm is then translated to 3D renal dynamic MR data. Translation to 3D is, however, challenging due to ghosting artefacts caused by within-frame breathing motion. Results in 8 patients show that the algorithm effectively removes between-frame breathing motion despite significant within-frame artefacts. Finally, the effect of motion correction on clinical utility has been examined. Quantitative evaluation of single-kidney glomerular filtration rate derived from DCE-MRI against reference measurements shows a reduction of the bias, but precision is limited by within-frame artefacts. The suggested registration algorithm with a 4-parameter model is shown to be a computationally efficient approach which effectively removes between-frame motion in a series of 2D and 3D renal DCE-MRI data.
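To make the linear least-squares idea concrete: any linear two-compartment model driven by an arterial input and starting from zero signal satisfies a relation of the form ct = a1·I(ct) + a2·II(ct) + a3·I(ca) + a4·II(ca), where I(.) and II(.) denote single and double time integrals, so the coefficients can be obtained by ordinary linear least squares. The sketch below fits that linearized form to synthetic curves; the synthetic data are illustrative, and the model-specific conversion of the coefficients back to physiological parameters is omitted.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def lls_fit_linearized_2comp(t, ct, ca):
    """Linear least-squares fit of a linearized two-compartment model.

    Builds single and double cumulative integrals of the tissue curve ct
    and the arterial input ca, then solves a 4-coefficient linear system.
    """
    I_ct = cumulative_trapezoid(ct, t, initial=0.0)
    II_ct = cumulative_trapezoid(I_ct, t, initial=0.0)
    I_ca = cumulative_trapezoid(ca, t, initial=0.0)
    II_ca = cumulative_trapezoid(I_ca, t, initial=0.0)
    A = np.column_stack([I_ct, II_ct, I_ca, II_ca])
    coeffs, *_ = np.linalg.lstsq(A, ct, rcond=None)
    return coeffs

# Illustrative synthetic curves (not renal DCE-MRI data):
t = np.linspace(0.0, 300.0, 301)
ca = np.exp(-t / 60.0) * (t / 10.0)                              # toy arterial input
ct = 0.2 * cumulative_trapezoid(ca, t, initial=0.0) / 60.0        # toy tissue response
print(lls_fit_linearized_2comp(t, ct, ca))
```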
APA, Harvard, Vancouver, ISO, and other styles
17

Rosa, Cristian. "Vérification des performances et de la correction des systèmes distribués". Thesis, Nancy 1, 2011. http://www.theses.fr/2011NAN10113/document.

Full text
Abstract:
Distributed systems are in the mainstream of information technology. It has become standard to rely on multiple distributed units to improve the performance of an application, help tolerate component failures, or handle problems too large to fit in a single processing unit. The design of algorithms adapted to the distributed context is particularly difficult due to the asynchrony and the nondeterminism that characterize distributed systems. Simulation offers the ability to study the performance of distributed applications without the complexity and cost of real execution platforms. On the other hand, model checking makes it possible to assess the correctness of such systems in a fully automatic manner. In this thesis, we explore the idea of integrating a model checker with a simulator for distributed systems in a single framework, to gain both performance and correctness assessment capabilities. To deal with the state explosion problem, we present a dynamic partial order reduction (DPOR) algorithm that performs the exploration based on a reduced set of networking primitives, which makes it possible to verify programs written for any of the communication APIs offered by the simulator. This is only possible after the development of a full formal specification of the semantics of these networking primitives, which allows reasoning about the independence of the communication actions as required by the DPOR algorithm. We show through experimental results that our approach is capable of dealing with non-trivial, unmodified C programs written for the SimGrid simulator. Moreover, we propose a solution to the problem of scalability for CPU-bound simulations, envisioning the simulation of peer-to-peer applications with millions of participating nodes. Contrary to classical parallelization approaches, we propose parallelizing some internal steps of the simulation, while keeping the whole process sequential. We present a complexity analysis of the simulation algorithm, and we compare it to the classical sequential algorithm to obtain a criterion that describes in what situations a speed-up can be expected. An important result is the observation of the relation between the precision of the models used to simulate the hardware resources and the potential degree of parallelization attainable with this approach. We present several case studies that benefit from the parallel simulation, and we show the results of a simulation at unprecedented scale of the Chord peer-to-peer protocol, with two million nodes, executed on a single machine.
APA, Harvard, Vancouver, ISO, and other styles
18

Ercolani, Marco G. "Price uncertainty, investment and consumption". Thesis, University of Essex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Giroud, Xavier. "A Markov-Switching Equilibrium Correction Model for Intraday Futures and Stock Index Returns". St. Gallen, 2004. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/99630345001/$FILE/99630345001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Lindgren, Jonathan. "Modeling credit risk for an SME loan portfolio: An Error Correction Model approach". Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-136176.

Full text
Abstract:
Since the global financial crisis of 2008, several big regulations have been implemented to assure that banks follow sound risk management. Among these are the Basel II Accords that implement capital requirements for credit risk. The core measures of credit risk evaluation are the Probability of Default and Loss Given Default. The Basel II Advanced Internal-Based-Rating Approach allows banks to model these measures for individual portfolios and make their own evaluations. This thesis, in compliance with the Advanced Internal-Based-rating approach, evaluates the use of an Error Correction Model when modeling the Probability of Default. A model proven to be strong in stress testing. Furthermore, a Loss Given Default function is implemented that ties Probability of Default and Loss Given Default to systematic risk. The Error Correction Model is implemented on an SME portfolio from one of the "big four" banks in Sweden. The model is evaluated and stress tested with the European Banking Authority's 2016 stress test scenario and analyzed, with promising results.
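As a generic illustration of the error correction idea in this setting (not the bank data or the exact specification of the thesis), here is a minimal two-step, Engle-Granger-style sketch in Python: a long-run relation between a logit-transformed default rate and macro drivers is estimated first, and its lagged residual then enters a short-run regression in differences. The file and column names are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Assumed quarterly series: default rate "dr" plus macro drivers (placeholders).
df = pd.read_csv("sme_portfolio.csv", index_col=0, parse_dates=True)
y = np.log(df["dr"] / (1.0 - df["dr"]))                             # logit of default rate
X = sm.add_constant(df[["gdp_growth", "interest", "unemployment"]])

# Step 1: long-run (levels) relation; its residual is the error-correction term.
long_run = sm.OLS(y, X).fit()
ect = long_run.resid
print("ADF p-value of long-run residual:", adfuller(ect)[1])        # should be stationary

# Step 2: short-run dynamics in differences with the lagged error-correction term.
short = pd.concat([y.diff().rename("d_pd"),
                   df[["gdp_growth", "interest", "unemployment"]].diff(),
                   ect.shift(1).rename("ect_lag")], axis=1).dropna()
ecm = sm.OLS(short["d_pd"], sm.add_constant(short.drop(columns="d_pd"))).fit()
print(ecm.params)   # a negative ect_lag coefficient means deviations are corrected over time
```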
APA, Harvard, Vancouver, ISO, and other styles
21

Santana, Amarilio Luiz de. "Forecasts for collection of VAT in Ceará: a model analysis with error correction". Universidade Federal do Ceará, 2009. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=4340.

Full text
Abstract:
This research aims to offer the managers of the State of Ceará an additional tool for forecasting the monthly collection of the tax on the circulation of goods and services (ICMS), by means of a consistent econometric model with good predictive power. For that purpose, error correction models (ECMs) were used, with the cointegrating vector estimated by DOLS (Dynamic Ordinary Least Squares). The forecasts generated by the research confirm the suitability of the ECM for prediction, given its small margin of error. In addition, comparisons were made with the forecasts produced by SEFAZ-CE and with those of Rocha Neto (2008) based on ARIMA models; the model employed here proved more accurate than both the method used by the Secretariat of Finance and the ARIMA approach for forecasting monthly ICMS collections.
APA, Harvard, Vancouver, ISO, and other styles
22

Mazzocco, Philip James. "Moderators of the effects of mental imagery on persuasion: the cognitive resources model and the imagery correction model". Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1127050519.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xvi, 251 p.; also includes graphics. Includes bibliographical references (p. 157-174). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
23

Balucan, Phillip James 1977. "Model reduction of a set of elastic, nested gimbals by component mode selection criteria and static correction modes". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/17520.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2001.
Includes bibliographical references (p. 112-113).
Model reduction techniques provide a computationally inexpensive method for solving elastic dynamic problems with complex structures. The elastic nested gimbal problem is a problem which requires model reduction techniques as a means to reduce the dynamic equations. This is done using two methods: one technique employs mode ranking criteria to select the modes which influence the dynamics of the problem the most; the second involves the use of static correction modes along with vibration modes to simulate the dynamics of this nested gimbal model. A model of the structure is described in terms of a lumped-parameter finite element model. This mathematical model of the physical system serves as the basis for developing model reduction techniques for the nested gimbal problem. A truth model based on given initial conditions is used to compare the accuracy of the model-reduced problem. A number of model reduction theories are described and applied to the gimbal simulation. The equations for the mode ranking techniques and the static and vibration mode analysis are developed, as well as a quantitative error measure. Comparisons are made with the truth model using the mode ranking criteria based on the momentum coefficients and the frequency cutoff criteria. Test cases are also run using the static correction modes with vibration modes and static correction modes with the ranked vibration modes using momentum coefficients. The use of various static modes is discussed during the implementation of the static correction mode method. Applying the model reduction theories to a set of elastic, nested gimbals, the mode ranking criteria provide better results, based on the error measure, than the frequency cutoff criteria when the simulation is run using fewer than twenty-five modes. Using static modes along with ranked modes to represent the elastic dynamics of the problem does not provide better results than using the unranked vibration modes with the static modes. Modeling the dynamics using static correction modes with the unranked vibration modes provides the best results while using the least number of modes. It is advantageous to take into account the given conditions applied to the system when reducing the model of a complex dynamic problem.
by Phillip James Balucan.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
24

Lee, Shiyoung. "Effects of Input Power Factor Correction on Variable Speed Drive Systems". Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/26493.

Full text
Abstract:
The use of variable speed drive (VSD) systems in the appliance industry is growing due to the emerging high volume of fractional-horsepower VSD applications. Almost all appliance VSDs have no input power factor correction (PFC) circuits. This results in harmonic pollution of the utility supply which could be avoided. The impact of the PFC circuit on the overall drive system efficiency, harmonic content, magnitude of the system input current and input power factor is particularly addressed in this dissertation, along with the development of analytical methods applicable to the steady-state analysis of input power factor corrected VSD systems. Three different types of motors - the switched reluctance motor (SRM), the permanent magnet brushless dc motor (PMBDC) and the dc motor (DCM) - are employed in this study. The C-dump converter topology, a single-switch-per-phase converter, is adopted for the prototype SRM- and PMBDC-based VSD systems. The conventional full-bridge converter is used for DCM-based VSD systems. Four-quadrant controllers, utilizing PI speed and current control loops for the PMBDC- and DCM-based VSD systems, are developed and their design results are verified with experiment and simulation. A single-quadrant controller with a PI speed feedback loop is employed for the SRM-based VSD system. The analysis of each type of VSD system includes the development of loss models and the establishment of proper operational modes. The magnitude of the input current harmonic spectra is measured and compared with and without a front-end PFC converter. One electromagnetic compatibility (EMC) standard, IEC 1000-3-2, which describes the limits on harmonic current emission, is modified for a 120 V ac system. This modified standard is used as the reference to evaluate the measured input current harmonics. The magnitude of the input current harmonics of a VSD system is greatly reduced with PFC preregulators. While the input PFC circuit draws a near-sinusoidal current from the ac source, it lowers the overall VSD system efficiency and increases the cost of the overall system.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
25

Kim, Y. S., and R. Eng. "Estimation of Tec and Range of EMP Source Using an Improved Ionospheric Correction Model". International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/611957.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California
An improved ionospheric delay correction model for a transionospheric electromagnetic pulse (EMP) is used for estimating the total-electron-content (TEC) profile of the path and accurate ranging of the EMP source. For a known pair of time of arrival (TOA) measurements at two frequency channels, the ionospheric TEC information is estimated using a simple numerical technique. This TEC information is then used for computing ionospheric group delay and pulse broadening effect correction to determine the free space range. The model prediction is compared with the experimental test results. The study results show that the model predictions are in good agreement with the test results.
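For context, the first-order ionospheric group delay at frequency f is Δt ≈ 40.3·TEC/(c·f²), so a pair of TOA measurements at two frequencies determines both the TEC and the free-space range. The sketch below solves that textbook system for assumed numbers; it does not reproduce the paper's improved model, which also corrects for pulse broadening.

```python
C = 299_792_458.0          # speed of light, m/s
K = 40.3                   # ionospheric constant, m^3/s^2

def tec_and_range(t1, t2, f1, f2):
    """First-order dual-frequency solution.

    t1, t2 : measured times of arrival (s) at frequencies f1, f2 (Hz).
    Returns (TEC in electrons/m^2, free-space range in m)."""
    tec = C * (t1 - t2) / (K * (1.0 / f1**2 - 1.0 / f2**2))
    free_range = C * t1 - K * tec / f1**2
    return tec, free_range

# Illustrative numbers: 1000 km path, TEC = 5e17 el/m^2, channels at 150 and 400 MHz
true_R, true_tec = 1.0e6, 5.0e17
f1, f2 = 150e6, 400e6
t1 = true_R / C + K * true_tec / (C * f1**2)
t2 = true_R / C + K * true_tec / (C * f2**2)
print(tec_and_range(t1, t2, f1, f2))   # recovers approximately (5e17, 1e6)
```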
APA, Harvard, Vancouver, ISO, and other styles
26

Yerrabolu, Pavan. "Correction model based ANN modeling approach for the estimation of Radon concentrations in Ohio". University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1341604941.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Tunehed, Per. "Is the Swedish housing market overvalued? : An analysis using a Vector error correction model". Thesis, Umeå universitet, Nationalekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185129.

Full text
Abstract:
This thesis attempts to answer whether a bubble is growing on the Swedish housing market. This is done by assessing the extent to which supply and demand, represented by fundamentals, can explain the rise in Swedish house prices. Empirically, this is done by estimating a vector error-correction model using quarterly data stretching from Q1 2000 to Q4 2019. The model uses house prices as its dependent variable and disposable income, the interest rate, construction costs, financial assets, and employment as independent variables. The study finds that there is a long-run relationship between house prices and the independent variables, that this long-run relationship can explain the increase in house prices seen in Sweden over the last two decades, and that this suggests a housing bubble is unlikely. Furthermore, the model finds that, in the long run, house prices are positively associated with financial assets, and negatively associated with disposable income, interest rates, construction costs and the employment rate.
Gli stili APA, Harvard, Vancouver, ISO e altri
28

Switanek, Matthew B., Peter A. Troch, Christopher L. Castro, Armin Leuprecht, Hsin-I. Chang, Rajarshi Mukherjee e Eleonora M. C. Demaria. "Scaled distribution mapping: a bias correction method that preserves raw climate model projected changes". COPERNICUS GESELLSCHAFT MBH, 2017. http://hdl.handle.net/10150/624439.

Testo completo
Abstract (sommario):
Commonly used bias correction methods such as quantile mapping (QM) assume that the function of error correction values between modeled and observed distributions is stationary, or time invariant. This article finds that this function of the error correction values cannot be assumed to be stationary. As a result, QM lacks justification to inflate/deflate various moments of the climate change signal. Previous adaptations of QM, most notably quantile delta mapping (QDM), have been developed that do not rely on this assumption of stationarity. Here, we outline a methodology called scaled distribution mapping (SDM), which is conceptually similar to QDM, but more explicitly accounts for the frequency of rain days and the likelihood of individual events. The SDM method is found to outperform QM, QDM, and detrended QM in its ability to better preserve raw climate model projected changes to meteorological variables such as temperature and precipitation.
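For contrast with SDM, the sketch below shows plain empirical quantile mapping, the baseline method whose stationarity assumption the article criticizes; the quantile grid is an arbitrary choice and SDM's scaling of projected changes is not shown.

```python
import numpy as np

def quantile_map(obs_hist, mod_hist, mod_fut):
    """Basic empirical quantile mapping: transfer the historical
    model-vs-observation quantile relation to future model values."""
    quantiles = np.linspace(0.01, 0.99, 99)
    mod_q = np.quantile(mod_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    # map each future model value through the historical quantile relation
    return np.interp(mod_fut, mod_q, obs_q)
```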
Gli stili APA, Harvard, Vancouver, ISO e altri
29

Wegener, Duane Theodore. "The flexible correction model : using naive theories of bias to correct assessments of targets". Connect to resource, 1994. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1234615264.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
30

Johansson, Nils. "Estimation of fatigue life by using a cyclic plasticity model and multiaxial notch correction". Thesis, Linköpings universitet, Mekanik och hållfasthetslära, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158095.

Testo completo
Abstract (sommario):
Mechanical components often possess notches. These notches give rise to stress concentrations, which in turn increase the likelihood that the material will undergo yielding. The finite element method (FEM) can be used to calculate transient stress and strain to be used in fatigue analyses. However, since yielding occurs, an elastic-plastic finite element analysis (FEA) must be performed. If the loading sequence to be analysed with respect to fatigue is long, the elastic-plastic FEA is often not a viable option because of its high computational requirements. In this thesis, a method has been derived and implemented that estimates the elastic-plastic stress and strain response from an input elastic stress and strain, using plasticity modelling with the incremental Neuber rule. A numerical methodology to increase the accuracy when using the Neuber rule with cyclic loading has been proposed and validated for proportional loading. The results show fair albeit not ideal accuracy when compared to elastic-plastic finite element analysis. Different types of loading have been tested, including proportional and non-proportional loading as well as complex loadings with several load reversions. Based on the computed elastic-plastic stresses and strains, fatigue life is predicted by the critical plane method. Such a method has been reviewed, implemented and tested in this thesis. A comparison has been made between using a new damage parameter by Ince and an established damage parameter by Fatemi and Socie (FS). The implemented algorithm and damage parameters were evaluated by comparing the results of the program using either damage parameter to fatigue experiments for several different load cases, including non-proportional loading. The results are fairly accurate for both damage parameters, but the one by Ince tends to be slightly more accurate when no fitted constant for the FS damage parameter can be obtained.
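A minimal sketch of the monotonic (non-incremental) Neuber correction combined with a Ramberg-Osgood curve, which conveys the core idea behind the incremental Neuber rule used in the thesis; the material constants are placeholders and the cyclic, multiaxial treatment is not reproduced.

```python
from scipy.optimize import brentq

def neuber_local_stress(sigma_e, E=210e3, K=1200.0, n=0.2):
    """Solve Neuber's rule combined with a Ramberg-Osgood curve for the local
    elastic-plastic stress/strain at a notch, given the elastic (FEA) stress
    sigma_e. Units: MPa. Material constants are placeholders only."""
    target = sigma_e**2 / E                          # Neuber constant: sigma*eps
    def residual(sigma):
        eps = sigma / E + (sigma / K) ** (1.0 / n)   # Ramberg-Osgood strain
        return sigma * eps - target
    sigma = brentq(residual, 1e-6, sigma_e)          # local stress <= elastic stress
    eps = sigma / E + (sigma / K) ** (1.0 / n)
    return sigma, eps
```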
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Mohapatra, Sucheta. "Development and quantitative assessment of a beam hardening correction model for preclinical micro-CT". Thesis, University of Iowa, 2012. https://ir.uiowa.edu/etd/3500.

Testo completo
Abstract (sommario):
The phenomenon of x-ray beam hardening (BH) has a significant impact on preclinical micro-CT imaging systems. The causal factors are the polyenergetic nature of the x-ray beam used for imaging and the energy dependence of the linear attenuation coefficient of the imaged material. As the propagation length of the beam through the imaged object increases, lower-energy photons in the projected beam become preferentially absorbed. The beam "hardens" (its average energy increases) and progressively becomes more penetrating, causing underestimation of the attenuation coefficient. When this phenomenon is not accounted for during CT reconstruction, it results in images with nonuniform CT number values across regions of uniform density. It leads to severe errors in quantitative applications of micro-CT and degradation in the diagnostic quality of images. Hence, correction for the beam hardening effect is of foremost importance and has been an active area of research since the advent of micro-CT. The Siemens Inveon micro-CT system uses a common linearization approach for BH correction. It provides a set of standard default coefficients to be applied during CT reconstruction. However, our initial experiments with uniform water phantoms of varying diameters indicated that the correction coefficients provided by default in the Inveon system are applicable for imaging mouse-size (~28 mm) objects only. For larger objects the correction factors yielded incorrect CT values along with characteristic 'cupping' observed in the uniform region in the center of the phantom. This study provides an insight into the nature and characteristics of beam hardening on the Inveon CT system using water phantoms of varying sizes. We develop and test a beam hardening correction scheme based on linearization using cylindrical water phantoms of two different diameters - 28 mm and 71 mm - selected to represent mouse and rat sizes respectively. The measured non-linear relationship between attenuation and length of propagation is fitted to a polynomial function, which is used to estimate the effective monoenergetic attenuation coefficient for water. The estimated effective linear attenuation coefficient value is used to generate the expected sum of attenuation coefficients along each x-ray path through the imaged object. The acquired polyenergetic data is then linearized to the expected projections using a third-order polynomial fit, which is consistent with the Inveon BH model and software. The coefficients of this polynomial are then applied for BH correction during CT reconstruction. Correction achieved with the proposed model demonstrates effective removal of the characteristic cupping artifact that was observed when the default BHC coefficients were applied. In addition to water phantoms, we also test the effectiveness of the proposed scheme using solid cylindrical phantoms of three different densities and compositions. The proposed method was also used to measure the BH effect for 12 different kVp/filtration combinations. By generating twelve distinct sets of BHC coefficients, one for each setting, we significantly expand the quantitative performance of the Inveon CT system.
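A compact sketch of the linearization idea described above: fit the measured polyenergetic projection versus path length, take the low-path-length slope as the effective monoenergetic attenuation coefficient, and fit a third-order polynomial that maps measured projections onto linearized ones. The numerical values are invented for illustration only and are not the thesis's measurements.

```python
import numpy as np

# assumed water-phantom data: path length L (cm) through water and the
# corresponding polyenergetic projection value p_poly = -ln(I/I0)
L = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
p_poly = np.array([0.11, 0.21, 0.40, 0.57, 0.73, 0.88, 1.02, 1.15])

poly_fit = np.polyfit(L, p_poly, 3)        # projection as a function of path length
mu_eff = poly_fit[2]                       # slope at L -> 0 ~ effective mono mu
p_mono = mu_eff * L                        # expected monoenergetic projections

# third-order polynomial that linearizes measured projections before FBP
bhc_coeffs = np.polyfit(p_poly, p_mono, 3)
linearized = np.polyval(bhc_coeffs, p_poly)
```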
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Rajam, G. "The UK food chain : restructuring, strategies and price transmission". Thesis, University of Nottingham, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243617.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
33

Mena, Andrade Ramiro Francisco. "Fast simulation : assisted shape correction after machining". Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671497.

Testo completo
Abstract (sommario):
Computer simulation is acknowledged as the third branch of Science. From the seminal work of Turner et al. (1956) on Finite Elements, impressive improvements have been made in terms of software and hardware, so that, nowadays, there is no engineering device (in the broad sense of the word) that is not subjected to some kind of simulation. However, in the same way as there is a gap between theory and practice, at an industrial level there are still problems where, at the moment, numerical simulations are not used and instead a hand-crafted approach is applied. The mitigation of distortion in large aluminium forgings is one of these problems. Post-machining distortion is an open problem that affects every single large thick-walled aluminium forging assembled on an aircraft. The origin of distortion is the presence of residual stresses (RS) developed along the manufacturing chain, especially after the heat treatment of quenching. When machining takes place, it causes a redistribution at an internal level as the previous equilibrium state is broken by the material removal action. At a theoretical level, if the RS in a part are known in advance, a proper machining sequence could be planned with the aim of mitigating or counteracting a warped geometry. This strategy is already implemented for parts machined from rolled plates, where RS can be considered as constant along the longitudinal direction. However, for forgings, RS are a function of the geometry and, as a result, a complex three-dimensional stress field is present. Important research efforts are devoted to predicting RS numerically for forgings but still, the deterministic nature of numerical simulations is not able to capture the variable behaviour of distortions. At the moment, the current research direction looks at the distortion problem as something to avoid earlier in the manufacturing chain, that is, focusing the efforts on the steps before machining or, at the latest, during it. In the present thesis, we opted to follow the opposite direction, that is, how to proceed and handle distortion once it has arisen. To perform this task, the problem was first studied by the conventional Finite Element Method (FEM), and then by applying a non-intrusive Model Order Reduction (ROM) technique called Sparse Subspace Learning (SSL). The content of this thesis is structured as follows. In chapter 1 we introduce the distortion problem, followed by the definition and interconnection of the three main actors: residual stresses, distortion and reshaping. Then, a review of the available reshaping techniques is provided and finally, the challenges and perspectives for reshaping simulation are discussed. Chapter 2 presents two numerical models devoted to determining the residual stresses after quenching and plastic bending, respectively, as the heat treatment is considered the main source of residual stresses in aluminium forgings and the latter corresponds to the reshaping operation selected for study in this thesis. After their validation, both models are applied in the following chapters and they provide the reference solution to our problem. In chapter 3, we introduce the reshaping diagrams as a tool to assist the bending straightening operation. In addition, the residual-stress-free hypothesis is presented as an alternative to study the reshaping problem. This approach uses the distorted geometry as the main input to simulate the reshaping step by considering the part without residual stresses.
Then, in chapter 4, a multiparametric study of bending straightening is provided with the aid of the SSL, so that the reshaping diagrams can be generalized for a previously defined set of parameters. Finally, as reshaping is an iterative and sequential procedure, in chapter 5, two consecutive bending straightening operations are simulated. Different reshaping strategies are studied and a methodology is provided to tackle in a more systematic way the open problem of reshaping.
Distortion after machining of large aluminium parts is a recurrent problem for the aeronautical industry. These deviations from the design geometry are due to the presence of residual stresses, which develop along the manufacturing chain, especially after the quenching heat treatment. To restore the nominal geometry, a series of highly manual and time-consuming reshaping operations is required. The present research work focuses on the development of efficient numerical simulation tools to assist operators in bending straightening, which is one of the most common reshaping operations. To this end, a finite element simulation model representative of the manufacturing chain, including quenching, machining and reshaping, is developed, which allows residual stresses and distortions to be predicted in thick-walled aluminium forgings. The model is validated against experimental data found in the literature. The concept of reshaping diagrams is then introduced, a tool that allows a near-optimal bending load to be selected in order to minimize distortion. It is shown that the reshaping diagram does not need to take the residual stress field into account, since its only effect is to shift the reshaping diagram horizontally by a certain distance. Therefore, the general behaviour that includes a real three-dimensional residual stress field in a forging can be recovered by shifting the residual-stress-free reshaping diagram by the appropriate offset. A strategy is then proposed to identify the offset on the fly during the reshaping operation using simple force-displacement measurements. Next, the use of new numerical techniques, especially model order reduction (MOR), is explored with a twofold purpose: i) to speed up the computation of the reshaping diagrams; and ii) to take into account several process parameters, such as the initial distortion or the reshaping configuration. To this end, we rely on the Sparse Subspace Learning (SSL) method, a non-intrusive MOR method that allows the solution space to be reconstructed directly from the results of the finite element model. With the parametric solution at hand, the optimal reshaping configuration can be found in real time, to minimize distortion before launching the actual reshaping operation. Finally, the first steps are proposed towards extending the above methodology, which combines reshaping diagrams and MOR methods, to a multi-stage setting in which several shape correction operations are performed sequentially.
Gli stili APA, Harvard, Vancouver, ISO e altri
34

Högström, Martin. "Wind Climate Estimates - Validation of Modelled Wind Climate and Normal Year Correction". Thesis, Uppsala University, Air and Water Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-8023.

Testo completo
Abstract (sommario):

Long time average wind conditions at potential wind turbine sites are of great importance when deciding if an investment will be economically safe. Wind climate estimates such as these are traditionally done with in situ measurements for a number of months. During recent years, a wind climate database has been developed at the Department of Earth Sciences, Meteorology at Uppsala University. The database is based on model runs with the higher order closure mesoscale MIUU-model in combination with long term statistics of the geostrophic wind, and is now used as a complement to in situ measurements, hence speeding up the process of turbine siting. With this background, a study has been made investigating how well actual power productions during the years 2004-2006 from 21 Swedish wind turbines correlate with theoretically derived power productions for the corresponding sites.

When comparing theoretically derived power productions based on long-term statistics with measurements from a shorter time period, correction is necessary to be able to make relevant comparisons. This normal year correction is a main focus, and a number of different wind energy indices used for this purpose are evaluated: two that are publicly available (the Swedish and the Danish Wind Index) and one derived theoretically from physical relationships and NCEP/NCAR reanalysis data (the Geostrophic Wind Index). Initial testing suggests, in some cases, very different results when correcting with the three indices, so further investigation is necessary. An evaluation of the Geostrophic Wind Index is made with the use of in situ measurements.

When correcting time-limited measurement periods to a long-term average, a larger statistical dispersion is expected for shorter measurement periods, decreasing as the periods grow longer. In order to investigate this assumption, a wind speed measurement dataset of 7 years was corrected with the Geostrophic Wind Index, simulating a number of hypothetical measurement periods of various lengths. When normal year correcting a measurement period of a specific length, the statistical dispersion decreases significantly during the first 10 months. A reduction to about half the initial statistical dispersion can be seen after just 5 months of measurements.

Results show that the theoretical normal year corrected power productions are in general around 15-20% lower than expected. A probable explanation for the larger part of this bias is serious problems with the reported time-not-in-operation for wind turbines in official power production statistics. This makes it impossible to compare actual power production with theoretically derived production without more detailed information. The theoretically derived Geostrophic Wind Index correlates well with measurements; however, the theoretically expected cubic relationship with wind speed seems to account for the total energy of the wind. Such an amount of energy cannot be absorbed by the wind turbines when wind speeds are much higher than normal.
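A small sketch of the normal-year correction step described above, assuming a wind energy index expressed in percent of the long-term normal (100 = a normal month); it is not the thesis's implementation.

```python
import numpy as np

def normal_year_correct(monthly_production, monthly_index):
    """Normal-year correction of a short production record using a wind
    energy index given in percent of the long-term normal (100 = normal)."""
    production = np.asarray(monthly_production, dtype=float)
    index = np.asarray(monthly_index, dtype=float)
    # scale each month to what it would have produced in a normal wind month
    corrected = production * 100.0 / index
    return corrected.mean()
```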


Gli stili APA, Harvard, Vancouver, ISO e altri
35

Liu, Wenjie. "Estimation and bias correction of the magnitude of an abrupt level shift". Thesis, Linköpings universitet, Statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-84618.

Testo completo
Abstract (sommario):
Consider a time series model which is stationary apart from a single shift in mean. If the time of a level shift is known, the least squares estimator of the magnitude of this level shift is a minimum variance unbiased estimator. If the time is unknown, however, this estimator is biased. Here, we first carry out extensive simulation studies to determine the relationship between the bias and three parameters of our time series model: the true magnitude of the level shift, the true time point and the autocorrelation of adjacent observations. Thereafter, we use two generalized additive models to generalize the simulation results. Finally, we examine to what extent the bias can be reduced by multiplying the least squares estimator with a shrinkage factor. Our results showed that the bias of the estimated magnitude of the level shift can be reduced when the level shift does not occur close to the beginning or end of the time series. However, it was not possible to simultaneously reduce the bias for all possible time points and magnitudes of the level shift.
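A simple sketch of the estimator discussed above: a least-squares search over candidate change points followed by multiplication with a shrinkage factor. It ignores the autocorrelation structure of the series, and the shrinkage value is an arbitrary placeholder rather than the thesis's fitted factor.

```python
import numpy as np

def estimate_level_shift(y, shrinkage=0.9):
    """Least-squares estimate of an abrupt level shift in a series y:
    search over candidate change points, take the mean difference, and
    shrink it to reduce bias when the shift time is unknown."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    best_t, best_rss, best_delta = None, np.inf, 0.0
    for t in range(2, n - 1):                 # candidate shift times
        m1, m2 = y[:t].mean(), y[t:].mean()
        rss = ((y[:t] - m1) ** 2).sum() + ((y[t:] - m2) ** 2).sum()
        if rss < best_rss:
            best_t, best_rss, best_delta = t, rss, m2 - m1
    return best_t, shrinkage * best_delta
```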
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Gretton, Jeremy David. "Perceived Breadth of Bias as a Determinant of Bias Correction". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1499097376679535.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Brandt, Oskar, e Rickard Persson. "The relationship between stock price, book value and residual income: A panel error correction approach". Thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-254344.

Testo completo
Abstract (sommario):
In this paper we examine the short- and long-term relations between stock price, book value and residual income. We employ a panel error correction model, estimated with Engle & Granger's (1987) two-step procedure and the single equation methodology. The models are estimated with FE-OLS and the MG estimator. We find that stock prices adjust to the previous period's equilibrium error. Further, we find that book value has short- and long-term effects on stock prices. Finally, this paper finds mixed results regarding residual income's impact on stock prices. The MG estimator finds evidence for a short-term relationship, while FE-OLS provides insignificant or weak support for short-term effects. Both FE-OLS and the MG estimator find insignificant or weak support regarding residual income's long-term effects.
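A hedged single-unit sketch of the Engle & Granger two-step procedure mentioned in the abstract: a long-run levels regression followed by a short-run regression in differences with the lagged equilibrium error. The panel dimension and the FE-OLS/MG estimation are not shown.

```python
import numpy as np
import statsmodels.api as sm

def engle_granger_ecm(price, book_value, resid_income):
    """Engle-Granger two-step ECM for one firm: (1) long-run levels regression,
    (2) short-run regression in differences with the lagged equilibrium error."""
    X = sm.add_constant(np.column_stack([book_value, resid_income]))
    longrun = sm.OLS(price, X).fit()
    ect = longrun.resid                                # equilibrium error

    dP = np.diff(price)
    dX = sm.add_constant(np.column_stack([np.diff(book_value),
                                          np.diff(resid_income),
                                          ect[:-1]]))  # lagged error term
    shortrun = sm.OLS(dP, dX).fit()
    return longrun, shortrun
```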
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Buendía, Rubén. "Hook Effect on Electrical Bioimpedance Spectroscopy Measurements. Analysis, Compensation and Correction". Thesis, Högskolan i Borås, Institutionen Ingenjörshögskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-19565.

Testo completo
Abstract (sommario):
Nowadays, Electrical Bioimpedance (EBI) measurements have become common practice as they are useful for different clinical applications. EBI technology and EBI measurement systems are relatively simple when compared to other types of medical instrumentation. But even in such simple measurement systems, measurement artifacts may occur. One of the most common artifacts present in measurements is the Hook Effect, which is identifiable by the hook-like deviation it produces in the EBI data on the impedance plot. The Hook Effect influences typical EBI data analysis processes like the Cole fitting process and the estimation of the Cole parameters, which are critical for several EBI applications. Therefore the Hook Effect must be corrected, compensated or removed before any fitting process. With the goal of finding a reliable correction method, the origin of the Hook Effect and its impact on the EBI measurement are studied in this thesis. The currently used Td compensation method is also studied and a new approach for compensation and correction is presented. The results indicate that the proposed method truly corrects the Hook Effect and that the methodology for selecting the correcting parameters is solidly based on the origin of the Hook Effect and is extracted from the EBI measurement itself, avoiding any external dependency.
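The Td compensation mentioned above is commonly described as multiplying the measured impedance spectrum by a complex exponential phase factor that undoes a time delay. The sketch below assumes that form and a known delay Td; the sign convention and the way Td is selected are assumptions, and this is not the thesis's own method.

```python
import numpy as np

def td_compensate(z_measured, freqs, td):
    """Compensate a measured EBI spectrum for a time delay Td (seconds):
    a pure delay multiplies the spectrum by exp(-j*2*pi*f*Td), so the
    correction multiplies by the conjugate phase factor."""
    freqs = np.asarray(freqs, dtype=float)
    return np.asarray(z_measured) * np.exp(1j * 2.0 * np.pi * freqs * td)
```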
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Palanisamy, Bakkiyalakshmi. "Evaluation of SWAT model - subdaily runoff prediction in Texas watersheds". Texas A&M University, 2003. http://hdl.handle.net/1969.1/5921.

Testo completo
Abstract (sommario):
Spatial variability of rainfall is a significant factor in hydrologic and water quality modeling. In recent years, characterizing and analyzing the effect of spatial variability of rainfall in hydrologic applications has become vital with the advent of remotely sensed precipitation estimates that have high spatial resolution. In this study, the effect of spatial variability of rainfall in hourly runoff generation was analyzed using the Soil and Water Assessment Tool (SWAT) for Big Sandy Creek and Walnut Creek Watersheds in North Central Texas. The area of the study catchments was 808 km2 and 196 km2 for Big Sandy Creek and Walnut Creek Watersheds respectively. Hourly rainfall measurements obtained from raingauges and weather radars were used to estimate runoff for the years 1999 to 2003. Results from the study indicated that generated runoff from SWAT showed enormous volume bias when compared against observed runoff. The magnitude of bias increased as the area of the watershed increased and the spatial variability of rainfall diminished. Regardless of high spatial variability, rainfall estimates from weather radars resulted in increased volume of simulated runoff. Therefore, weather radar estimates were corrected for various systematic, range-dependent biases using three different interpolation methods: Inverse Distance Weighting (IDW), Spline, and Thiessen polygon. Runoff simulated using these bias adjusted radar rainfall estimates showed less volume bias compared to simulations using uncorrected radar rainfall. In addition to spatial variability of rainfall, SWAT model structures, such as overland flow, groundwater flow routing, and hourly evapotranspiration distribution, played vital roles in the accuracy of simulated runoff.
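A sketch of one of the bias-adjustment steps described above: inverse distance weighting of gauge/radar bias factors onto the radar grid. Coordinates and the power parameter are placeholders, and the Spline and Thiessen variants are not shown.

```python
import numpy as np

def idw_bias_field(gauge_xy, bias_at_gauges, grid_xy, power=2.0):
    """Inverse-distance-weighted interpolation of gauge/radar bias factors
    (gauge rainfall divided by collocated radar rainfall) onto radar pixels."""
    gauge_xy = np.asarray(gauge_xy, float)          # (n_gauges, 2)
    bias = np.asarray(bias_at_gauges, float)        # (n_gauges,)
    grid_xy = np.asarray(grid_xy, float)            # (n_pixels, 2)
    d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w * bias).sum(axis=1) / w.sum(axis=1)   # bias factor per pixel

# corrected_radar = radar_rainfall * idw_bias_field(...)
```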
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Gqozo, Pamela. "Impact of oil price on tourism in South Africa: an error correction model (ECM) analysis". Thesis, University of Fort Hare, 2013. http://hdl.handle.net/10353/d1017941.

Testo completo
Abstract (sommario):
The study focuses on the impact of oil price on tourism in South Africa. Quarterly time series data for the period 1990 to 2012 was used. An error correction model is the research instrument used to determine the impact of oil price on tourism in South Africa. The explanatory variables in this study are oil price, real exchange rate, gross domestic product, consumer price index and transport infrastructure investment. The results of the study revealed that oil price, the consumer price index and the real exchange rate have a negative long-run relationship with tourism, while gross domestic product and transport infrastructure investment have a positive long-run relationship with tourism. It was also shown that oil price has a statistically significant relationship with tourism.
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Nastansky, Andreas, Alexander Mehnert e Hans Gerhard Strohe. "A vector error correction model for the relationship between public debt and inflation in Germany". Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/5024/.

Testo completo
Abstract (sommario):
In the paper, the interaction between public debt and inflation, including mutual impulse responses, will be analysed. The European sovereign debt crisis brought the focus once again onto the consequences of public debt, in combination with an expansive monetary policy, for the development of consumer prices. Public deficits can lead to inflation if the money supply is expansive. The high level of national debt, not only in the Euro-crisis countries, and the strong increase in the total assets of the European Central Bank, as a result of the unconventional monetary policy, caused fears of the national debt being inflated away. The transmission from public debt to inflation through the money supply and the long-term interest rate will be shown in the paper. Based on these theoretical considerations, the variables public debt, consumer price index, money supply M3 and long-term interest rate will be analysed within a vector error correction model estimated by the Johansen approach. In the empirical part of the article, quarterly data for Germany from 1991 to 2010 are examined.
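A hedged sketch of how the Johansen-based analysis described above could be set up in statsmodels: select the cointegration rank with a trace test and then fit the VECM. The file name, column names, lag order and deterministic terms are assumptions, not the paper's exact specification.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import select_coint_rank, VECM

# assumed columns: public_debt, cpi, m3, long_rate  (quarterly, 1991-2010)
data = pd.read_csv("germany_quarterly.csv", index_col=0, parse_dates=True)

rank = select_coint_rank(data, det_order=0, k_ar_diff=4,
                         method="trace", signif=0.05)
print(rank.rank)                     # number of cointegrating relations

vecm = VECM(data, k_ar_diff=4, coint_rank=rank.rank, deterministic="ci").fit()
print(vecm.summary())
```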
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Chang, Che-Yu, e 張哲豫. "Semi-Automatic Skin Color Model Correction". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/74745746512891794969.

Testo completo
Abstract (sommario):
Master's thesis
National Taiwan Ocean University
Department of Computer Science and Engineering
104
In this thesis, a novel semi-automatic skin color model correction method is presented. The method improves the accuracy of skin segmentation under different lighting conditions. We treat the light color as a color temperature and use the complexion of the user's hand as a sample to estimate the color temperature of the environment. The estimated environmental color temperature is then used to correct the skin color model. We also provide test results showing that our method improves the accuracy of skin segmentation and performs better than fully automatic methods.
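The sketch below is only a loose, hypothetical stand-in for the idea of adapting a skin color model from a hand sample: it shifts a nominal YCrCb skin box by the chrominance offset measured on the sample, rather than estimating an explicit color temperature as the thesis does. All threshold and reference values are placeholders.

```python
import numpy as np

def correct_skin_box(ycrcb_image, hand_patch, ref_cr=145.0, ref_cb=120.0):
    """Hypothetical sketch: estimate the chrominance shift from a hand sample
    and shift a nominal YCrCb skin box by that offset before thresholding.
    ycrcb_image: HxWx3 float array (Y, Cr, Cb); hand_patch: Nx3 samples."""
    d_cr = hand_patch[:, 1].mean() - ref_cr
    d_cb = hand_patch[:, 2].mean() - ref_cb
    cr, cb = ycrcb_image[..., 1], ycrcb_image[..., 2]
    mask = ((cr > 133 + d_cr) & (cr < 173 + d_cr) &
            (cb > 77 + d_cb) & (cb < 127 + d_cb))   # nominal box values assumed
    return mask
```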
Gli stili APA, Harvard, Vancouver, ISO e altri
43

WANG, ZHONG-DING, e 王鐘頂. "Finite element model identification and correction using experimental modal data". Thesis, 1992. http://ndltd.ncl.edu.tw/handle/92428091860947499705.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Hsu, Chao-Chih, e 許超智. "A Fuzzy CMAC Model for Color Correction". Thesis, 1995. http://ndltd.ncl.edu.tw/handle/12882947652501509848.

Testo completo
Abstract (sommario):
Master's thesis
National Cheng Kung University
Institute of Information and Electronic Engineering
83
Albus's Cerebellar Model Articulation Controller (CMAC) has been used in many practical areas with considerable success and is capable of learning nonlinear functions extremely quickly due to the local nature of its weight updating. Besides, the higher-order CMAC model proposed by Stephen and David adopts B-Spline receptive field functions and a more general addressing scheme for weight retrieval, which can learn both functions and function derivatives. In this thesis, we present a three-layered fuzzy CMAC network, which takes bell-shaped membership functions as the receptive field functions and uses the centroid of area (COA) approach as the defuzzification interface. The learning algorithm is based on the maximum gradient method. For the situation of insufficient and irregularly distributed training patterns, we propose a sampling method based on an interpolation scheme to generate proper training patterns. The proposed fuzzy CMAC model is basically a table look-up model in which fuzzy weights are stored and manipulated using fuzzy set theory. This model adaptively adjusts the weights according to sample data to approximate nonlinear continuous functions. Finally, we carry out experiments including general function approximation and color correction to verify the proposed model.
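A minimal CMAC-like sketch with bell-shaped (Gaussian) receptive fields and a local LMS update, illustrating the kind of network the abstract describes; it omits the fuzzy weights and COA defuzzification of the proposed model, and all parameter values are placeholders.

```python
import numpy as np

class BellCMAC:
    """Simplified CMAC-like network on a 1-D input: bell-shaped receptive
    fields, normalised activations, and a local gradient (LMS) weight update."""
    def __init__(self, n_fields=20, x_min=0.0, x_max=1.0, width=0.08, lr=0.2):
        self.centers = np.linspace(x_min, x_max, n_fields)
        self.width, self.lr = width, lr
        self.w = np.zeros(n_fields)

    def _activations(self, x):
        a = np.exp(-((x - self.centers) / self.width) ** 2)   # bell-shaped fields
        return a / a.sum()                                     # normalised firing

    def predict(self, x):
        return self._activations(x) @ self.w

    def train(self, x, target):
        a = self._activations(x)
        self.w += self.lr * (target - a @ self.w) * a          # local LMS update

# example: learn a nonlinear 1-D mapping, e.g. a colour-correction curve
net = BellCMAC()
for _ in range(200):
    for x in np.random.rand(50):
        net.train(x, np.sin(2 * np.pi * x))
print(net.predict(0.25))
```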
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Shih, Cheng-Ting, e 施政廷. "Metal artifact correction using model-based images". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/gs2qa4.

Testo completo
Abstract (sommario):
Master's thesis
Central Taiwan University of Science and Technology
Graduate Institute of Radiological Science
97
Computed tomography (CT) provides various kinds of diagnostic information which are beneficial and convenient for clinical medicine today. However, high-density metal implants in CT scans induce metal artifacts and compromise image quality. In this study, we proposed a model-based metal artifact correction method. First, we built a model image using the k-means clustering technique and removed the errors within the clustering results using local statistics of spatial information and image inpainting. The difference between the original image and the model image was then calculated, and the projection data of the original image and the model image were combined by a weighting factor estimated from an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection method. Four case images, from scans of a cylindrical water phantom, a pelvis, an oral cavity and a hip joint, were used to test the correction ability of our algorithm. All of the correction results show that our algorithm can effectively remove the metal artifacts arising from metal objects and significantly improve image continuity and uniformity. Furthermore, the surface rendering and volume rendering results of the oral and pelvic CT images were also recovered. We conclude that the metal artifact correction method proposed in this study is useful for reducing metal artifacts.
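A rough sketch of the model-image idea described above, using k-means to build a prior image and an exponential weight to blend original and model sinograms before filtered back-projection. The clustering refinement, inpainting and the exact weighting function of the thesis are not reproduced; parameter values and the blending form are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.transform import radon, iradon

def model_based_mac(image, n_classes=4, alpha=50.0):
    """Illustrative model-image metal artifact correction on a 2-D slice."""
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
        image.reshape(-1, 1)).reshape(image.shape)
    model = np.zeros_like(image, dtype=float)
    for k in range(n_classes):
        model[labels == k] = image[labels == k].mean()   # class-mean prior image

    theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    sino_orig = radon(image, theta=theta)
    sino_model = radon(model, theta=theta)
    diff = np.abs(sino_orig - sino_model)
    w = np.exp(-diff / alpha)                 # trust the original where it agrees
    sino_corr = w * sino_orig + (1.0 - w) * sino_model
    return iradon(sino_corr, theta=theta)     # filtered back-projection
```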
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Yang, Ru-Yi, e 楊儒易. "Hotspot Guided Model-based Optical Proximity Correction". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/77796771308638049052.

Testo completo
Abstract (sommario):
Master's thesis
Chung Yuan Christian University
Graduate Institute of Information and Computer Engineering
99
In recent years, the semiconductor manufacturing process has made great progress. To avoid lithography hotspots and enhance integrated circuit (IC) yield, Model-based Optical Proximity Correction (Model-based OPC) can be used to improve image fidelity and printability. The most vexing problem is the time-consuming optical simulation of Model-based OPC; therefore, a tradeoff must be made between the execution time and the accuracy of the OPC procedure. This thesis proposes a Model-based OPC flow that is roughly divided into three major parts. First, a fast lithography simulation technique is used to obtain the mask aerial image efficiently. Second, a scanning method is used to scan the whole mask design with a partition technique. Third, a hotspot cost that we define is determined for each partition region to control the convergence of the Model-based OPC feedback system, incorporating some control factors to adjust the solution quality. With the above approaches and a well-designed data structure, our procedure can reduce the calculation time of Model-based OPC and effectively improve mask fidelity and printability. From the experimental results, we observe that our Model-based OPC can obtain a high-resolution solution and that the procedure can be completed within the convergence criteria we set.
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Chang, Yun-Hua, e 張芸華. "The Attitude Correction Model on High Technology Products". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/35418853930164420160.

Testo completo
Abstract (sommario):
Master's thesis
National Tsing Hua University
Institute of Technology Management
93
Occasionally people attempt to correct their initial perceptions or judgments because of potentially biasing factors. For example, when people read newspapers or watch television, it is not unusual that a given endorser endorses many different products or brands in the same period of time. When such a situation (i.e., multiple endorsement) occurs, people may judge the advertisement differently from when the endorser endorses one and only one product or brand. The literature in advertising research has emphasized endorser attributes, attitude toward the advertisement and corresponding strategies for practitioners, rather than discussing information processing under the condition in which a given endorser is repeatedly exposed to audiences. On the other hand, even though the literature in psychology has already developed solid evidence explaining the mechanisms of attitude change, previous studies that examined attitude change and explained bias correction only used prompts to make subjects aware of those biasing factors, referred to as "enforced correction" in this study. Nonetheless, in reality, people are less likely to perceive prompts in advertisements. That is, people in a regular consumption setting will be less likely to engage in the "enforced correction" studied in the social psychology literature. In contrast, people in a regular consumption setting will be more likely to correct the perceived biases by themselves, referred to as a "self-activated correction" process. It is proposed in this study that the correction patterns resulting from the "self-activated correction" process will be quite different from those resulting from the "enforced correction" process. In this study, a 2x2x2 (positive vs. neutral endorser / high vs. low involvement / enforced vs. self-activated correction process) experiment is conducted to investigate the direction and degree of correction for a perceived bias (i.e., multiple endorsements) in persuasion situations. Students at Sung-Shang Vocation School were asked to express their attitudes toward a product after being exposed to a magazine under conditions of either high or low product involvement, as well as whether or not an explicit correction instruction was given (i.e., self-activated correction or enforced correction). The advertisements had the same arguments for the product and featured either the same or different endorser(s). That is, a subject read a persuasive message from a famous endorser (i.e., positive endorser) or an unfamiliar endorser (i.e., neutral endorser) who endorsed a particular product (i.e., notebook or non-notebook) in the experiment booklet. This research is meant to find out how these correction processes differ.
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Wang, Chun-Kun, e 王俊昆. "Automatic Validation and Correction of OpenMP Tasking Model". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/35853108475132584005.

Testo completo
Abstract (sommario):
Master's thesis
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
99
Shared-memory multiprocessor architecture is becoming a mainstream trend in modern computer systems, and OpenMP, Open Multi-Processing, is one of the most important programming approaches for this architecture. OpenMP provides a simple and flexible interface to develop portable and scalable parallel applications for C, C++ and Fortran programs. In addition, the OpenMP tasking model proposed in the OpenMP 3.0 standard allows programmers to exploit the parallelism of irregular and dynamic structures in programs. According to the design philosophy of OpenMP, programmers are supposed to analyze data dependencies, race conditions and deadlocks, and to use correct OpenMP directives and APIs to produce a conforming program. It is increasingly error-prone and difficult for programmers to handle OpenMP directives correctly as programs get more complicated. In this paper, we propose an algorithm for automatic validation and correction of the OpenMP tasking model. The proposed algorithm has been implemented on top of the ROSE compiler infrastructure. Experimental results show that the proposed technique can successfully validate and correct the tested benchmark programs.
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Noy, Dominic. "Parameter estimation of the linear phase correction model by mixed-effects models". Master's thesis, 2017. http://hdl.handle.net/1822/50021.

Testo completo
Abstract (sommario):
Master's dissertation in Science in Statistics
The control of human motor timing is captured by cognitive models that make assumptions about the underlying information processing mechanisms. A paradigm for its inquiry is the Sensorimotor Synchronization (SMS) task, in which an individual is required to synchronize the movements of an effector, like the finger, with repetitive appearing onsets of an oscillating external event. The Linear Phase Correction model (LPC) is a cognitive model that captures the asynchrony dynamics between the finger taps and the event onsets. It assumes cognitive processes that are modeled as independent random variables (perceptual delays, motor delays, timer intervals). There exist methods that estimate the model parameters from the asynchronies recorded in SMS tasks. However, while many natural situations show only very short synchronization periods, the previous methods require long asynchrony sequences to allow for unbiased estimations. Depending on the task, long records may be hard to obtain experimentally. Moreover, in typical SMS tasks, records are repetitively taken to reduce biases. Yet, by averaging parameter estimates from multiple observations, the existing methods do not most appropriately exploit all available information. Therefore, the present work is a new approach of parameter estimation to integrate multiple asynchrony sequences. Based on simulations from the LPC model, we first demonstrate that existing parameter estimation methods are prone to bias when the synchronization periods become shorter. Second, we present an extended Linear Model (eLM) that integrates multiple sequences within a single model and estimates the model parameters of short sequences with a clear reduction of bias. Finally, by using Mixed-Effects Models (MEM), we show that parameters can also be retrieved robustly when there is between-sequence variability of their expected values. Since such between-sequence variability is common in experimental and natural settings, we herewith propose a method that increases the applicability of the LPC model. This method is now able to reduce biases due to fatigue or attentional issues, for example, bringing an experimental control that previous methods are unable to perform.
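A small sketch that simulates short asynchrony sequences from the linear phase correction recursion commonly written as A[k+1] = (1 − α)A[k] + T[k] − τ + M[k+1] − M[k]; the parameter values are illustrative, and the mixed-effects estimation itself is not shown.

```python
import numpy as np

def simulate_lpc(n=30, alpha=0.3, tau=500.0,
                 timer_sd=10.0, motor_sd=5.0, seed=0):
    """Simulate asynchronies from the linear phase correction model with
    i.i.d. timer intervals T and motor delays M (values in ms)."""
    rng = np.random.default_rng(seed)
    T = rng.normal(tau, timer_sd, n)        # central timer intervals
    M = rng.normal(0.0, motor_sd, n + 1)    # motor delays
    A = np.zeros(n + 1)
    for k in range(n):
        A[k + 1] = (1 - alpha) * A[k] + T[k] - tau + M[k + 1] - M[k]
    return A
```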
Fundação para a Ciência e Tecnologia (FCT) - Project UID/MAT/00013/2013
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Chen, Chuan-Yi, e 陳川鎰. "Study of Correction Model for Correcting Residual Capacity of Lithium-Ion Battery by Current Compensation Method". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/38dc6w.

Testo completo
Abstract (sommario):
碩士
國立臺灣科技大學
自動化及控制研究所
107
Nowadays, the increasingly extensive application of lithium batteries places more demanding requirements on the accuracy of power estimation. Coulomb counting is the method commonly used on the market to estimate lithium battery charge, but in practice it becomes increasingly inaccurate. For this reason, many studies on compensation methods have been conducted; however, a closer look reveals widespread shortcomings. There are two main problems. One is that the compensation procedure required after battery degradation is too complicated and time-consuming to be performed by ordinary users. The other is that there is no discussion or research on the interaction between current and charge/discharge behaviour, which ultimately enlarges the error of the charge estimate. On this basis, simple and accessible charge and discharge data are used to construct a mathematical model for the state of charge (SOC). This model is then generalized so that it applies to dissimilar batteries, allowing a new battery to establish its own mathematical model of remaining capacity after several charge and discharge cycles. Moreover, each cell in the battery can keep correcting its mathematical model during use, so as to improve the accuracy of the estimate. By reacting in advance and constantly adjusting the model, it can also match an aged battery, truly enabling each cell to customize its own mathematical model. The model allows users to optimize charge and discharge parameters so as to slow down battery degradation. In this study, it is empirically verified that the error over a complete discharge is within 2% and the error after 50% of discharge is about 3%. This mathematical model is applicable to manufacturers that use lithium batteries, such as producers of batteries, cell phones and electric vehicles, and can be further developed into software, chargers, etc.
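A minimal coulomb-counting sketch with a hook for a current-dependent correction factor, to make the estimation problem concrete; the correction function shown is purely illustrative and is not the correction model developed in the thesis.

```python
def soc_coulomb_counting(current_a, dt_s, capacity_ah, soc0=1.0,
                         correction=lambda i: 1.0):
    """Basic coulomb counting. current_a: discharge currents in A
    (positive = discharge); correction(i) is an optional current-dependent
    compensation factor applied to the charge drawn at each step."""
    soc = soc0
    capacity_as = capacity_ah * 3600.0
    history = []
    for i in current_a:
        soc -= i * dt_s * correction(i) / capacity_as
        history.append(soc)
    return history

# example: 2 A discharge for one hour from a 2.6 Ah cell, 1 s steps,
# with a mild illustrative current-dependent penalty
trace = soc_coulomb_counting([2.0] * 3600, 1.0, 2.6,
                             correction=lambda i: 1.0 + 0.02 * i)
```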
Gli stili APA, Harvard, Vancouver, ISO e altri