Dissertations / Theses on the topic 'RMSHE'


Consult the top 29 dissertations / theses for your research on the topic 'RMSHE.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

SAH, BIKASH KUMAR. "A NOVEL CONVOLUTIONAL NEURAL NETWORK FOR AIR POLLUTION FORECASTING." Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18792.

Full text
Abstract:
Air pollution was a global problem a few decades back. It is still a problem and will continue to be a problem if not solved appropriately. Various machine learning and deep learning approaches have been proposed for accurate prediction, estimation and analysis of air pollution. We have proposed a novel five-layer one-dimensional convolutional neural network architecture to forecast the PM2.5 concentration; it is a deep learning approach. We have used the five-year air pollution dataset from 2010 to 2014 recorded by the US embassy in Beijing, China, taken from the UCI machine learning repository [19]. The dataset we are considering is in the .csv format and consists of feature columns such as "Number," "year," "month," "day," "PM2.5," "PM10," "SO2," "dew," "temp," "pressure," "wind direction," "snow" and "rain." The dataset consists of a total of 43,324 rows and nine feature columns. The model yields the best results in predicting PM2.5 levels, with an RMSE of 28.1309 and an MAE of 14.9727. On statistical analysis we found that our proposed prediction model outperformed traditional forecasting models such as DTR, SVR and ANN for air pollution forecasting.
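
As an illustration of the kind of model described above, the following is a minimal sketch (in Keras) of a five-layer one-dimensional CNN that maps a window of past readings to the next PM2.5 value. The window length, filter counts and number of input features are assumptions made for illustration, not the author's exact architecture; the RMSE/MAE helpers correspond to the metrics quoted in the abstract.

```python
# Minimal sketch (not the author's exact architecture): a five-layer 1D CNN
# that maps a window of past hourly pollutant/weather readings to the next
# PM2.5 value. Window length, filter counts and feature count are assumptions.
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 24, 9          # assumed: 24 past hours, 9 input features

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu",
                           input_shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),        # predicted PM2.5 concentration
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Evaluation with the metrics quoted in the abstract (RMSE and MAE):
def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
```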
2

Chermiti, Amro. "Hur kan injicerad aktivitet individanpassas vid skelettscintigrafi? Effekten av patientspecifika parametrar." Thesis, Örebro universitet, Institutionen för hälsovetenskaper, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-84602.

Full text
Abstract:
Bakgrund: Skelettscintigrafi är en nuklearmedicinsk undersökning. Undersökningen är den mest använda nukleardiagnostiska metoden och den genomförs ofta som en helkroppsundersökning. För att undersökningen ska kunna erhålla sin diagnostiska kvalitet, samt följa strålsäkerhetsmyndighetens rekommendationer behövs det mer kännedom till hur optimeringen ska följa as low as reasonably achievable (ALARA). Studiens syfte var att optimera patientstråldos samt att undersöka hur injicerad aktivitet kan anpassas efter patientens specifika parametrar. Metod: Studiegruppen bestod av 85 patienter som genomgick skelettscintigrafier vid Central sjukhuset i Karlstad, från perioden februari-april 2020. Resultat: Visade att både ålder och vikt är patientspecifika variabler som borde tas till betraktning vid bestämning av injicerad strålningsdos. Konklusionen: För att optimera undersökningen för varje patient bör injicerad aktivitet anpassas efter både kroppsvikt och ålder. Fler studier där andra parametrar undersöks måste genomföras.
Background: Bone scintigraphy is a nuclear medicine procedure. It is the most used nuclear diagnostic method and provides the opportunity to perform a full-body examination. For the method to retain its diagnostic quality, and to follow the recommendations of the Radiation Safety Authority, more knowledge is required on how the optimization should follow the principle of as low as reasonably achievable (ALARA). The purpose of the study was to optimize the patient radiation dose and to investigate how the injected activity can be adapted to patient-specific parameters. Method: The study group consisted of 85 patients who underwent bone scintigraphy at the Central Hospital in Karlstad during the period February-April 2020. Results: Age and weight are both patient-specific variables that should be considered when determining the injected radiation dose. Conclusion: To optimize the examination for each patient, the injected activity should be adjusted according to the patient's body weight and age. More studies in which other parameters are investigated must be carried out.
3

Hast, Isak. "Quality Assessment of Spatial Data: Positional Uncertainties of the National Shoreline Data of Sweden." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-18743.

Full text
Abstract:
This study investigates the planimetric (x, y) positional accuracy of the National Shoreline (NSL) data, produced in collaboration between the Swedish mapping agency Lantmäteriet and the Swedish Maritime Administration (SMA). Owing to the compound nature of shorelines, such data are affected by substantial positional uncertainties, whereas the positional accuracy requirements of NSL data are high. An apparent problem is that Lantmäteriet does not measure the positional accuracy of NSL in accordance with the NSL data product specification. In addition, there is currently little understanding of the latent positional changes of shorelines over time, which directly influence the accuracy of NSL. Therefore, in line with the two specific aims of this study, an accuracy assessment technique is first applied to measure the positional accuracy of NSL; second, positional changes of NSL over time are analysed. The study provides an overview of potential problems and future prospects of NSL, which can be used by Lantmäteriet to improve the quality assurance of the data. Two line-based NSL data sets within the NSL classified regions of Sweden are selected, and their positional uncertainties are investigated using two distinct methodologies. First, an accuracy assessment method is applied, and accuracy metrics based on the root-mean-square error (RMSE) are derived and checked against the specification and standard accuracy tolerances. The calculated RMSE metrics, compared with these tolerances, indicate an approved accuracy of the tested data. Second, positional changes of NSL data are measured using a proposed space-time analysis technique. The results of the analysis reveal significant discrepancies between the two areas investigated, indicating that one of the test areas is influenced by much greater positional changes over time. The accuracy assessment method used in this study has a number of apparent constraints; one manifest restriction is the potential presence of bias in the derived accuracy metrics. Given these restrictions, the preferred method for assessing the positional accuracy of NSL is visual inspection against aerial photographs. With regard to the result of the space-time analysis, one important conclusion can be made: the time-dependent positional discrepancies between the two areas indicate that Swedish coastlines are affected by divergent degrees of positional change over time. Therefore, Lantmäteriet should consider updating NSL data at different time intervals, depending on the prevailing regional changes, so as to assure the currently specified positional accuracy of the entire NSL data structure.
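
For readers unfamiliar with the metric, the sketch below shows how a planimetric RMSE is typically computed from the deviations between tested and reference points and checked against a tolerance. The coordinates and the 2.0 m tolerance are placeholders, not values from the NSL specification.

```python
# Illustrative sketch of a planimetric accuracy check: RMSE of the deviations
# between tested points and reference points, compared against a tolerance.
# The tolerance value below is a placeholder, not the NSL specification.
import numpy as np

def planimetric_rmse(tested_xy, reference_xy):
    """RMSE of 2D (x, y) deviations between corresponding points, in metres."""
    d = np.asarray(tested_xy, float) - np.asarray(reference_xy, float)
    return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

tested = [(100.2, 200.1), (150.4, 250.3)]
reference = [(100.0, 200.0), (150.0, 250.0)]
rmse_xy = planimetric_rmse(tested, reference)
print(f"planimetric RMSE = {rmse_xy:.3f} m, approved: {rmse_xy <= 2.0}")  # 2.0 m is a placeholder tolerance
```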
4

Abdelhafeid, Faraj. "The Effect Upon Antenna Arrays of Variations of Element Orientation and Spacing in the Presence of Channel Noise, with an Application to Direction Finding." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1525866099535246.

Full text
5

Cantarello, Luca. "Analisi delle previsioni meteorologiche mensili mediante il modello GLOBO (ISAC-CNR)." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7690/.

Full text
Abstract:
This work presents the main characteristics of monthly weather forecasts, the scientific and historical progress behind them, and the techniques used for their verification. Some of these techniques were applied in order to evaluate and analyse the systematic error (bias) and the RMSE of temperature at 850 hPa (T850), geopotential height at 500 hPa (Z500) and accumulated precipitation of the GLOBO model, used at the Institute of Atmospheric Sciences and Climate of the Italian National Research Council (ISAC-CNR) to produce monthly forecasts. The results show the temporal progression of the error, which grows during the first two weeks of numerical integration and then stabilises between the third and the fourth week. This shows that the model, once the influence of the initial conditions is lost, reaches a state of its own which, however physiologically distant from the observed one, tends to stabilise and can therefore be regarded as systematic (possibly making its removal easier when calibrating the forecasts). The bias of T850 and Z500 shows negative anomalies mainly along the equatorial regions and large positive anomalies over the extra-tropical areas; the precipitation bias shows substantial overestimates over tropical continental regions. The geographic distribution of the RMSE (evaluated only for T850 and Z500) reveals a generally greater uncertainty in the extra-tropical regions, especially in the northern hemisphere and in the cold months.
6

Mansour, Tony, and Majdi Murtaja. "Applied estimation theory on power cable as transmission line." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-46583.

Full text
Abstract:
This thesis presents how to estimate the length of a power cable using the Maximum Likelihood Estimation (MLE) technique in Matlab. The model of the power cable is evaluated in the time domain with additive white Gaussian noise. Statistics have been used to evaluate the performance of the estimator, by repeating the experiment for a large number of samples where the random additive noise is generated for each sample. The estimated sample variance is compared to the theoretical Cramer-Rao Lower Bound (CRLB) for unbiased estimators. At the end of the thesis, numerical results are presented that show when the resulting sample variance is close to the CRLB, and hence when the performance of the estimator is more accurate.
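
The workflow described (repeated noisy experiments, ML estimation, comparison of the sample variance with the CRLB) can be illustrated with a generic Python sketch. The simple amplitude-estimation problem below is an assumption made for illustration; it is not the thesis's cable model.

```python
# Generic illustration (not the thesis's cable model) of the workflow described:
# repeat an experiment with fresh white Gaussian noise, form the ML estimate,
# and compare the sample variance of the estimates with the theoretical CRLB.
import numpy as np

rng = np.random.default_rng(0)
n, sigma, theta_true = 200, 0.5, 3.0          # samples, noise std, true parameter
s = np.sin(2 * np.pi * 0.05 * np.arange(n))   # known signal shape, x = theta*s + w

estimates = []
for _ in range(10_000):                        # Monte Carlo repetitions
    x = theta_true * s + sigma * rng.standard_normal(n)
    estimates.append(np.dot(x, s) / np.dot(s, s))   # ML estimate under AWGN

sample_var = np.var(estimates)
crlb = sigma**2 / np.dot(s, s)                 # CRLB for this linear-in-theta model
print(f"sample variance = {sample_var:.3e}, CRLB = {crlb:.3e}")
```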
7

Reigota, Nilvana dos Santos. "Comparação da transformada wavelet discreta e da transformada do cosseno, para compressão de imagens de impressão digital." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/18/18152/tde-27042007-101810/.

Full text
Abstract:
Este trabalho tem por objetivo comparar os seguintes métodos de compressão de imagens de impressão digital: transformada discreta do cosseno (DCT), transformada de wavelets de Haar, transformada de wavelets de Daubechies e transformada de wavelets de quantização escalar (WSQ). O propósito da comparação é identificar o método que resulta numa menor perda de dados, para a maior taxa de compressão possível. São utilizadas as seguintes métricas para avaliação da qualidade da imagem para os métodos: erro quadrático médio (ERMS), a relação sinal e ruído (SNR) e a relação sinal ruído de pico (PSNR). Para as métricas utilizadas a DCT apresentou os melhores resultados, seguida pela WSQ. No entanto, o melhor tempo de compressão e a melhor qualidade das imagens recuperadas avaliadas pelo software GrFinger 4.2, foram obtidos com a técnica WSQ.
This research aims to compare the following fingerprint image compression methods: the discrete cosine transform (DCT), the Haar wavelet transform, the Daubechies wavelet transform and wavelet scalar quantization (WSQ). The main interest is to find the technique with the smallest distortion and the highest compression ratio. Image quality is measured using the peak signal-to-noise ratio (PSNR), the signal-to-noise ratio (SNR) and the root mean square error (ERMS). Image quality under these metrics showed the best results for the DCT, followed by the WSQ, although the WSQ had the best compression time and presented the best quality when evaluated by the GrFinger 4.2 software.
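
For reference, one common way to write the quality metrics named in the abstract (ERMS, SNR and PSNR) for 8-bit grayscale images is sketched below. Exact conventions vary between authors, so this is illustrative rather than the thesis's implementation.

```python
# Common definitions of the quality metrics mentioned in the abstract (ERMS/RMSE,
# SNR and PSNR), written for 8-bit grayscale images. Details (e.g., the SNR
# reference term) can vary between authors; treat this as one usual convention.
import numpy as np

def erms(original, reconstructed):
    e = original.astype(float) - reconstructed.astype(float)
    return float(np.sqrt(np.mean(e**2)))

def snr_db(original, reconstructed):
    e = original.astype(float) - reconstructed.astype(float)
    return float(10 * np.log10(np.sum(original.astype(float)**2) / np.sum(e**2)))

def psnr_db(original, reconstructed, peak=255.0):
    return float(20 * np.log10(peak / erms(original, reconstructed)))
```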
8

Khurram, Jassal Muhammad. "The Effect of Optimization of Error Metrics." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20471.

Full text
Abstract:
It is important for a retail company to forecast its sales in a correct and accurate way to be able to plan and evaluate sales and commercial strategies. Various forecasting techniques are available for this purpose; two popular modelling techniques are predictive modelling and econometric modelling. The models created by these techniques are used to minimize the difference between the real and the predicted values. There are several different error metrics that can be used to measure and describe this difference. Each metric focuses on different properties in the forecasts, and it is hence important which metric is used when a model is created. Most traditional techniques use the sum of squared errors, which has good mathematical properties but is not always optimal for forecasting purposes. This thesis focuses on optimization of three widely used error metrics: MAPE, WMAPE and RMSE. In particular, the metrics' protection against overfitting, which occurs when a predictive model captures noise and irregularities in the data that are not part of the sought relationship, is evaluated in this thesis. Genetic programming, a general optimization technique based on Darwin's theories of evolution, is used in this study to optimize predictive models based on each metric. The sales data of five products of ICA (a Swedish retail company) have been used to observe the effects of the optimized error metrics when creating predictive models. This study shows that all three metrics are quite poorly protected against overfitting, even if WMAPE and MAPE are slightly better protected than RMSE. WMAPE is, however, the most promising metric to use for optimization of predictive models: when evaluated against all three metrics, models optimized based on WMAPE have the best overall result. The results on training and test data show that these findings hold in spite of overfitted models.
Program: Magisterutbildning i informatik
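
The three error metrics discussed in the abstract above are commonly defined as sketched below. WMAPE is written in its usual volume-weighted form, which may differ in detail from the convention used in the thesis.

```python
# Usual definitions of the three error metrics discussed above. WMAPE is written
# in its common volume-weighted form; the exact convention used in the thesis
# may differ in detail.
import numpy as np

def mape(actual, forecast):
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((a - f) / a)) * 100)

def wmape(actual, forecast):
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sum(np.abs(a - f)) / np.sum(np.abs(a)) * 100)

def rmse(actual, forecast):
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((a - f) ** 2)))
```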
9

Laskauskas, Ramūnas. "Vaizdo kontūrų nustatymo būdų analizė." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080929_113638-76811.

Full text
Abstract:
Vaizdo kontūrų nustatymo metodų tyrimui buvo pasirinktas 100 įvairaus turinio paveikslų su įvairiu elementų dydžiu ir skaičiumi. Tyrimui buvo pasirinkti 8 populiariausi vaizdo kontūrų nustatymo metodai: Canny, Sobel, Prewitt, Roberts, Zerocross, Laplacian, LoG, Marr-Hildreth. Atliekant tyrimus visiems paveikslams, naudojant visus 8 metodus, buvo subjektyviai parinkta optimaliausia slenkstinė reikšmė. Gavus visų 100 įvairių paveikslų geriausias slenkstines reikšmes su visais 8 metodais, buvo nustatytos slenkstinių reikšmių kitimo ribos kiekvienam kontūro išskyrimo metodui. Kiekvienam paveikslui buvo pritaikyta vidutiniškai 10 slenkstinių reikšmių ir kiekvienam paveikslui buvo suskaičiuotas vidutinis kvadratinis nuokrypis (RMSE, Root Mean Square Error) su geriausiu pasirinktu kontūru.
One hundred pictures of varied content, with different sizes and numbers of elements, were chosen for the study of image edge detection methods. All of the pictures were converted into grayscale. Most edge detection methods (filters) require the image to be blurred first to reduce noise. The eight most popular edge detection methods were chosen for the evaluation: Canny, Sobel, Prewitt, Roberts, Zerocross, Laplacian, LoG and Marr-Hildreth. The Root Mean Square Error (RMSE) was computed for each edge picture against the best-chosen outline.
10

Morelli, Stefano. "Optimal pose selection for the calibration of an overconstrained Cable-Driven Parallel Robot." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
In this project an optimal pose selection method for the calibration of an overconstrained Cable-Driven Parallel Robot is presented. This manipulator belongs to a subcategory of parallel robots in which the classic rigid "legs" are replaced by cables. Cables are flexible elements that bring both advantages and disadvantages to the robot modeling; for this reason there are many open research issues, and the calibration of geometric parameters is one of them. The identification of the geometry of a robot, in particular, is usually called kinematic calibration. Many methods have been proposed in past years for the solution of this problem. Although these methods are based on calibration using different kinematic models, their robustness and reliability decrease when the robot's geometry becomes more complex, which makes the selection of the calibration poses more complicated. The position and the orientation of the end-effector in the workspace become important in terms of selection. Thus, in general, it is necessary to evaluate the robustness of the chosen calibration method, for example by means of a parameter such as the observability index. It is known from theory that maximizing this index identifies the best choice of calibration poses, and consequently using this pose set may improve the calibration process. The objective of this thesis is to analyze optimization algorithms which aim to calculate an optimal choice of poses in both quantitative and qualitative terms: quantitative, because it is of fundamental importance to understand how many poses are needed (a greater number of poses does not necessarily lead to a better result), and qualitative, because it is useful to understand whether the selected combination of poses actually gives additional information in the process of identifying the parameters.
11

Thornton, Victor. "DETERMINING TIDAL CHARACTERISTICS IN A RESTORED TIDAL WETLAND USING UNMANNED AERIAL VEHICLES AND DERIVED DATA." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5369.

Full text
Abstract:
Unmanned aerial vehicle (UAV) technology was used to determine tidal extent in Kimages Creek, a restored tidal wetland located in Charles City County, Virginia. A Sensefly eBee Real-Time Kinematic UAV equipped with the Sensor Optimized for Drone Applications (SODA) camera (20-megapixel RGB sensor) was flown during a single high and low tide event in Summer 2017. Collectively, over 1,300 images were captured and processed using Pix4D. Horizontal and vertical accuracy of models created using ground control points (GCP) ranged from 0.176 m to 0.363 m. The high tide elevation model was subtracted from the low tide using the ArcMap 10.5.1 raster calculator. The positive difference was displayed to show the portion of high tide that was above the low tide. These results show that UAVs offer numerous spatial and temporal advantages, but further research is needed to determine the best method of GCP placement in areas of similar forest structure.
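
The raster differencing step described above can be sketched with NumPy arrays standing in for the ArcMap raster calculator. The array values are illustrative, not data from the study.

```python
# Sketch of the raster differencing step described above, using NumPy arrays in
# place of the ArcMap raster calculator. Array contents are illustrative; in
# practice the two surfaces would be loaded from the photogrammetric elevation models.
import numpy as np

high_tide = np.array([[1.20, 1.05], [0.90, 0.80]])   # water-surface elevations (m)
low_tide  = np.array([[0.70, 0.68], [0.66, 0.65]])

difference = high_tide - low_tide                     # per-pixel elevation change
tidal_extent = np.where(difference > 0, difference, np.nan)  # keep the positive part only
print(tidal_extent)
```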
12

Naraharisetti, Sahasan. "Region aware DCT domain invisible robust blind watermarking for color images." Thesis, University of North Texas, 2008. https://digital.library.unt.edu/ark:/67531/metadc9748/.

Full text
Abstract:
The multimedia revolution has made a strong impact on our society. The explosive growth of the Internet and the access to this digital information generate new opportunities and challenges. The ease of editing and duplication in the digital domain has created the concern of copyright protection for content providers. Various schemes to embed secondary data in digital media have been investigated to preserve copyright and to discourage unauthorized duplication, and digital watermarking is a viable solution. This thesis proposes a novel invisible watermarking scheme: a discrete cosine transform (DCT) domain based watermark embedding and blind extraction algorithm for copyright protection of color images. Testing of the proposed watermarking scheme's robustness and security via different benchmarks proves its resilience to digital attacks. The detector's response, PSNR and RMSE results show that our algorithm has better security performance than most of the existing algorithms.
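
A minimal, single-channel illustration of DCT-domain embedding is sketched below: a few mid-frequency coefficients are perturbed by a bipolar watermark and the distortion is reported as PSNR. This is only the generic idea, not the region-aware blind scheme proposed in the thesis; the embedding strength and coefficient positions are assumptions.

```python
# Minimal single-channel illustration of DCT-domain watermark embedding: a few
# mid-frequency DCT coefficients are perturbed by a bipolar watermark and the
# distortion is reported as PSNR. This is not the region-aware blind scheme of
# the thesis, just the generic idea it builds on.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in grayscale image

coeffs = dctn(image, norm="ortho")
watermark = rng.choice([-1.0, 1.0], size=8)                  # bipolar watermark bits
band = [(8, 9), (9, 8), (10, 7), (7, 10), (11, 6), (6, 11), (12, 5), (5, 12)]  # mid-frequency positions
alpha = 2.0                                                  # embedding strength (assumed)
for w, (u, v) in zip(watermark, band):
    coeffs[u, v] += alpha * w

watermarked = idctn(coeffs, norm="ortho")
mse = np.mean((image - watermarked) ** 2)
psnr = 10 * np.log10(255.0**2 / mse)
print(f"PSNR of watermarked image: {psnr:.1f} dB")
```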
13

Chungbaek, Youngyun. "Impacts of Ignoring Nested Data Structure in Rasch/IRT Model and Comparison of Different Estimation Methods." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/77086.

Full text
Abstract:
This study investigates the impacts of ignoring nested data structure in the Rasch/1PL item response theory (IRT) model via two-level and three-level hierarchical generalized linear models (HGLM). Rasch/IRT models are frequently used in educational and psychometric research for data obtained from multistage cluster samplings, which are likely to violate the assumption of independent observations of examinees required by Rasch/IRT models. The violation of this assumption, however, is ignored in current standard practice, which applies standard Rasch/IRT models to large-scale testing data. A simulation study (Study Two) was conducted to address the effects of ignoring nested data structure in Rasch/IRT models under various conditions, following a simulation study (Study One) comparing the performance of three estimation methods commonly used in HGLM, namely Penalized Quasi-Likelihood (PQL), Laplace approximation, and Adaptive Gaussian Quadrature (AGQ), in terms of accuracy and efficiency in estimating parameters. As expected, PQL tended to produce seriously biased item difficulty estimates and ability variance estimates, whereas Laplace and AGQ were almost unbiased for both 2-level and 3-level analyses. As for the root mean squared errors (RMSE), the three methods performed without substantive differences for item difficulty estimates and ability variance estimates in both 2-level and 3-level analyses, except for level-2 ability variance estimates in 3-level analysis. Generally, Laplace and AGQ performed similarly well in terms of bias and RMSE of parameter estimates; however, Laplace exhibited a much lower convergence rate than AGQ in 3-level analyses. The results from AGQ, which produced the most accurate and stable results among the three computational methods, demonstrated that the theoretical standard errors (SE), i.e., asymptotic information-based SEs, were underestimated by at most 34% when 2-level analyses were used for data generated from a 3-level model, implying that the Type I error rate would be inflated when nested data structures are ignored in Rasch/IRT models. The underestimation of the theoretical standard errors became substantially more severe as the true ability variance increased or the number of students within schools increased, regardless of test length or the number of schools.
Ph. D.
14

Cascavilla, Francesco Paolo. "Sull'impiego di dati telerilevati per la stima del regime idrometrico in sezioni fluviali non strumentate." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
The growing availability of satellite remote-sensing data opens the way to new possibilities for monitoring hydraulic quantities (e.g., water levels, soil moisture, extent of wetted areas) and represents a resource of great interest, especially for areas poorly monitored by traditional instrumentation. The scientific literature reports numerous efforts aimed at demonstrating the possibility of exploiting such measurements for characterising the discharges, and the hydrological regime, of monitored watercourses. The aim of this thesis is to evaluate the potential of the SWOT mission (Surface Water Ocean Topography; https://swot.jpl.nasa.gov/; in orbit from 2021) for estimating river discharge on the basis of measured in-channel water levels. The analyses are carried out along the Po River, estimating discharge with three simplified approaches and reproducing the measurements expected from the satellite through a Monte Carlo statistical approach. The estimated discharges are then compared with the discharges observed at three different gauging stations along the river: Borgoforte, Sermide and Pontelagoscuro. The comparison shows an adequate estimation of discharge starting from the water levels expected from SWOT, with significant values of the Nash-Sutcliffe efficiency (between 0.86 and 0.94), higher than those expected from the altimetric levels currently available from other satellites.
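
The Nash-Sutcliffe efficiency quoted above is commonly defined as sketched below; the discharge values in the example are illustrative, not data from the thesis.

```python
# Standard definition of the Nash-Sutcliffe efficiency used above to compare
# estimated and observed discharges (values near 1 indicate good agreement).
import numpy as np

def nash_sutcliffe(observed, simulated):
    o, s = np.asarray(observed, float), np.asarray(simulated, float)
    return float(1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2))

# Illustrative values only (not data from the thesis):
print(nash_sutcliffe([900, 1200, 1500, 1100], [950, 1150, 1480, 1180]))
```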
15

Bajracharya, Dinesh. "Econometric Modeling vs Artificial Neural Networks : A Sales Forecasting Comparison." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20400.

Full text
Abstract:
Econometric and predictive modelling techniques are two popular forecasting techniques. Both of these techniques have their own advantages and disadvantages. In this thesis some econometric models are considered and compared to predictive models using sales data for five products from ICA, a Swedish retail wholesaler. The econometric models considered are the regression model, exponential smoothing, and the ARIMA model. The predictive models considered are the artificial neural network (ANN) and an ensemble of neural networks. The evaluation metrics used for the comparison are MAPE, WMAPE, MAE, RMSE, and linear correlation. The results of this thesis show that the artificial neural network is more accurate in forecasting product sales, but it does not differ much from linear regression in terms of accuracy. Therefore the linear regression model, which has the advantage of being comprehensible, can be used as an alternative to the artificial neural network. The results also show that the use of several metrics contributes to evaluating models for forecasting sales.
Program: Magisterutbildning i informatik
16

Cowmeadow, Rebecca. "Posizionamento relativo tramite tecnologia UWB di un braccio automatico." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24531/.

Full text
Abstract:
This thesis aims to study the performance of ultra-wideband technology for radio positioning, so that nodes installed on a robotic arm allow it to locate objects at short distances with the highest possible accuracy. First, several aspects of radio positioning systems are analysed (localisation methods, technologies, etc.); the UWB technology is then described, followed by the parameters used to evaluate the performance of the system (GDOP, PEB, RMSE). After a theoretical performance analysis, the measurement system used in the application is introduced, and finally the most relevant measurement results are reported and analysed.
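
One of the performance parameters mentioned, the GDOP, can be illustrated for a 2D range-based layout as sketched below. The anchor positions are assumptions, and the exact PEB and RMSE formulations used in the thesis are not reproduced.

```python
# Sketch of how a geometric dilution of precision (GDOP) figure can be obtained
# for a 2D range-based layout: build the geometry matrix of unit vectors from
# the target to each anchor and take sqrt(trace((H^T H)^-1)). Anchor positions
# are illustrative; conventions vary by author.
import numpy as np

def gdop_2d(target, anchors):
    t = np.asarray(target, float)
    a = np.asarray(anchors, float)
    u = (a - t) / np.linalg.norm(a - t, axis=1, keepdims=True)  # unit vectors (rows of H)
    return float(np.sqrt(np.trace(np.linalg.inv(u.T @ u))))

anchors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]      # UWB nodes (assumed layout)
print(gdop_2d((0.4, 0.3), anchors))
```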
17

Kwon, Hyukje. "A Monte Carlo Study of Missing Data Treatments for an Incomplete Level-2 Variable in Hierarchical Linear Models." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1303846627.

Full text
18

Boselli, Luca. "Utilizzo della simulazione dinamica per l'ottimizzazione delle logiche di controllo degli impianti tecnici a servizio di un centro per grande distribuzione." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
The objective of this project, carried out in collaboration with the Caster engineering firm, was to modify and calibrate an energy model of a non-food large-scale retail centre following a number of energy retrofit interventions. For the calibration of the model, information gathered through site inspections and interviews with the staff was used, together with data from a monitoring campaign of the energy consumption carried out from 1 April to 31 August 2018. The model was evaluated using the NMBE and the CV(RMSE), with reference to the criteria suggested by ASHRAE Guideline 14. Analyses were then performed on the calibrated model in order to optimise the control logic of the technical building systems, using the optimisation program GenOpt. The operating condition that optimised thermal comfort, evaluated by means of the PMV and PPD indices, was sought; these indices are computed by a type created specifically for this project. The operation of the systems was then optimised in order to reduce energy consumption while remaining within the ranges of the three thermal comfort classes. Finally, from the analysis of the results of the Trnsys simulations, a correlating function was derived with which to optimally manage the switching-on of the systems as a function of the climatic conditions outside and inside the building.
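
The two calibration indices mentioned above, NMBE and CV(RMSE), are commonly formulated as sketched below when following ASHRAE Guideline 14; the adjustment term p is taken here as 1, a frequent but not universal choice.

```python
# Common formulations of the two calibration indices named above (NMBE and
# CV(RMSE)), as typically written when following ASHRAE Guideline 14; here the
# adjustment term p is taken as 1, a frequent choice.
import numpy as np

def nmbe_percent(measured, simulated, p=1):
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    n = m.size
    return float(np.sum(m - s) / ((n - p) * m.mean()) * 100)

def cv_rmse_percent(measured, simulated, p=1):
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    n = m.size
    return float(np.sqrt(np.sum((m - s) ** 2) / (n - p)) / m.mean() * 100)
```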
19

Thomas, Robin Rajan. "Optimisation of adaptive localisation techniques for cognitive radio." Diss., University of Pretoria, 2012. http://hdl.handle.net/2263/27076.

Full text
Abstract:
Spectrum, environment and location awareness are key characteristics of cognitive radio (CR). Knowledge of a user's location as well as the surrounding environment type may enhance various CR tasks, such as spectrum sensing, dynamic channel allocation and interference management. This dissertation deals with the optimisation of adaptive localisation techniques for CR. The first part entails the development and evaluation of an efficient bandwidth determination (BD) model, which is a key component of the cognitive positioning system. This bandwidth efficiency is achieved using the Cramer-Rao lower bound derivations for a single-input-multiple-output (SIMO) antenna scheme. The performances of the single-input-single-output (SISO) and SIMO BD models are compared using three different generalised environmental models, viz. rural, urban and suburban areas. In the case of all three scenarios, the results reveal a marked improvement in the bandwidth efficiency for a SIMO antenna positioning scheme, especially for the 1×3 urban case, where a 62% root mean square error (RMSE) improvement over the SISO system is observed. The second part of the dissertation involves the presentation of a multiband time-of-arrival (TOA) positioning technique for CR. The RMSE positional accuracy is evaluated using a fixed and a dynamic bandwidth availability model. In the case of the fixed bandwidth availability model, the multiband TOA positioning model is initially evaluated using the two-step maximum-likelihood (TSML) location estimation algorithm for a scenario where line-of-sight represents the dominant signal path. Thereafter, a more realistic dynamic bandwidth availability model is proposed, based on data obtained from an ultra-high frequency spectrum occupancy measurement campaign. The RMSE performance is then verified using the non-linear least squares, linear least squares and TSML location estimation techniques, using five different bandwidths. The proposed multiband positioning model performs well in poor signal-to-noise ratio conditions (-10 dB to 0 dB) when compared to a single band TOA system. These results indicate the advantage of opportunistic TOA location estimation in a CR environment.
Dissertation (MEng)--University of Pretoria, 2012.
Electrical, Electronic and Computer Engineering
unrestricted
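
Among the estimators compared above, the linear least squares (LLS) TOA solver admits a compact sketch, shown below with illustrative anchor coordinates and ranges; the NLS and TSML estimators are not reproduced here.

```python
# A plain linear least squares (LLS) TOA position solver of the kind listed among
# the estimators compared above. Anchor coordinates and range estimates are
# illustrative; the NLS and TSML estimators are not reproduced here.
import numpy as np

def toa_lls(anchors, ranges):
    a = np.asarray(anchors, float)
    d = np.asarray(ranges, float)
    x1, d1 = a[0], d[0]
    # Linearise by subtracting the first range equation from the others.
    A = 2 * (a[1:] - x1)
    b = d1**2 - d[1:]**2 + np.sum(a[1:]**2, axis=1) - np.sum(x1**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0, 0), (100, 0), (0, 100), (100, 100)]
true_pos = np.array([30.0, 60.0])
ranges = np.linalg.norm(np.asarray(anchors) - true_pos, axis=1) \
         + np.random.default_rng(2).normal(0, 1.0, 4)
print(toa_lls(anchors, ranges))   # noisy estimate of (30, 60)
```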
20

Mohamed, Mostafa. "Evaluation de la qualité des modèles 3D de bâtiments en photogrammétrie numérique aérienne." Thesis, Strasbourg, 2013. http://www.theses.fr/2013STRAD037/document.

Full text
Abstract:
Les méthodes et les outils de génération automatique ou semi-automatique de modèles 3D urbains se développent rapidement, mais l’évaluation de la qualité de ces modèles et des données spatiales sur lesquelles ils s’appuient n’est que rarement abordée. Notre objectif est de proposer une approche multidimensionnelle standard pour évaluer la qualité des modèles 3D de bâtiments en 1D, 2D et 3D. Deux méthodes sont présentées pour l'évaluation 1D. La première se base sur l’analyse de l’erreur moyenne quadratique en X, Y et Z. La deuxième solution s’appuie sur les instructions parues au Journal Officiel du 30 octobre 2003 et exigeant le respect de classes de précisions. L'approche que nous proposons se penche sur le calcul d'indices de qualité fréquemment rencontrés dans la littérature. L'originalité de notre approche réside dans le fait que les modèles employés en entrée ne se limitent pas au mode raster, mais s'étendent au mode vecteur. Il semble évident que les modèles définis en mode vecteur s'avèrent plus fidèles à la réalité qu'en mode raster. Les indices de qualité 2D et 3D calculés montrent que les modèles 3D de bâtiments extraits à partir des couples d’images stéréoscopiques sont cohérents. Les modèles reconstruits à partir du LiDAR sont moins exacts. En conclusion, cette thèse a abouti à l’élaboration d’une approche d’évaluation multidimensionnelle de bâtiments en 3D. L’approche proposée dans cette thèse est adaptée et opérationnelle pour des modèles vectoriels et rasters de bâtiments 3D simplifiés
Methods and tools for automatic or semi-automatic generation of 3D city models are developing rapidly, but the quality assessment of these models and of the spatial data they rely on is rarely addressed. A comprehensive evaluation in 3D is not trivial. Our goal is to provide a standard multidimensional approach for assessing the quality of 3D building models in 1D, 2D and 3D. Two methods are applied for the 1D evaluation. The first computes Root Mean Square Errors (RMSE) based on the deviations between the two models (reference and test) in the X, Y and Z directions. The second applies the French legal text (arrêté sur les classes de précision) based on the instructions published in the Official Journal of October 30, 2003, which requires specified accuracy classes to be respected. The 2D and 3D quality indices rely on the discretization of space into pixels or voxels to measure the degree of superposition of 2D or 3D objects. The originality of this approach is that the models used as input are not limited to raster format but are extended to vector format. The statistics of the quality indices calculated for assessing the building models show that the 3D building models extracted from stereo-pairs are consistent with each other, and that the models reconstructed from LiDAR are less accurate than the models reconstructed from aerial images alone. In conclusion, the quality evaluation of 3D building models has been achieved by applying the proposed multidimensional approach, which is suitable for simplified 3D building vector and raster models created from aerial images and/or LiDAR datasets.
21

Fioresi, Adriano. "A new method to characteriz e monitoring platforms for dynamic distribution systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12387/.

Full text
Abstract:
In the operation of electricity distribution networks, many applications, such as power flow control and voltage stability, require knowledge of the state of the network. This information can be obtained by aggregating the available network measurements, even of different kinds, and applying state estimation algorithms to them. Several factors can affect the uncertainty of the state estimate when some quantities of the network are evolving dynamically. The purpose of this work is to identify the different contributions that affect the accuracy of the overall state estimation result during events of a dynamic nature, such as fast variations of the power generated by renewable energy sources and fluctuations in the power demand of network customers. The tests performed highlight the importance of paying particular attention to the effects of these dynamics in order to identify the uncertainties affecting the network monitoring system.
22

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
23

KRISHNA, CHARU. "IMAGE ENHANCEMENT USING ENTROPY MAXIMIZATION." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15093.

Full text
Abstract:
Image enhancement is one of the most interesting domains of image processing, and contrast enhancement is one of its most important techniques. One of the most widely used techniques for contrast enhancement is histogram equalization (HE). It enhances the contrast of the input image by mapping the intensity levels based on the input probability distribution function of the image. HE finds application in many fields, such as medical image processing. In general, HE flattens the histogram of the image and thus enhances the contrast of the input image; it also results in a stretching of the dynamic range. Although HE has low computational cost and high performance, it is rarely used in electronic appliances, because its direct use changes the original brightness of the image and can hence produce a distorted image. A histogram-equalized image may also contain annoying artifacts and noise. Hence many techniques for contrast enhancement were developed in order to overcome the defects of HE. These include bi-histogram brightness preserving histogram equalization (BBHE), dualistic sub-image brightness preserving histogram equalization (DSIH), minimum mean brightness error bi-histogram equalization (MMBEBHE), recursive mean separate histogram equalization (RMSHE), the entropy maximization histogram modification technique (EMHM), etc. The proposed algorithm shows better results compared to the other algorithms.
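
For context, plain global histogram equalization, the baseline that the brightness-preserving variants listed above modify, can be sketched as below for an 8-bit grayscale image.

```python
# Plain global histogram equalization via the CDF mapping, the baseline method
# that the brightness-preserving variants listed above (BBHE, RMSHE, EMHM, ...)
# modify. Written for an 8-bit grayscale image, in its simplest form.
import numpy as np

def histogram_equalize(image):
    img = np.asarray(image, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size                   # cumulative distribution in [0, 1]
    mapping = np.round(cdf * 255).astype(np.uint8)     # simplest CDF-based intensity mapping
    return mapping[img]
```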
24

Wang, Hongcheng. "Recognition, mining, synthesis and estimation (RMSE) for large-scale visual data using multilinear models /." 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3250341.

Full text
Abstract:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006.
Source: Dissertation Abstracts International, Volume: 68-02, Section: B, page: 1202. Adviser: Narendra Ahuja. Includes bibliographical references (leaves 106-112). Available on microfilm from ProQuest Information and Learning.
25

Tzeng, Han-Wei, and 曾漢偉. "Research on Comparison of Algorithms and Optimal Array Ratios for One-dimensional Phase Interferometer Based on Minimizing RMSE." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/8787na.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
106
In the context of electronic warfare, electronic countermeasure devices on military vehicles must achieve high sensitivity and high direction-finding accuracy for radio waves in order to counter advanced radar equipment effectively. Because of its low computational complexity and simple principle, the phase interferometer algorithm is often used in direction-finding technology: the direction of the signal is estimated from the phase difference received between two antennas, and an interferometer algorithm is used to resolve ambiguity. In a one-dimensional phase interferometer with a nonlinear antenna array, lengthening the baseline makes the estimated angle of the signal more accurate, but the probability of ambiguity also increases. We therefore have to choose the best trade-off to obtain the lowest root mean square error (RMSE). Using mathematical derivation, we propose an RMSE formula for the algorithms; with this formula we can simulate and choose the optimal array ratio between the receivers in the AWGN channel. In addition, we consider three different algorithms according to the value of the angle of direction, derive their RMSE formulas and compare their performance, so that the RMSE formula can be used to weigh their advantages and disadvantages. Finally, the optimal array ratio that minimizes the RMSE is proposed on the basis of this formula.
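
The relation between phase difference and direction used by a single-baseline interferometer, and the Monte Carlo estimation of its RMSE in AWGN-induced phase noise, can be sketched as below. The wavelength, baseline and phase-noise level are illustrative assumptions, and ambiguity resolution, the core of the thesis, is not modelled.

```python
# Monte Carlo sketch of a single-baseline phase interferometer: the direction is
# recovered from the phase difference via theta = arcsin(lambda*dphi/(2*pi*d)) and
# the RMSE is estimated over repeated noisy phase measurements. Ambiguity
# resolution is not modelled; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
wavelength, baseline = 0.03, 0.012        # metres (baseline < lambda/2: no ambiguity)
theta_true = np.deg2rad(20.0)
dphi_true = 2 * np.pi * baseline / wavelength * np.sin(theta_true)

phase_noise_std = np.deg2rad(5.0)         # assumed phase-measurement noise
estimates = []
for _ in range(10_000):
    dphi = dphi_true + phase_noise_std * rng.standard_normal()
    arg = np.clip(wavelength * dphi / (2 * np.pi * baseline), -1.0, 1.0)
    estimates.append(np.arcsin(arg))

rmse_deg = np.rad2deg(np.sqrt(np.mean((np.array(estimates) - theta_true) ** 2)))
print(f"DOA RMSE ≈ {rmse_deg:.2f} degrees")
```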
26

Kamranzadeh, Vahideh. "An Energy Analysis Of A Large, Multipurpose Educational Building In A Hot Climate." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10479.

Full text
Abstract:
In this project a steady-state building load for Constant Volume Terminal Reheat (CVTR), Dual Duct Constant Volume (DDCV) and Dual Duct Variable Air Volume (DDVAV) systems for the Zachry Engineering Building has been modeled. First, the thermal resistance values of the building structure were calculated. After applying some assumptions, building characteristics were determined and building loads were calculated using the diversified loads calculation method. With six months of daily data for the Zachry building, the input to the CVTR, DDCV and DDVAV Microsoft Excel code was prepared to start the simulation. The air handling units in the Zachry building are Dual Duct Variable Air Volume (DDVAV) systems. The calibration procedure compares calibration signatures with characteristic signatures in order to determine which input variables need to be changed to achieve proper calibration. Calibration signatures are the difference between measured energy consumption and simulated energy consumption as a function of temperature; characteristic signatures are the energy consumption as a function of temperature obtained by changing the value of input variables of the system. The base simulated model of the DDVAV system was changed according to the characteristic signatures of the building and adjusted to get the closest result to the measured data. The simulation method for calibration could be used for energy audits, improving energy efficiency, and fault detection. In the base model of the DDVAV system, without any changes in the input, the chilled water consumption had a Root Mean Square Error (RMSE) of 56.705577 MMBtu/day and a Mean Bias Error (MBE) of 45.763256 MMBtu/day, while hot water consumption had an RMSE of 1.9072574 MMBtu/day and an MBE of 45.763256 MMBtu/day. In the calibration process, system parameters such as zone temperature, cooling coil temperature, minimum supply air and minimum outdoor air were changed; the decisions for varying the parameters were based on the characteristic signatures provided in the project. After applying changes to the system parameters, the RMSE and MBE for both hot and chilled water consumption were significantly reduced: chilled water consumption had an RMSE of 12.749868 MMBtu/day and an MBE of 3.423188 MMBtu/day, and hot water consumption had an RMSE of 1.6790 MMBtu/day and an MBE of 0.12513 MMBtu/day.
27

Mashabela, Mahlageng Retang. "A comparison of some methods of modeling baseline hazard function in discrete survival models." Diss., 2019. http://hdl.handle.net/11602/1498.

Full text
Abstract:
MSc (Statistics)
Department of Statistics
The baseline parameter vector in a discrete-time survival model is determined by the number of time points. The larger the number of time points, the higher the dimension of the baseline parameter vector, which often leads to biased maximum likelihood estimates. One of the ways to overcome this problem is to use a simpler parametrization that contains fewer parameters. A simulation approach was used to compare the accuracy of three variants of penalised regression spline methods in smoothing the baseline hazard function. Root mean squared error (RMSE) analysis suggests that generally all the smoothing methods performed better than the model with a discrete baseline hazard function. No single smoothing method outperformed the other smoothing methods. These methods were also applied to data on age at first alcohol intake in Thohoyandou. The results from the real data application suggest that there were no significant differences amongst the estimated models. Consumption of other drugs, having a parent who drinks, being a male and having been abused in life are associated with high chances of drinking alcohol very early in life.
NRF
28

Silva, Gilson Inácio Duarte Soares. "Comparação de métodos numéricos para avaliação de opções exóticas sobre um activo subjacente." Master's thesis, 2011. http://hdl.handle.net/1822/25869.

Full text
Abstract:
Master's dissertation in Matemática Económica e Financeira (Economic and Financial Mathematics)
A avaliação de derivados financeiros tem sido uma das áreas mais estudadas no campo das Finanças. Alguns derivados financeiros, nomeadamente opções, não têm uma solução exacta para determinar o seu valor (Hull & White, 1988; Vecer, 2001). Nestes casos, é necessário recorrer a métodos numéricos para efectuar a sua avaliação. Neste sentido, este trabalho propõe-se a estudar e comparar três métodos numéricos utilizados na metodologia de avaliação de opções vanilla e opções asiáticas: Modelo Binomial, Método das Diferenças Finitas e Simulação Monte Carlo. A comparação é feita através da análise da velocidade de computação e da precisão de avaliação das opções. Os resultados obtidos mostram uma superioridade dos modelos Binomial e Diferenças Finitas em relação à Simulação Monte Carlo na avaliação de opções vanilla. Devido à maior complexidade do payoff das opções asiáticas americanas, os resultados mostram que a Simulação Monte Carlo torna-se preferível aos demais modelos.
The valuation of financial derivatives has been one of the most studied areas in the field of finance. Some financial derivatives, namely options, have no closed-form solution for their value (Hull & White, 1988; Vecer, 2001). In cases like these, it is necessary to use numerical methods for the valuation. This work proposes to study and compare three common numerical methods used in the valuation of vanilla and Asian options: the Binomial Model, the Finite Difference Method and Monte Carlo Simulation. The comparison is made by analysing the computing speed and the accuracy of the valuation of the options. The results show that the Binomial Model and the Finite Difference Method are better at pricing vanilla options than Monte Carlo Simulation. Because of the complexity of the payoff of American Asian options, Monte Carlo Simulation is superior to the other methods in that case.
L’évaluation des dérivés financiers est devenue un des domaines le plus étudiées dans le champ des Finances. Quelques dérivés financiers, notamment les options, n’ont pas de solutions exacte pour déterminer leurs valeurs (Hull & White, 1988; Vecer, 2001). Dans ces cas, il est nécessaire de se soutenir des Méthodes Numériques pour faire leurs évaluations. À cet égard, ce travail se propose à étudier et comparer trois méthodes numériques utilisées dans la méthodologie d’évaluation des options vanilla et des options asiatiques: Modèle Binominal, Méthode des Différences Finîtes et Simulation Monte Carlo. La comparaison est faîtes à travers l’analyse de la vitesse de computation et de la précision de l’évaluation des options. Les résultats montrent une supériorité des Modèle Binominal et de Différences Finîtes par rapport à la Simulation Monte Carlo, dans l’évaluation des options vanilla. À cause de la plus grande complexité de payoff des options asiatiques américaines, les résultats montrent que la simulation Monte Carlo devient, alors, supérieure aux autres modèles.
Avaliaçon di derivadus financeiro ê um di kes área mas studadu na campu di Finanças. Alguns derivadus financeiro, numeadamenti opçons, ka tem soluçon exactu pa determina sê valor (Hull & White, 1988; Vecer, 2001). Na kes kasu li, ê neçessáriu ricorri a Métudus Numéricus pa fazi sês avaliaçon. Na kel sentidu li, ês trabadju ta pritendi studa e compara tres métudu utilizadu na metodulugia di avaliaçon di opçons vanilla e asiátikas: Mudelu Binumial, Métudu di Diferenças Finitas e Simulaçon Monte Carlu. Sês comparaçon ê fetu atravêz di analisi di velocidadi di computaçon e precison di avaliaçon di opçons. Resultadus ta mostra um superioridadi di modelus Binomial e Diferenças Finitas em relaçon a Simulaçon Monte Carlo na avalia opçons vanilla. Pamodi, maior complexidadi di payoff di opçons asiátikas amerikanas, resultadus ta mostra ma Simulaçon Monte Carlo ta torna superior a kes otus mudelus.
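
One of the three methods compared, the binomial model, admits a compact sketch for a vanilla European call using the Cox-Ross-Rubinstein tree. The parameters are illustrative, and the American and Asian payoffs treated in the dissertation are not reproduced here.

```python
# Minimal Cox-Ross-Rubinstein binomial tree for a vanilla European call, one of
# the three methods compared above. Parameters are illustrative; the American and
# Asian payoffs treated in the dissertation are not reproduced here.
import numpy as np

def crr_european_call(s0, k, r, sigma, t, steps):
    dt = t / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)            # risk-neutral up probability
    disc = np.exp(-r * dt)
    # Terminal asset prices (highest to lowest) and call payoffs.
    prices = s0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    values = np.maximum(prices - k, 0.0)
    for _ in range(steps):                         # backward induction through the tree
        values = disc * (p * values[:-1] + (1 - p) * values[1:])
    return float(values[0])

print(crr_european_call(s0=100, k=100, r=0.05, sigma=0.2, t=1.0, steps=500))
```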
29

Molapo, Mojalefa Aubrey. "Employing Bayesian Vector Auto-Regression (BVAR) method as an alternative technique for forecasting tax revenue in South Africa." Diss., 2017. http://hdl.handle.net/10500/25083.

Full text