Selected scientific literature on the topic "Estimation of parameters tool"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Estimation of parameters tool".

Next to each source in the list of references there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Journal articles on the topic "Estimation of parameters tool"

1

Cortés-Benito, I., H. Rodríguez-Cortés, M. Martínez-Ramírez, Y. Tlatelpa-Osorio and J. G. Romero. "Quadrotor physical parameters online estimation". Memorias del Congreso Nacional de Control Automático 5, no. 1 (October 17, 2022): 133–39. http://dx.doi.org/10.58571/cnca.amca.2022.005.

Abstract:
Online estimation of unmanned aerial vehicles' physical parameters is an essential tool for aerodynamic and control design. This paper numerically evaluates two methods for physical parameter estimation for a quadrotor. The estimation methods are based on recently introduced techniques that relax the persistency of excitation constraint in the least-squares estimation methods.
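
As context for the least-squares framework this entry builds on, here is a minimal numpy sketch of batch least-squares parameter estimation for a linear-in-parameters model; the regressor matrix and "physical" parameter values are illustrative placeholders, not taken from the paper.

```python
# Minimal sketch: batch least-squares estimation of parameters theta
# from a linear regression model y = Phi @ theta + noise.
import numpy as np

rng = np.random.default_rng(0)
n = 200
Phi = rng.normal(size=(n, 3))              # regressor matrix (n samples, 3 parameters)
theta_true = np.array([0.8, -1.5, 2.0])    # hypothetical mass/inertia-style parameters
y = Phi @ theta_true + 0.05 * rng.normal(size=n)  # noisy measurements

# Least-squares estimate: theta_hat = argmin ||y - Phi @ theta||^2
theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(theta_hat)  # close to theta_true when the regressor is persistently exciting
```
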
2

Balakrishnan, P., and M. F. DeVries. "Sequential Estimation of Machinability Parameters for Adaptive Optimization of Machinability Data Base Systems". Journal of Engineering for Industry 107, no. 2 (May 1, 1985): 159–66. http://dx.doi.org/10.1115/1.3185980.

Abstract:
Mathematical model type machinability data base systems require suitable model building procedures to estimate the model parameters. The estimation procedure should be capable of using subjective prior information about the models and must also be capable of adapting the model parameters to the particular machining environment for which the data are needed. In this paper, the sequential Maximum A Posteriori (MAP) estimation procedure is proposed as the mathematical tool for performing these functions. Mathematical details of this estimation procedure are presented. The advantages of this method over conventional regression analysis are discussed based on the analysis of an experimental tool life data set. Details regarding the selection of the various initial values needed for starting the sequential procedure are presented. The use of prior information about the models in order to improve the parameter estimates is investigated. The adaptive capability of the procedure is analyzed using simulated tool life data. The results of this analysis indicate that the proposed sequential estimation procedure is a valuable tool for estimating machinability parameters and for the adaptive optimization of machinability data base systems.
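
The sequential MAP idea described in the abstract can be illustrated, under a linear-Gaussian assumption, by the standard recursive Bayesian update sketched below; the prior, noise level, and the "tool-life coefficient" interpretation are hypothetical, not the paper's model.

```python
# Sketch of a sequential (recursive) Bayesian/MAP update for a linear model
# y_k = x_k @ theta + noise, processing one measurement at a time.
import numpy as np

def sequential_map_update(mean, cov, x, y, noise_var):
    """One measurement update of a Gaussian prior N(mean, cov)."""
    x = np.asarray(x, dtype=float)
    S = x @ cov @ x + noise_var          # innovation variance (scalar)
    K = cov @ x / S                      # gain vector
    mean = mean + K * (y - x @ mean)     # posterior (MAP) mean
    cov = cov - np.outer(K, x @ cov)     # posterior covariance
    return mean, cov

rng = np.random.default_rng(1)
theta_true = np.array([2.0, -0.5])       # hypothetical tool-life coefficients
mean, cov = np.zeros(2), 10.0 * np.eye(2)  # vague prior (subjective information)
for _ in range(100):
    x = rng.normal(size=2)
    y = x @ theta_true + 0.1 * rng.normal()
    mean, cov = sequential_map_update(mean, cov, x, y, noise_var=0.01)
print(mean)  # converges toward theta_true as data accumulate
```
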
3

Shamine, D. M., S. W. Hong and Y. C. Shin. "Experimental Identification of Dynamic Parameters of Rolling Element Bearings in Machine Tools". Journal of Dynamic Systems, Measurement, and Control 122, no. 1 (May 11, 1998): 95–101. http://dx.doi.org/10.1115/1.482432.

Abstract:
In-situ identification is essential for estimating bearing joint parameters involved in spindle systems because of the inherent interaction between the bearings and spindle. This paper presents in-situ identification results for rolling element bearing parameters involved in machine tools by using frequency response functions (FRF’s). An indirect estimation technique is used for the estimation of unmeasured FRF’s, which are required for identification of joint parameters but are not available. With the help of an index function, which is devised for indicating the quality of estimation or identification at a particular frequency, the frequency region appropriate for identification is selected. Experiments are conducted on two different machine tool spindles. Repeatable and accurate joint coefficients are obtained for both machine tool systems.
4

Rajeev, D., D. Dinakaran and S. C. E. Singh. "Artificial neural network based tool wear estimation on dry hard turning processes of AISI4140 steel using coated carbide tool". Bulletin of the Polish Academy of Sciences Technical Sciences 65, no. 4 (August 1, 2017): 553–59. http://dx.doi.org/10.1515/bpasts-2017-0060.

Abstract:
Nowadays, the finishing operation for hardened steel parts, which have wide industrial applications, is done by hard turning. Cubic boron nitride (CBN) inserts, which are expensive, are used for hard turning. The cheaper coated carbide tool is seen as a substitute for CBN inserts in the hardness range 45–55 HRC. However, tool wear in a coated carbide tool during hard turning is a significant factor that influences the tolerance of the machined surface. An online tool wear estimation system is essential for maintaining surface quality and minimizing manufacturing cost. In this investigation, cutting tool wear estimation using an artificial neural network (ANN) is proposed. AISI4140 steel hardened to 47 HRC is used as the workpiece and a coated carbide tool is the cutting tool. Experimentation is based on full factorial design (FFD) as per design of experiments. The variations in cutting forces and vibrations are measured during the experimentation. Based on the process parameters and measured parameters, an ANN-based tool wear estimator is developed. The wear outputs from the ANN model are then tested. It was observed that the ANN model provided quite satisfactory results and can be used for online tool wear estimation.
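
In the spirit of the abstract above, here is a toy feed-forward ANN regression sketch; the feature set and the synthetic wear response are invented for illustration, and the paper's actual network architecture and data differ.

```python
# Illustrative sketch: mapping process parameters and measured signals to
# flank wear with a small feed-forward network (synthetic placeholder data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n = 300
# columns: cutting speed, feed, cutting force, vibration RMS (all synthetic)
X = rng.uniform([80, 0.05, 100, 0.1], [160, 0.2, 400, 1.0], size=(n, 4))
# hypothetical wear response with measurement noise
wear = 1e-3 * X[:, 2] + 0.05 * X[:, 3] + 1e-3 * X[:, 0] * X[:, 1] \
       + 0.01 * rng.normal(size=n)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:250], wear[:250])          # train on the first 250 experiments
print(model.score(X[250:], wear[250:])) # R^2 on held-out samples
```
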
5

Panić, Branislav, Jernej Klemenc and Marko Nagode. "Improved Initialization of the EM Algorithm for Mixture Model Parameter Estimation". Mathematics 8, no. 3 (March 7, 2020): 373. http://dx.doi.org/10.3390/math8030373.

Abstract:
A commonly used tool for estimating the parameters of a mixture model is the Expectation–Maximization (EM) algorithm, which is an iterative procedure that can serve as a maximum-likelihood estimator. The EM algorithm has well-documented drawbacks, such as the need for good initial values and the possibility of being trapped in local optima. Nevertheless, because of its appealing properties, EM plays an important role in estimating the parameters of mixture models. To overcome these initialization problems with EM, in this paper, we propose the Rough-Enhanced-Bayes mixture estimation (REBMIX) algorithm as a more effective initialization algorithm. Three different strategies are derived for dealing with the unknown number of components in the mixture model. These strategies are thoroughly tested on artificial datasets, density–estimation datasets and image–segmentation problems and compared with state-of-the-art initialization methods for the EM. Our proposal shows promising results in terms of clustering and density-estimation performance as well as in terms of computational efficiency. All the improvements are implemented in the rebmix R package.
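
For context, a minimal EM iteration for a two-component univariate Gaussian mixture is sketched below; the initial values are deliberately naive, which is exactly the weakness that REBMIX-style initialization is meant to address.

```python
# Minimal EM sketch for a two-component 1-D Gaussian mixture.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])

w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibilities of each component for each point
    dens = w * norm.pdf(x[:, None], mu, sd)        # shape (n, 2)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
print(w, mu, sd)  # should approach weights (0.6, 0.4), means (-2, 3), sds (1, 0.5)
```
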
6

Santos, Tiago, Florian Lemmerich and Denis Helic. "Bayesian estimation of decay parameters in Hawkes processes". Intelligent Data Analysis 27, no. 1 (January 30, 2023): 223–40. http://dx.doi.org/10.3233/ida-216283.

Abstract:
Hawkes processes with exponential kernels are a ubiquitous tool for modeling and predicting event times. However, estimating their decay parameter is challenging, and there is a remarkable variability among decay parameter estimates. Moreover, this variability increases substantially in cases of a small number of realizations of the process or due to sudden changes to a system under study, for example, in the presence of exogenous shocks. In this work, we demonstrate that these estimation difficulties relate to the noisy, non-convex shape of the Hawkes process’ log-likelihood as a function of the decay. To address uncertainty in the estimates, we propose to use a Bayesian approach to learn more about likely decay values. We show that our approach alleviates the decay estimation problem across a range of experiments with synthetic and real-world data. With our work, we support researchers and practitioners in their applications of Hawkes processes in general and in their interpretation of Hawkes process parameters in particular.
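
The quantity at the heart of the abstract, the exponential-kernel Hawkes log-likelihood as a function of the decay, can be sketched with the standard O(n) recursion; parameter names here are generic, not the paper's notation.

```python
# Sketch of the exponential-kernel Hawkes log-likelihood, whose noisy,
# non-convex shape in the decay parameter beta the abstract describes.
import numpy as np

def hawkes_loglik(times, T, mu, alpha, beta):
    """log L for intensity lambda(t) = mu + alpha * sum_{t_i < t} beta * exp(-beta (t - t_i))."""
    ll, A, prev = 0.0, 0.0, None   # A carries the recursive sum of decayed excitations
    for t in times:
        if prev is not None:
            A = np.exp(-beta * (t - prev)) * (A + 1.0)
        ll += np.log(mu + alpha * beta * A)
        prev = t
    # compensator: integral of the intensity over [0, T]
    ll -= mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - np.asarray(times))))
    return ll

events = [0.5, 1.2, 1.3, 4.0, 4.1]
# evaluating on a grid of decays shows how flat/noisy the profile can be
print([round(hawkes_loglik(events, 5.0, 0.5, 0.4, b), 3) for b in (0.5, 1.0, 2.0)])
```
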
7

Khangura, Rajbir Kaur, Keya Sircar and Dilpreet Singh Grewal. "Four odontometric parameters as a forensic tool in stature estimation". Journal of Forensic Dental Sciences 7, no. 2 (2015): 132. http://dx.doi.org/10.4103/0975-1475.146367.

8

Zhang, Liang, Wei Yang, Shuaifeng Zhi and Chen Yang. "Parameter Estimation Processor for K-distribution Clutter Based on Deep Learning". Journal of Physics: Conference Series 2290, no. 1 (June 1, 2022): 012095. http://dx.doi.org/10.1088/1742-6596/2290/1/012095.

Abstract:
This paper concerns the problem of parameter estimation for the K-distribution. In previous work only the shape parameter of the K-distribution is estimated, from which the scale parameter is calculated. Therefore, the accuracy of the estimated scale parameter is largely determined by the accuracy of the shape parameter estimation. In order to decouple the estimation of the scale and shape parameters, in this work deep learning is considered as the main tool, treating K-distribution parameter estimation as a regression task. Specifically, a parameter estimation processor combining a CNN with an LSTM is constructed. The ground-truth values of the two parameters are taken as labels, and the weighted losses of the two parameters construct the total loss function of the network training. The effectiveness and superiority of the proposed estimation processor are verified on simulated data and real sea clutter data.
9

SaraerToosi, Ali, and Avery E. Broderick. "Autoencoding Labeled Interpolator, Inferring Parameters from Image and Image from Parameters". Astrophysical Journal 967, no. 2 (May 29, 2024): 140. http://dx.doi.org/10.3847/1538-4357/ad3e76.

Abstract:
The Event Horizon Telescope (EHT) provides an avenue to study black hole accretion flows on event-horizon scales. Fitting a semianalytical model to EHT observations requires the construction of synthetic images, which is computationally expensive. This study presents an image generation tool in the form of a generative machine-learning model, which extends the capabilities of a variational autoencoder. This tool can rapidly and continuously interpolate between a training set of images and can retrieve the defining parameters of those images. Trained on a set of synthetic black hole images, our tool showcases success in interpolating both black hole images and their associated physical parameters. By reducing the computational cost of generating an image, this tool facilitates parameter estimation and model validation for observations of black hole systems.
10

Elanayar, Sunil, and Yung C. Shin. "Robust Tool Wear Estimation With Radial Basis Function Neural Networks". Journal of Dynamic Systems, Measurement, and Control 117, no. 4 (December 1, 1995): 459–67. http://dx.doi.org/10.1115/1.2801101.

Abstract:
In this paper, a unified method for constructing dynamic models for tool wear from prior experiments is proposed. The model approximates flank and crater wear propagation and their effects on cutting force using radial basis function neural networks. Instead of assuming a structure for the wear model and identifying its parameters, only an approximate model is obtained in terms of radial basis functions. The appearance of parameters in a linear fashion motivates a recursive least squares training algorithm. This results in a model which is available as a monitoring tool for online application. Using the identified model, a state estimator is designed based on the upperbound covariance matrix. This filter includes the errors in modeling the wear process, and hence reduces filter divergence. Simulations using the neural network for different cutting conditions show good results. Addition of pseudo noise during state estimation is used to reflect inherent process variabilities. Estimation of wear under these conditions is also shown to be accurate. Simulations performed using experimental data similarly show good results. Finally, experimental implementation of the wear monitoring system reveals a reasonable ability of the proposed monitoring scheme to track flank wear.
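
Because the RBF weights enter linearly, they can be fitted by (recursive) least squares, which is the property the abstract exploits; the sketch below uses a batch fit, with centers, width, and the synthetic "wear curve" chosen purely for illustration.

```python
# Sketch of radial-basis-function regression with linear-in-parameters weights.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
y = np.tanh(x - 5) + 0.05 * rng.normal(size=x.size)  # stand-in for a wear curve

centers = np.linspace(0, 10, 12)
width = 1.0
Phi = np.exp(-((x[:, None] - centers) ** 2) / (2 * width**2))  # Gaussian basis matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # weights by linear least squares
y_hat = Phi @ w
print(np.sqrt(np.mean((y - y_hat) ** 2)))    # small RMS residual
```
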
More sources

Theses / dissertations on the topic "Estimation of parameters tool"

1

Chen, Xiaoming. "The development of a parameter estimation tool towards fault diagnosis". The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1399563299.

2

Valkonen, Laura Elina. "The Sunyaev-Zel'dovich effect in galaxy clusters as a tool for estimating cosmological parameters". Thesis, University of Sussex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487558.

Abstract:
Clusters of galaxies provide us with a sensitive probe with which to study the Universe. Their mass function is strongly dependent on the cosmological parameters, which govern the dynamical evolution of the Universe, and they also provide a representative sample of the Universal matter distribution. The Sunyaev-Zel'dovich effect (SZE) is a promising method for detecting clusters out to their formation redshift and has also been shown to be a good estimator for cluster masses. A combination of X-ray and SZE data can also be used to measure the distance to the cluster, independently of the cosmic distance ladder, allowing a measurement of the Hubble Constant. However, the success of SZE methods is highly dependent on a detailed understanding of the physics of galaxy clusters. We have undertaken a multi-wavelength survey of 8 galaxy clusters, the Viper Sunyaev-Zel'dovich Survey (VSZS), in order to assess and highlight the issues which may be encountered by upcoming large scale SZE surveys. Such surveys will not be able to study individual clusters in great detail and will be reliant on the accuracy of scaling relations and assumed cluster models. We have therefore imaged each cluster in our sample simultaneously at three frequencies (150 GHz, 220 GHz and 280 GHz) with the Arcminute Cosmology Bolometer Array Receiver (ACBAR), and have followed up with X-ray observations (Chandra and XMM-Newton) and some optical observations (Gemini), in order to carry out a detailed analysis of the cluster ICM structure. We have made some of the highest significance detections of the SZE to date. Several clusters were detected at two frequencies, as a temperature increment at 280 GHz and a decrement at 150 GHz, and some of these clusters were also resolved by the observations. Most of the VSZS sample were detected as SZE signals for the first time. Although Abell 3667 and 1E0657-56 had been detected previously, these were now detected at two frequencies for the first time. We have added the results of the four fully analyzed VSZS clusters to the Y–T relation of Bonamente et al. (2007) and have found our points to lie well within the scatter of the relation, except for cluster A3112, which has possible radio source contamination. We have also found that cluster temperatures estimated from the Y–T relation are better overall at tracing the X-ray spectral temperature than the Lx–T derived temperatures.
3

Sokrut, Nikolay. "The Integrated Distributed Hydrological Model, ECOFLOW- a Tool for Catchment Management". Doctoral thesis, Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237.

4

Verbeek, Benjamin. "Maximum Likelihood Estimation of Hyperon Parameters in Python : Facilitating Novel Studies of Fundamental Symmetries with Modern Software Tools". Thesis, Uppsala universitet, Institutionen för materialvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446041.

Abstract:
In this project, an algorithm has been implemented in Python to estimate the parameters describing the production and decay of a spin-1/2 baryon–antibaryon pair. This decay can give clues about a fundamental asymmetry between matter and antimatter. A model-independent formalism developed by the Uppsala hadron physics group, and previously implemented in C++, has been shown to be a promising tool in the search for physics beyond the Standard Model (SM) of particle physics. The program developed in this work provides a more user-friendly alternative, and is intended to motivate further use of the formalism through a more maintainable, customizable and readable implementation. The hope is that this will expedite future research in the area of charge-parity (CP) violation and eventually lead to answers to questions such as why the universe consists of matter. A Monte Carlo integrator is used for normalization and a Python library for function minimization. The program returns an estimation of the physics parameters, including error estimation. Tests of statistical properties of the estimator, such as consistency and bias, have been performed. To speed up the implementation, the Just-In-Time compiler Numba has been employed, which resulted in a speed increase of a factor of 400 compared to plain Python code.
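
The general pattern described in the abstract (a Numba-jitted likelihood minimized with a Python optimizer) can be sketched on a deliberately simple Gaussian model; the thesis's actual hyperon amplitude likelihood is far more involved, so everything below is illustrative only.

```python
# Tiny sketch: Numba-jitted negative log-likelihood + SciPy minimizer.
import numpy as np
from numba import njit
from scipy.optimize import minimize

@njit
def nll(params, data):
    mu = params[0]
    log_sigma = params[1]          # optimize log(sigma) to keep sigma positive
    sigma = np.exp(log_sigma)
    # negative log-likelihood of N(mu, sigma), constants dropped
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * log_sigma

data = np.random.default_rng(0).normal(1.0, 2.0, 10_000)
res = minimize(lambda p: nll(p, data), x0=np.array([0.0, 0.0]))
print(res.x[0], np.exp(res.x[1]))  # approximately (1.0, 2.0)
```
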
5

Miró, Roig Antoni. "DYNAMIC MATHEMATICAL TOOLS FOR THE IDENTIFICATION OF REGULATORY STRUCTURES AND KINETIC PARAMETERS IN". Doctoral thesis, Universitat Rovira i Virgili, 2014. http://hdl.handle.net/10803/284043.

Abstract:
In this thesis we present a systematic methodology to characterize dynamic biological systems from time series data. From the work we derived three publications. In the first we developed a deterministic global optimization method based on the outer approximation for parameter estimation in dynamic biological systems. Our method is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. In the second and third publications we developed a method that is able to identify the regulatory structure and its corresponding kinetic parameters from time series data. In the second publication we defined a mixed integer dynamic optimization problem (MIDO) which minimize the Akaike information criterion. In the third publication, we adopted a multi-criteria MIDO which minimize complexity and fit simultaneously using the epsilon constraint method in which one objective is treated as the objective function while the rest are converted to auxiliary constraints. In both publications MIDO problems were reformulated to mixed integer nonlinear programming (MINLP) through the use of orthogonal collocation on finite elements where binary variables are used to model the existence of regulatory interactions.
6

Murray, Paul. "Extensions of the hit-or-miss transform for feature detection in noisy images and a novel design tool for estimating its parameters". Thesis, University of Strathclyde, 2012. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=17198.

Abstract:
The work presented in this thesis focuses on extending a transform from Mathematical Morphology, known as the Hit-or-Miss transform (HMT), in order to make it more robust for detecting features of interest in the presence of noise in digital images. The extension that is described here requires that a single parameter is determined for correct functionality. A novel design tool which allows this parameter to be accurately estimated is proposed as part of this work. An efficient method for computing the extended transform is also presented. The HMT is a well known morphological transform that is capable of identifying features in digital images. When image features contain noise, texture or some other distortion, the HMT may fail. Various researchers have extended the HMT in different ways to make it more robust to noise. The most successful, and most recent extensions of the HMT for noise robustness, use rank order operators in place of standard morphological erosions and dilations. A major issue with most of these methods is that no technique is provided for calculating the parameters that are introduced to generalise the HMT, and, in most cases, these parameters are determined empirically. In this thesis, a new conceptual interpretation of the HMT is presented which uses percentage occupancy (PO) functions to implement the erosion and dilation operators of the HMT. When implemented in this way, the strictness of these PO functions can easily be relaxed in order to allow slacker fitting of the structuring elements. Relaxing the strict conditions of the transform is shown to improve the performance of the routine when processing noisy data. This thesis also introduces a novel design tool which is derived directly from the operators that are used to implement the aforementioned PO functions. This design tool can be used to determine a suitable value for the only parameter in the proposed extension of the HMT. Further, it can be used to estimate parameters for other generalisations of the HMT that have been described in the literature in order to improve their noise robustness. The power of the proposed technique is demonstrated and tested using sets of very noisy images. Further, a number of comparisons are performed in order to validate the method that is introduced in this work when compared with the most recent extensions of the HMT. One drawback with this method is that a direct implementation of the technique is computationally expensive. However, it is possible to implement the proposed method using rank-order filters in place of the percentage occupancy functions. Rank order filters are used in a multitude of image processing tasks. Their application can range from simple pre-processing tasks which aim to reduce/remove noise, to more complex problems where such filters can be used in combination to detect and segment image features. There is, therefore, a need to develop fast algorithms to compute the output of this class of filter in general. A number of methods for efficiently computing the output of specific rank order filters have been presented over the years. For example, numerous fast algorithms exist that can be used for calculating the output of the median filter. Fast algorithms for calculating morphological erosions and dilations - which, like the median filter, are a special case of the more general rank order filter - have also been proposed. 
In this thesis, these techniques are extended and combined such that it is possible to efficiently compute any rank, using any arbitrarily shaped window, making it possible to quickly compute the output of any rank order filter. The fast algorithm which is described is compared to an optimised technique for computing the output of this class of filter, and significant gains in speed are demonstrated when using the proposed technique. Further, it is shown that this efficient filtering algorithm can be used to produce an extremely fast implementation of the generalised HMT that is described in this work. The fast generalised HMT is compared with a number of other extensions and generalisations of the HMT that have been proposed in the literature over the years.
7

Sahin, Haci Bayram. "Analysing Design Parameters Of Hydroelectric Power Plant Projects To Develop Cost Decision Models By Using Regression And Neural Network Tools". Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611462/index.pdf.

Abstract:
Energy is increasingly becoming more important in today's world. Rising energy consumption, driven by technological development and the earth's dense population, contributes to the greenhouse effect. One of the most valuable energy sources is hydro energy. Because energy sources are limited and energy usage is excessive, the cost of energy is rising. There are many ways to generate electricity. Among the electricity generation units, hydroelectric power plants are very important, since they are renewable energy sources and have no fuel cost. Electricity is one of the most expensive inputs in production. Every hydro energy potential should be considered when making an investment in that potential. To decide whether a hydroelectric power plant investment is feasible or not, the project cost and the amount of electricity generated should be precisely estimated. This study is about cost estimation of hydroelectric power plant projects. Many design parameters and the complexity of construction affect the cost of hydroelectric power plant projects. In this thesis fifty-four hydroelectric power plant projects are analyzed. The data set is analyzed by using regression analysis and artificial neural network tools. As a result, two cost estimation models have been developed to determine hydroelectric power plant project cost in the early stage of the project.
8

Guerrero, José-Luis. "Robust Water Balance Modeling with Uncertain Discharge and Precipitation Data : Computational Geometry as a New Tool". Doctoral thesis, Uppsala universitet, Luft-, vatten och landskapslära, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-190686.

Abstract:
Models are important tools for understanding the hydrological processes that govern water transport in the landscape and for prediction at times and places where no observations are available. The degree of trust placed on models, however, should not exceed the quality of the data they are fed with. The overall aim of this thesis was to tune the modeling process to account for the uncertainty in the data, by identifying robust parameter values using methods from computational geometry. The methods were developed and tested on data from the Choluteca River basin in Honduras. Quality control of precipitation and discharge data resulted in a rejection of 22% of daily raingage data and the complete removal of one out of the seven discharge stations analyzed. The raingage network was not found sufficient to capture the spatial and temporal variability of precipitation in the Choluteca River basin. The temporal variability of discharge was evaluated through a Monte Carlo assessment of the rating-equation parameter values over a moving time window of stage-discharge measurements. All hydrometric stations showed considerable temporal variability in the stage-discharge relationship, which was largest for low flows, albeit with no common trend. The problem with limited data quality was addressed by identifying robust model parameter values within the set of well-performing (behavioral) parameter-value vectors with computational-geometry methods. The hypothesis that geometrically deep parameter-value vectors within the behavioral set were hydrologically robust was tested, and verified, using two depth functions. Deep parameter-value vectors tended to perform better than shallow ones, were less sensitive to small changes in their values, and were better suited to temporal transfer. Depth functions rank multidimensional data. Methods to visualize the multivariate distribution of behavioral parameters based on the ranked values were developed. It was shown that, by projecting along a common dimension, the multivariate distribution of behavioral parameters for models of varying complexity could be compared using the proposed visualization tools. This has a potential to aid in the selection of an adequate model structure considering the uncertainty in the data. These methods made it possible to quantify observational uncertainties. Geometric methods have only recently begun to be used in hydrology. It was shown that they can be used to identify robust parameter values, and some of their potential uses were highlighted.
9

Sowgath, Md Tanvir. "Neural network based hybrid modelling and MINLP based optimisation of MSF desalination process within gPROMS : development of neural network based correlations for estimating temperature elevation due to salinity, hybrid modelling and MINLP based optimisation of design and operation parameters of MSF desalination process within gPROMS". Thesis, University of Bradford, 2007. http://hdl.handle.net/10454/10998.

Abstract:
Desalination technology provides fresh water to the arid regions around the world. The Multi-Stage Flash (MSF) distillation process has been used for many years and is now the largest sector in the desalination industry. Top Brine Temperature (TBT) (the boiling point temperature of the feed seawater in the first stage of the process) is one of the many important parameters that affect optimal design and operation of MSF processes. For a given pressure, TBT is a function of Boiling Point Temperature (BPT) at zero salinity and Temperature Elevation (TE) due to salinity. Modelling plays an important role in simulation, optimisation and control of MSF processes, and within the model the calculation of TE is therefore important for each stage (including the first stage, which determines the TBT). Firstly, in this work, several Neural Network (NN) based correlations for predicting TE are developed. It is found that the NN based correlations can predict the experimental TE very closely. Also, predictions of TE by the NN based correlations were found to be good when compared to those obtained using the existing correlations from the literature. Secondly, a hybrid steady state MSF process model is developed using the gPROMS modelling tool embedding the NN based correlation. gPROMS provides an easy and flexible platform to build a process flowsheet graphically. Here a Master Model connecting (automatically) the individual unit model (brine heater, stages, etc.) equations is developed, which is used repeatedly during simulation and optimisation. The model is validated against published results. Seawater is the main raw material for MSF processes and is subject to seasonal temperature variation. With a fixed design, the model is then used to study the effect of a number of parameters (e.g. seawater and steam temperature) on the freshwater production rate. It is observed that the variation in these parameters affects the rate of production of fresh water. How the design and operation are to be adjusted to maintain a fixed demand of fresh water throughout the year (with changing seawater temperature) is also investigated via repetitive simulation. Thirdly, with a clear understanding of the interaction of design and operating parameters, simultaneous optimisation of design and operating parameters of the MSF process is considered via the application of the MINLP technique within gPROMS. Two types of optimisation problems are considered: (a) for a fixed fresh water demand throughout the year, the external heat input (a measure of operating cost) to the process is minimised; (b) for different fresh water demand throughout the year and with seasonal variation of seawater temperature, the total annualised cost of desalination is minimised. It is found that seasonal variation in seawater temperature results in significant variation in design and some of the operating parameters, but with minimum variation in process temperatures. The results also reveal the possibility of designing stand-alone flash stages which would offer flexible scheduling in terms of the connection of various units (to build up the process) and efficient maintenance of the units throughout the year as the weather condition changes. In addition, operation at low temperatures throughout the year will reduce design and operating costs in terms of low temperature materials of construction and reduced amounts of anti-scaling and anti-corrosion agents. Finally, an attempt was made to develop a hybrid dynamic MSF process model incorporating the NN based correlation for TE.
The model was validated at steady state condition using the data from the literature. Dynamic simulation with step changes in seawater and steam temperature was carried out to match the predictions by the steady state model. Dynamic optimisation problem is then formulated for the MSF process, subjected to seawater temperature change (up and down) over a period of six hours, to maximise a performance ratio by optimising the brine heater steam temperature while maintaining a fixed water demand.
10

Beek, Jaap van de. "Estimation of synchronization parameters". Licentiate thesis, Luleå tekniska universitet, Signaler och system, 1996. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-16971.

Abstract:
This thesis deals with the estimation of synchronization parameters in Orthogonal Frequency Division Multiplexing (OFDM) communication systems and in active ultrasonic measuring systems. Estimation methods for the timing and frequency offset and for the attenuation taps of the frequency selective channel are presented and investigated. In OFDM communication systems the estimation of the timing offset of the transmitted data frame is one important parameter. This offset provides the receiver with a means of synchronizing its sampling clock to that of the transmitter. A second important parameter is the offset in the carrier frequency used by the receiver to demodulate the received signal. For OFDM systems using a cyclic prefix, the joint Maximum Likelihood (ML) estimation of the timing and carrier frequency offset is introduced. The redundancy introduced by the prefix is exploited optimally. This novel method is derived for a non-dispersive channel. Its performance, however, is also evaluated for a frequency-selective Rayleigh-fading radio channel. Time dispersion causes an irreducible error floor in this estimator's performance. This error floor is the limiting factor for the applicability of the timing estimator. Depending on the requirements, it may be used in either an acquisition or a tracking mode. For the frequency estimator the error floor is low enough to allow for stable frequency tracking. A low-complex variant of the timing offset estimator is presented allowing a simple implementation. This is the ML estimator, given a 2-bit representation of the received signal as the sufficient statistics. Its performance is evaluated for a frequency-selective Rayleigh-fading radio channel and for a twisted-pair copper channel. Simulations show this estimator to have a similar error floor as the full resolution ML estimator. The problem of estimating the propagation time of a signal is also of interest in active pulse echo systems, such as are used in, e.g., radar, medical imaging, and geophysics. The Minimum Mean Squared Error (MMSE) estimator of arrival time is derived and investigated for an active airborne ultrasound measurement system. Besides performing better than the conventional Maximum a Posteriori (MAP) estimator, this method can be used to develop different estimators in situations where the system Signal to Noise Ratio (SNR) is unknown. Coherent multi-amplitude OFDM receivers generally need to compensate for a frequency selective channel in order to detect transmitted data symbols reliably. For this purpose, a channel equalizer needs to be fed estimates of the subchannel attenuations. The linear MMSE estimator of these attenuations is presented. Of all linear estimators, this estimator optimally makes use of the frequency correlation between the subchannel attenuations. Low-complex modified estimators are proposed and investigated. The proposed modifications cause an irreducible error floor for this estimator's performance, but simulations show that for SNR values up to 20 dB, the improvement of a modified estimator compared to the Least Squares (LS) estimator is at least 3 dB.
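
A simplified numpy sketch of the cyclic-prefix correlation metric behind the joint ML timing/frequency estimator studied in this thesis is given below; it assumes an AWGN-only channel, a known SNR factor rho, and illustrative sizes, so it is a demonstration of the idea rather than the thesis's full derivation.

```python
# Cyclic-prefix-based joint timing/frequency offset estimation sketch.
import numpy as np

rng = np.random.default_rng(0)
N, L, eps_true, delay = 256, 32, 0.12, 40  # FFT size, CP length, CFO, timing offset
sym = np.fft.ifft(rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N))
tx = np.concatenate([sym[-L:], sym])       # prepend the cyclic prefix
r = np.concatenate([rng.normal(scale=0.01, size=delay), tx, np.zeros(64)])
r = r * np.exp(2j * np.pi * eps_true * np.arange(r.size) / N)  # carrier offset
r += 0.05 * (rng.normal(size=r.size) + 1j * rng.normal(size=r.size))  # noise

m = np.arange(r.size - N - L)
# correlation between each candidate CP window and its copy N samples later
gamma = np.array([np.sum(r[k:k+L] * np.conj(r[k+N:k+N+L])) for k in m])
phi = np.array([0.5 * np.sum(np.abs(r[k:k+L])**2 + np.abs(r[k+N:k+N+L])**2) for k in m])
rho = 0.9  # SNR/(SNR+1), assumed known here
theta_hat = np.argmax(np.abs(gamma) - rho * phi)       # timing estimate
eps_hat = -np.angle(gamma[theta_hat]) / (2 * np.pi)    # frequency estimate
print(theta_hat, eps_hat)  # near (40, 0.12)
```
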
More sources

Books on the topic "Estimation of parameters tool"

1

Marlow, A. R. Software performance estimation tool. Manchester: UMIST, 1994.

2

Ullah, A. Nonparametric kernel estimation of econometric parameters. [Urbana, Ill.]: College of Commerce and Business Administration, University of Illinois at Urbana-Champaign, 1987.

3

McCallum, Hamish. Population parameters: Estimation for ecological models. Oxford: Blackwell Science, 2000.

4

Ildiz, Faith. Estimation of motion parameters from image sequences. Monterey, Calif: Naval Postgraduate School, 1991.

5

Zakharova, A. I. Estimation of seismicity parameters using a computer. Rotterdam: Balkema, 1986.

6

Hillegers, L. T. M. E. The estimation of parameters in functional relationship models. [Maastricht]: L.T.M.E. Hillegers, 1986.

7

Hoffmann, Kurt. Improved estimation of distribution parameters: Stein-type estimators. Stuttgart: B.G. Teubner, 1992.

8

Seber, G. A. F. The estimation of animal abundance and related parameters. 2nd ed. London: Edward Arnold, 1994.

9

Lind, Rick. Estimation of modal parameters using a wavelet-based approach. Edwards, Calif: National Aeronautics and Space Administration, Dryden Flight Research Center, 1997.

10

Palaszewski, Bo. On multiple test procedures for finding deviating parameters. Göteborg: University of Göteborg, 1993.

More sources

Book chapters on the topic "Estimation of parameters tool"

1

Ghosal, Sarbajit, Narasimha Acharya, T. Eric Abrahamson, La Moyne Porter and Hubert W. Schreier. "An Integrated Tool for Estimation of Material Model Parameters". In Application of Imaging Techniques to Mechanics of Materials and Structures, Volume 4, 89–98. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4419-9796-8_12.

2

Luo, Ruiyan, Alejandra D. Herrera-Reyes, Yena Kim, Susan Rogowski, Diana White and Alexandra Smirnova. "Estimation of Time-Dependent Transmission Rate for COVID-19 SVIRD Model Using Predictor–Corrector Algorithm". In Mathematical Modeling for Women’s Health, 213–37. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58516-6_7.

Abstract:
Stable parameter estimation is an ongoing challenge within biomathematics, especially in epidemiology. Oftentimes epidemiological models are composed of large numbers of equations and parameters. Due to high dimensionality, classic parameter estimation approaches, such as least square fitting, are computationally expensive. Additionally, the presence of observational noise and reporting errors that accompany real-time data can make these parameter estimation problems ill-posed and unstable. The recent COVID-19 pandemic highlighted the need for efficient parameter estimation tools. In this chapter, we develop a modified version of a regularized predictor–corrector algorithm aimed at stable low-cost reconstruction of infectious disease parameters. This method is applied to a new compartmental model describing COVID-19 dynamics, which accounts for vaccination and immunity loss (from vaccinated and recovered populations). Numerical simulations are carried out with synthetic and real data for COVID-19 pandemic. Based on the reconstructed disease transmission rates (and known mitigation measures), observations on historical trends of COVID-19 in the states of Georgia and California are presented. Such observations can be used to provide insights into future COVID policies.
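
To make the estimation problem concrete, the sketch below recovers a time-varying transmission rate by naive pointwise inversion of the SIR susceptible equation on clean synthetic data; this is the unstable baseline that regularized predictor–corrector methods improve on, and the SVIRD structure of the chapter is simplified to plain SIR here.

```python
# Naive recovery of beta(t) from S'(t) = -beta(t) * S * I / N (illustrative).
import numpy as np
from scipy.integrate import solve_ivp

N = 1e6
beta_true = lambda t: 0.3 + 0.1 * np.sin(t / 10)   # hypothetical transmission rate

def sir(t, y):
    S, I = y
    b = beta_true(t)
    return [-b * S * I / N, b * S * I / N - 0.1 * I]  # recovery rate 0.1 assumed

t = np.linspace(0, 100, 501)
sol = solve_ivp(sir, (0, 100), [N - 100, 100], t_eval=t, rtol=1e-8)
S, I = sol.y

dSdt = np.gradient(S, t)              # finite-difference derivative (noise-sensitive)
beta_hat = -dSdt * N / (S * I)        # pointwise inversion of the S-equation
print(np.max(np.abs(beta_hat[5:-5] - beta_true(t[5:-5]))))  # small on clean data
```
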
3

Möller, Dietmar P. F. "Parameter estimation: an advanced simulation tool in biomedicine". In Advanced Simulation in Biomedicine, 71–82. New York, NY: Springer New York, 1990. http://dx.doi.org/10.1007/978-1-4419-8614-6_4.

4

Lastovetsky, Alexey, Vladimir Rychkov and Maureen O’Flynn. "A Software Tool for Accurate Estimation of Parameters of Heterogeneous Communication Models". In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 43–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-87475-1_12.

5

Van Aert, Sandra. "Statistical Parameter Estimation Theory - A Tool for Quantitative Electron Microscopy". In Handbook of Nanoscopy, 281–308. Weinheim, Germany: Wiley-VCH Verlag GmbH & Co. KGaA, 2012. http://dx.doi.org/10.1002/9783527641864.ch8.

6

Okamura, Hiroyuki, and Tadashi Dohi. "mapfit: An R-Based Tool for PH/MAP Parameter Estimation". In Quantitative Evaluation of Systems, 105–12. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-22264-6_7.

7

Navarro, Danielle, and David Foxcroft. "8. Estimating unknown quantities from a sample". In Learning Statistics with jamovi, 139–64. Cambridge, UK: Open Book Publishers, 2025. https://doi.org/10.11647/obp.0333.08.

Abstract:
This chapter delves into the distinction between descriptive and inferential statistics, focusing on the latter’s aim of deriving knowledge about unknown population parameters from observed data. It introduces estimation theory, the first of two primary pillars of inferential statistics, following a foundational discussion on sampling theory. The chapter begins by exploring key concepts such as populations, samples, and the importance of sampling methods, distinguishing between random and biased sampling. Practical sampling methods such as simple random sampling, stratified sampling, snowball sampling, and convenience sampling are examined, with emphasis on their implications for statistical inference. The chapter underscores the criticality of understanding these concepts for designing studies and making valid inferences, noting that in real-world research, truly random samples are rare. Building on this, the chapter explains the mathematical underpinnings of estimation through the law of large numbers and the central limit theorem. These principles demonstrate how sample statistics approximate population parameters as sample sizes increase, and why the sampling distribution of the mean becomes normal irrespective of the population’s initial distribution. The chapter also discusses practical techniques for estimating population means, variances, and standard deviations, noting common biases and how to adjust for them. Finally, it introduces confidence intervals as a way to express the uncertainty associated with parameter estimates, distinguishing between frequentist and Bayesian interpretations. The content provides a robust framework for understanding how statisticians use data to estimate population characteristics and addresses foundational tools essential for applied research.
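
As a worked example of the estimation ideas this chapter covers, here is a t-based 95% confidence interval for a population mean, in the frequentist sense the chapter describes; the data are synthetic.

```python
# 95% confidence interval for a mean from a sample (t-distribution based).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sample = rng.normal(loc=100, scale=15, size=40)   # e.g. IQ-style scores

mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(sample.size)   # ddof=1: unbiased variance estimate
t_crit = stats.t.ppf(0.975, df=sample.size - 1)   # two-sided 95% critical value
print(mean - t_crit * sem, mean + t_crit * sem)   # interval should cover 100
```
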
8

Krötz, Christian Alan, Max Feldman, Gustavo Pedroso Cainelli, Gustavo Künzel, Carlos Eduardo Pereira and Ivan Müller. "Tool and Method for Obtaining End-to-End Reliability Parameters of Wireless Industrial Networks". In Analysis, Estimations, and Applications of Embedded Systems, 77–88. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26500-6_7.

9

Santarelli, Maria Filomena, Vincenzo Positano and Luigi Landini. "Dynamic PET Data Generation and Analysis Software Tool for Evaluating the SNR Dependence on Kinetic Parameters Estimation". In IFMBE Proceedings, 204–7. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-11128-5_51.

10

Funabiki, Nobuo, Chihiro Taniguchi, Kyaw Soe Lwin, Khin Khin Zaw and Wen-Chung Kao. "A Parameter Optimization Tool and Its Application to Throughput Estimation Model for Wireless LAN". In Advances in Intelligent Systems and Computing, 701–10. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61566-0_65.


Conference papers on the topic "Estimation of parameters tool"

1

Suchorucov, U. N., and A. G. Derevianchenco. "Intelligent Tool Wear Estimation for Precision Cutting Tools". In ASME 1997 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/imece1997-1153.

Abstract:
This paper presents a new algorithm for the direct inspection of cutting tool wear. The place of direct inspection is discussed. It is shown that direct inspection does not require establishing experimentally a number of correlations among the cutting process parameters for each particular cutting regime, tool and workpiece involved. The main difficulty associated with direct methods is the graphical representation of the wear topology using the results of the measurements. The basic principles associated with such representation are discussed. The cutting tool is represented by a set of initial geometrical parameters which constitutes its initial shape. Then, this set is used for comparison with that of the partially worn tool to distinguish the wear topology. A number of computer programs for the built-in computers of CNC machine tools have been developed on the basis of the proposed approach.
2

Zhong Xionghu, Song Shubiao and Pei Chengming. "Time-varying Parameters Estimation based on Kalman Particle Filter with Forgetting Factors". In EUROCON 2005 - The International Conference on "Computer as a Tool". IEEE, 2005. http://dx.doi.org/10.1109/eurcon.2005.1630264.

3

Ulsoy, A. Galip. "Estimation of Time Varying Parameters in Discrete Time Dynamic Systems: A Tool Wear Estimation Example". In 1989 American Control Conference. IEEE, 1989. http://dx.doi.org/10.23919/acc.1989.4790168.

4

Sudev, L. J., and H. V. Ravindra. "Tool Wear Estimation in Drilling Using Acoustic Emission Signal by Multiple Regression and GMDH". In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-66756.

Abstract:
The cutting tool is the only element in a machine tool that requires frequent changes due to failure. Drilling is a major material removal process in manufacturing; in fact, drills have been used widely in industry since the industrial revolution, and it was estimated that 40% of all metal removal operations in the aerospace industry are by drilling. Similar to other cutting tools, after a certain limit, drill bit wear can cause catastrophic failure that can result in considerable damage to the workpiece and even to the machine tool [1]. Hence, there is an imperative need to keep a watch on the condition of cutting tools during the machining process. Over the years, a wide variety of on-line or off-line techniques have been investigated for monitoring abnormal cutting tools. A variety of signals such as tool-tip temperature, forces, power, thrust, torque, vibrations, shock pulse, Acoustic Emission (AE), etc., have been used for monitoring tool failure by on-line techniques. The detection and monitoring of AE is commonly used to predict tool failure. The present work involves estimation of tool flank wear in drilling based on AE parameters, viz. RMS, energy, signal strength, count and frequency, by empirical methods of analysis such as Multiple Regression Analysis and the Group Method of Data Handling (GMDH). The experimental work consisted of drilling S.G. cast iron using a high-speed steel drill bit and measuring AE parameters from the workpiece with an AE measuring system for different cutting conditions. Machining was stopped at regular intervals of time and tool flank wear was measured with a toolmaker's microscope. The experimental data were subjected to simpler methods of analysis to obtain a clear insight into the signals involved. The study of AE-time plots showed a similarity with the three phases of tool wear, which implies that the measured AE parameters can be related to tool wear. Multiple Regression Analysis and GMDH methods were successful in estimating flank wear based on measured AE parameters. By Multiple Regression Analysis, better estimation was obtained at lower cutting conditions. Three criterion functions of GMDH, viz. regularity, unbiased and combined, were used for estimation with 50%, 62.5% and 75% of the data in the training set. Estimation was done up to Level-4. The results of GMDH estimation showed that the regularity criterion function correlates better for the set of input variables than the unbiased and combined criteria, and the least error of estimation was found when 75% of the data was used in the training set. The optimum level of estimation increased with the increase in the percentage of data in the training set. Comparison of the performance of Multiple Regression Analysis and GMDH indicated that estimation by the regularity criterion of GMDH had an edge over Multiple Regression Analysis.
5

Kozlov, E. A., and D. Y. Varivoda. "Dense 3D Residual Moveout Analysis as a Tool for HTI Parameters Estimation". In 65th EAGE Conference & Exhibition. European Association of Geoscientists & Engineers, 2003. http://dx.doi.org/10.3997/2214-4609-pdb.6.p086.

6

Celeska, Maja, Krste Najdenkoski, Vlatko Stoilkov, Aneta Buchkovska, Zhivko Kokolanski, and Vladimir Dimchev. "Estimation of Weibull parameters from wind measurement data by comparison of statistical methods". In IEEE EUROCON 2015 - International Conference on Computer as a Tool (EUROCON). IEEE, 2015. http://dx.doi.org/10.1109/eurocon.2015.7313684.
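For orientation on what such comparisons involve: the sketch below fits the Weibull shape k and scale c to synthetic wind speeds by two common estimators, maximum likelihood and the mean/standard-deviation empirical approximation. The data and the choice of methods are illustrative assumptions, not necessarily those compared in the paper.

```python
# Sketch: estimate Weibull parameters from wind-speed data by two methods.
from math import gamma

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wind = stats.weibull_min.rvs(c=2.0, scale=7.0, size=1000, random_state=rng)  # synthetic, m/s

# Method 1: maximum likelihood with location fixed at zero (usual for wind data)
k_mle, _, c_mle = stats.weibull_min.fit(wind, floc=0)

# Method 2: empirical approximation from the sample mean and standard deviation
k_emp = (wind.std() / wind.mean()) ** -1.086
c_emp = wind.mean() / gamma(1 + 1 / k_emp)

print(f"MLE:       k = {k_mle:.3f}, c = {c_mle:.3f}")
print(f"Empirical: k = {k_emp:.3f}, c = {c_emp:.3f}")
```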

7

Xu, Xiaopeng, Xiaochun Zhang, and Hongji Yang. "A Probability Parameter Estimation Tool in C++". In 2022 9th International Conference on Dependable Systems and Their Applications (DSA). IEEE, 2022. http://dx.doi.org/10.1109/dsa56465.2022.00075.

8

Korzun, Alexander, Evgeni Kukareko, and Anatoly Pashkevich. "Estimation of robot parameters using optical sensors". In Optical Tools for Manufacturing and Advanced Automation, edited by David P. Casasent. SPIE, 1993. http://dx.doi.org/10.1117/12.150211.

9

Bin Li and Hailong Lu. "Methods of reliability estimation for numerical control machine tool based on performance parameters". In 2011 Second International Conference on Mechanic Automation and Control Engineering (MACE). IEEE, 2011. http://dx.doi.org/10.1109/mace.2011.5987916.

10

Chethan, Y. D., Ravindra Holalu Venkatadas, and Y. T. Krishne Gowda. "Estimation of Machine Vision and Acoustic Emission Parameters for Tool Status Monitoring in Turning Using Artificial Neural Network". In ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-50445.

Abstract:
Tool status monitoring is a fundamental aspect in the evolution of production techniques. As the quality of the cutting tool is directly related to the quality of the product, the tool's status should be kept under control during machining operations. An attempt is made here to extract maximum information from images captured by machine vision and from Acoustic Emission (AE) signals acquired during turning of Inconel 718 nickel alloy. The nickel-base superalloy Inconel 718 is a high-strength, thermally resistant material; because of its excellent mechanical properties it has in recent years played an important part in the aerospace, petroleum, and nuclear energy industries. Due to the extreme toughness and work hardening characteristic of the alloy, the problem of machining Inconel 718 is one of ever-increasing magnitude. The experiments were conducted for different cutting speed and feed combinations. An image processing method, the blob analysis technique, was used to extract parameters, called features, representing the state of the cutting tool. Area and perimeter from machine vision, and AE RMS and AE count from the AE signals, were studied as features and found to be effective in tool condition monitoring. Once these features are extracted after preliminary processing of the images and AE signals, the tool status, worn out or not worn out (serviceable), is decided on the basis of the extracted features. In this study, theoretical estimation using an ANN is carried out for the machine vision parameters (wear area and perimeter) and the Acoustic Emission parameters (AE RMS and AE count). When estimating a vision parameter such as wear area, the perimeter, machining time, AE RMS, and AE count are taken as the independent variables, and vice versa, so that the approach performs well in multi-sensor situations. To identify the tool status from the measured signals, an Artificial Neural Network using a feed-forward back-propagation algorithm was adopted. The input parameters used for estimation were found to vary nonlinearly with the desired output. Training and estimation generated outputs close to the wear area observed from the machine vision approach and the AE RMS from the acoustic emission approach. The artificial neural network estimates correlate better at higher feed rates: under these conditions the vision and AE parameters take larger values, which may explain the better correlation.
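A feed-forward back-propagation estimator of the kind described can be sketched with scikit-learn. The feature set (perimeter, machining time, AE RMS, AE count) and target (wear area) follow the abstract; the synthetic data, the network size, and the library choice are illustrative assumptions.

```python
# Sketch: feed-forward ANN estimating wear area from the other vision/AE
# features. Synthetic data only; not the authors' Inconel 718 measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 200
time_min = rng.uniform(1, 30, n)                       # machining time
perimeter = 0.5 * time_min + rng.normal(0, 0.5, n)     # vision feature
ae_rms = 0.1 * time_min**0.8 + rng.normal(0, 0.05, n)  # AE feature
ae_count = 50 * time_min + rng.normal(0, 20, n)        # AE feature
wear_area = 2.0 * time_min**1.1 + 0.3 * ae_rms + rng.normal(0, 1.0, n)  # nonlinear target

X = np.column_stack([perimeter, time_min, ae_rms, ae_count])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X[:150], wear_area[:150])  # train on the first 150 samples
print("R^2 on held-out samples:", round(model.score(X[150:], wear_area[150:]), 3))
```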

Reports by organizations on the topic "Estimation of parameters tool"

1

Wang, Yong-Yi. PR-350-154501-R01 Evaluation of Girth Weld Flaws in Vintage Pipelines. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), June 2019. http://dx.doi.org/10.55274/r0011600.

Abstract:
Being able to estimate the tensile strain capacity (TSC) of vintage girth welds is sometimes necessary in the integrity management of vintage pipelines. For instance, assessing girth weld integrity could be a top priority after a confirmed ground movement event. Decisions may also be needed about the disposition of a girth weld when weld anomalies are found. This project aimed to develop a TSC estimation tool for vintage girth welds. The work includes two parts: (1) the development of a TSC estimation tool via numerical analysis and (2) the evaluation of the developed tool via experimental testing. This report covers both the development and evaluation of the TSC estimation tool. The tool was developed by taking the outcome of case-specific TSC analysis using Level 4a procedures of the PRCI-CRES tensile strain models and considering large ranges of material and dimensional parameters. Curved wide plate (CWP) tests and accompanying small-scale tests were conducted to evaluate the tool. The applicability and limitations of the tool are covered in this report. The tool developed in this project has a user-friendly interface and an accompanying help manual. The tool takes user inputs, such as the geometry and material properties of pipe and weld, flaw dimensions, and pipeline pressure, and provides an estimated TSC. For inputs that might not have readily available values, recommended values are provided. The tool allows the evaluation of the impact of various input parameters on TSC. The ability to estimate the TSC enables operators to assess the integrity of vintage girth welds, thus facilitating the prioritization of maintenance activities and reducing unnecessary remediation work.
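Purely to illustrate the input-output shape described above, the stand-in below mirrors the listed user inputs in a function signature. The internal formula is a hypothetical placeholder that merely decreases TSC with relative flaw size and pressure; it is not the PRCI-CRES Level 4a model or the tool's actual logic.

```python
# Hypothetical stand-in for a TSC estimator's interface. The formula is a
# placeholder for illustration only, NOT the PRCI-CRES tensile strain model.
from dataclasses import dataclass

@dataclass
class GirthWeldCase:
    diameter_mm: float      # pipe outside diameter
    wall_mm: float          # wall thickness
    flaw_depth_mm: float    # girth weld flaw depth
    flaw_length_mm: float   # girth weld flaw length
    yield_mpa: float        # pipe yield strength (ignored by this placeholder)
    pressure_factor: float  # operating pressure as a fraction of design

def estimate_tsc(case: GirthWeldCase) -> float:
    """Return a placeholder tensile strain capacity (%) that simply
    decreases with relative flaw size and pressure, to show the interface."""
    rel_depth = case.flaw_depth_mm / case.wall_mm
    rel_length = case.flaw_length_mm / case.diameter_mm
    base = 2.0  # hypothetical baseline strain capacity, %
    tsc = base * (1 - rel_depth) * (1 - 2 * rel_length) * (1 - 0.3 * case.pressure_factor)
    return max(0.1, tsc)

print(estimate_tsc(GirthWeldCase(610.0, 7.9, 2.0, 50.0, 359.0, 0.72)))
```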
2

Bajwa, Abdullah, Tim Kroeger, and Timothy Jacobs. PR-457-17201-R04 Residual Gas Fraction Estimation Based on Measured Engine Parameters - Phase IV. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), September 2021. http://dx.doi.org/10.55274/r0012176.

Abstract:
Based on the experimental characterization of the scavenging behavior of a cross-scavenged, piston-aspirated, two-stroke, natural gas engine in phase III of the current project, a computationally inexpensive simple scavenging model was improved in this phase. Experimental results using fast nondispersive infrared (NDIR) CO2 measurements from the cylinder and the exhaust, as well as experiments using unburned fuel pre-mixed in the scavenging chamber as a tracer for short-circuiting during scavenging, were used in this phase to validate the improved model. The model represents the fundamental phenomenological characteristics revealed by those experiments. The experiments and literature show that scavenging takes place by the following phenomena: blowdown, displacement of residuals by incoming air, mixing of residuals and air, and short-circuiting of fresh air. To reflect this, the improved hybrid model features the following: isentropic blowdown, non-isothermal perfect displacement, non-isothermal perfect mixing, and concurrent direct short-circuiting of air (unmixed with residuals). The validated improved hybrid model rectified the primary shortcoming of the phase III model. By adding the discrete short-circuiting zone, trapped mass could be modeled at both medium and high crankshaft speeds, whereas the phase III model could not capture the full scope of scavenging inefficiencies at medium speed using its perfect mixing stage alone. Furthermore, using the hybrid model to predict NOx through an exponential NOx-TER curve fit revealed that the improved phase IV hybrid model predicts NOx approximately as well as the experimentally calculated TER from the phase III experiments does. Additionally, GT-Power, a 1D fluid-dynamics and engine-simulation package, was used to identify whether hybrid model tuning could be aided by relatively inexpensive 1D simulation rather than CFD or fast NDIR experiments. Using three-pressure analysis (with in-cylinder, exhaust, and scavenging chamber pressures as boundary conditions) and scavenging profiles derived from the hybrid model itself, GT-Power was shown to be a plausible tool for scavenging model tuning.
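The staged structure of such a hybrid model is compact to sketch. The code below chains a perfect-displacement phase, a perfect-mixing phase, and a direct short-circuiting fraction, in the manner of classical Benson-Brandham-style staged models; the isothermal treatment, stage boundary, and short-circuit fraction are illustrative assumptions, not the calibrated phase IV model.

```python
# Sketch: staged hybrid scavenging model (displacement, then mixing, with a
# short-circuit fraction). Parameters x and y are illustrative assumptions.
import math

def scavenging_efficiency(delivery_ratio: float, x: float = 0.4, y: float = 0.1) -> float:
    """Perfect displacement up to stage boundary x, perfect mixing beyond,
    with fraction y of the delivered charge short-circuiting directly."""
    lam_eff = (1 - y) * delivery_ratio          # charge that actually scavenges
    if lam_eff <= x:
        return lam_eff                          # perfect-displacement phase
    return 1 - (1 - x) * math.exp(x - lam_eff)  # perfect-mixing phase

for lam in (0.3, 0.6, 0.9, 1.2):
    print(f"delivery ratio {lam:.1f} -> scavenging efficiency {scavenging_efficiency(lam):.3f}")
```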
3

Bajwa, Abdullah, and Timothy Jacobs. PR-457-17201-R02 Residual Gas Fraction Estimation Based on Measured Engine Parameters. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 2019. http://dx.doi.org/10.55274/r0011558.

Abstract:
Gas exchange processes in two-stroke internal combustion engines, commonly referred to as scavenging, are responsible for removing the exhaust gases from the combustion chamber and preparing the combustible fuel-oxidizer mixture that undergoes combustion and converts the chemical energy of the fuel into mechanical work. Scavenging is a complicated phenomenon because of the simultaneous introduction of fresh gases into the engine cylinder through the intake ports and the expulsion of combustion products from the previous cycles through the exhaust ports. A non-negligible fraction of the gaseous mixture trapped in the cylinder at the conclusion of scavenging is composed of residual gases from the previous cycle. This can cause significant changes to the combustion characteristics of the mixture by changing its composition and temperature, i.e. its thermodynamic state. Thus, it is vital to have accurate knowledge of the thermodynamic state of the post-scavenging mixture to be able to reliably predict and control engine performance, efficiency, and emissions. Unfortunately, it is not practical to directly measure the trapped residual fraction for engines operating in the field. To overcome this handicap, simple scavenging models or correlations, which estimate this fraction from economically measurable engine parameters, can be developed. Two tools for estimating the trapped mixture state, a simple scavenging model and empirical correlations, were developed in this study. This report summarizes the results of phase II of a multi-phase project that aims to develop such mathematical formulations for stationary two-stroke natural gas engines using data from more advanced models and experimentation. In this phase, results from a GT-Power-based model for an Ajax E-565 single-cylinder engine are used to develop a three-stage, single-zone scavenging model and empirical correlations. Both of these formulations produce accurate estimates of the trapped mixture state; the estimates are compared to GT-Power results. In the next phase of the project, these results will be validated using experimental data. The various steps followed in the development of the model are discussed in this report, and at the end some results and recommendations for the next phase of the project are presented.
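One step in such formulations, recovering the trapped mixture temperature once a residual gas fraction has been estimated, reduces to a mass-weighted mixing calculation. The sketch below assumes equal constant specific heats for both streams and hypothetical temperatures; it is not the report's correlation.

```python
# Sketch: trapped mixture temperature from an assumed residual gas fraction,
# via a constant-cp energy balance. Values are illustrative, not the report's.
def trapped_temperature(x_res: float, t_fresh_k: float = 320.0, t_res_k: float = 900.0) -> float:
    """Mass-weighted temperature of the trapped charge for residual
    fraction x_res, assuming equal specific heats for both streams."""
    return x_res * t_res_k + (1 - x_res) * t_fresh_k

for x in (0.1, 0.2, 0.3):
    print(f"residual fraction {x:.1f} -> trapped temperature {trapped_temperature(x):.0f} K")
```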
4

Cattaneo, Matias D., Richard K. Crump, Max H. Farrell, and Yingjie Feng. Nonlinear Binscatter Methods. Federal Reserve Bank of New York, August 2024. http://dx.doi.org/10.59576/sr.1110.

Abstract:
Binned scatter plots are a powerful statistical tool for empirical work in the social, behavioral, and biomedical sciences. Available methods rely on a quantile-based partitioning estimator of the conditional mean regression function, primarily to construct flexible yet interpretable visualizations, but they can also be used to estimate treatment effects, assess uncertainty, and test substantive domain-specific hypotheses. This paper introduces novel binscatter methods based on nonlinear, possibly nonsmooth M-estimation methods, covering generalized linear, robust, and quantile regression models. We provide a host of theoretical results and practical tools for local constant estimation along with piecewise polynomial and spline approximations, including (i) optimal tuning parameter (number of bins) selection, (ii) confidence bands, and (iii) formal statistical tests regarding functional form or shape restrictions. Our main results rely on novel strong approximations for general partitioning-based estimators covering random, data-driven partitions, which may be of independent interest. We demonstrate our methods with an empirical application studying the relation between the percentage of individuals without health insurance and per capita income at the zip-code level. We provide general-purpose software packages implementing our methods in Python, R, and Stata.
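The quantile-based partitioning at the core of a binned scatter plot is easy to illustrate. The sketch below bins x by empirical quantiles and plots within-bin means (the local constant case) on synthetic data; the paper's contribution layers nonlinear M-estimation, optimal bin selection, and inference on top of this basic construction.

```python
# Sketch: basic quantile-binned scatter (local constant estimate of E[y|x]).
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 2000)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.5, 2000)  # synthetic relation

n_bins = 20  # in practice chosen by the optimal selectors the paper derives
edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
bin_x = np.array([x[idx == b].mean() for b in range(n_bins)])
bin_y = np.array([y[idx == b].mean() for b in range(n_bins)])

plt.scatter(x, y, s=4, alpha=0.2, label="raw data")
plt.plot(bin_x, bin_y, "o-", label="binned means")
plt.legend()
plt.show()
```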
5

Bonfil, David J., Daniel S. Long, and Yafit Cohen. Remote Sensing of Crop Physiological Parameters for Improved Nitrogen Management in Semi-Arid Wheat Production Systems. United States Department of Agriculture, January 2008. http://dx.doi.org/10.32747/2008.7696531.bard.

Abstract:
To reduce financial risk and N losses to the environment, fertilization methods are needed that improve NUE and increase the quality of wheat. In the literature, ample attention is given to grid-based and zone-based soil testing to determine the soil N available early in the growing season. In addition, information is available on in-season N topdressing applications as a means of improving GPC. However, the vast majority of research has focused on wheat grown under N-limiting conditions in sub-humid regions and irrigated fields. Less attention has been given to dryland wheat that is water limited. The objectives of this study were to: (1) determine the accuracy of estimating GPC of HRSW in Israel and SWWW in Oregon using on-combine optical sensors under field conditions; (2) develop a quantitative relationship between image spectral reflectance and effective crop physiological parameters; (3) develop an operational precision N management procedure that combines variable-rate N recommendations at planting, derived from maps of grain yield, GPC, and test weight, with recommendations at mid-season, derived from quantitative relationships, remote sensing, and the DSS; and (4) address the economic and technology-transfer aspects of producers' needs. Results from the research suggest that optical sensing and the DSS can be used for estimating the N status of dryland wheat and deciding whether additional N is needed to improve GPC. Significant findings include: 1. In-line NIR reflectance spectroscopy can be used to rapidly and accurately (SEP <5.0 mg g⁻¹) measure GPC of a grain stream conveyed by an auger. 2. On-combine NIR spectroscopy can be used to accurately estimate (R² < 0.88) grain test weight across fields. 3. Precision N management based on N removal increases GPC, grain yield, and profitability in rainfed wheat. 4. Hyperspectral SI and partial least squares (PLS) models have excellent potential for estimation of biomass, and water and N contents of wheat. 5. A novel heading index can be used to monitor spike emergence of wheat with classification accuracy between 53 and 83%. 6. Index MCARI/MTVI2 promises to improve remote sensing of wheat N status where water, not soil N fertility, is the main driver of plant growth. Important features are that it: (a) is computable from commercial aerospace imagery that includes the red edge waveband, (b) is sensitive to Chl and resistant to variation in crop biomass, and (c) accommodates variation in soil reflectance. Findings #1 and #2 above enable growers to further implement an efficient, low-cost PNM approach using commercially available on-combine optical sensors. Finding #3 suggests that profit opportunities may exist from PNM based on information from on-combine sensing and aerospace remote sensing. Finding #4, with its emphasis on data retrieval and accuracy, enhances the potential usefulness of a DSS as a tool for field crop management. Finding #5 enables land managers to use a DSS to ascertain at mid-season whether a wheat crop should be harvested for grain or forage. Finding #6a expands potential commercial opportunities of MS imagery and thus has special importance to the majority of aerospace imaging firms specializing in the acquisition and utilization of these data. Finding #6b on index MCARI/MTVI2 has great potential to expand the use of ground-based sensing and in-season N management to millions of hectares of land in semiarid environments where water, not N, is the main determinant of grain yield. Finding #6c demonstrates that MCARI/MTVI2 may alleviate the requirement of multiple N-rich reference strips to account for soil differences within farm fields. This simplicity will be less demanding of grower resources, promising substantially greater acceptance of sensing technologies for in-season N management.
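The MCARI/MTVI2 ratio in finding #6 can be computed directly from canopy reflectance in four bands. The sketch below follows the commonly published definitions of the two indices, with hypothetical reflectance values; verify the formulas against the original index papers before operational use.

```python
# Sketch: MCARI/MTVI2 from canopy reflectance at 550, 670, 700, and 800 nm.
# Formulas follow commonly published definitions; inputs are hypothetical.
import math

def mcari(r550: float, r670: float, r700: float) -> float:
    return ((r700 - r670) - 0.2 * (r700 - r550)) * (r700 / r670)

def mtvi2(r550: float, r670: float, r800: float) -> float:
    num = 1.5 * (1.2 * (r800 - r550) - 2.5 * (r670 - r550))
    den = math.sqrt((2 * r800 + 1) ** 2 - (6 * r800 - 5 * math.sqrt(r670)) - 0.5)
    return num / den

# Hypothetical wheat canopy reflectances (fractions)
r550, r670, r700, r800 = 0.08, 0.05, 0.12, 0.45
print("MCARI/MTVI2 =", round(mcari(r550, r670, r700) / mtvi2(r550, r670, r800), 4))
```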
6

Hertel, Thomas, David Hummels, Maros Ivanic, and Roman Keeney. How Confident Can We Be in CGE-Based Assessments of Free Trade Agreements? GTAP Working Paper, June 2003. http://dx.doi.org/10.21642/gtap.wp26.

Abstract:
With the proliferation of Free Trade Agreements (FTAs) over the past decade, demand for quantitative analysis of their likely impacts has surged. The main quantitative tool for performing such analysis is Computable General Equilibrium (CGE) modeling. Yet these models have been widely criticized for performing poorly (Kehoe, 2002) and having weak econometric foundations (McKitrick, 1998; Jorgenson, 1984). FTA results have been shown to be particularly sensitive to the trade elasticities, with small trade elasticities generating large terms of trade effects and relatively modest efficiency gains, whereas large trade elasticities lead to the opposite result. Critics are understandably wary of results being determined largely by the authors' choice of trade elasticities.

Where do these trade elasticities come from? CGE modelers typically draw these elasticities from econometric work that uses time series price variation to identify an elasticity of substitution between domestic goods and composite imports (Alaouze, 1977; Alaouze et al., 1977; Stern et al., 1976; Gallaway, McDaniel and Rivera, 2003). This approach has three problems: the use of point estimates as "truth", the magnitude of the point estimates, and estimating the relevant elasticity. First, modelers take point estimates drawn from the econometric literature while ignoring the precision of these estimates. As we will make clear below, the confidence one has in various CGE conclusions depends critically on the size of the confidence interval around parameter estimates. Standard "robustness checks" such as systematically raising or lowering the substitution parameters do not properly address this problem because they ignore information about which parameters we know with some precision and which we do not.

A second problem with most existing studies derives from the use of import price series to identify home vs. foreign substitution, which tends to systematically understate the true elasticity. This is because these estimates take price variation as exogenous when estimating the import demand functions, and ignore quality variation. When quality is high, import demand and prices will be jointly high. This biases estimated elasticities toward zero. A related point is that the fixed-weight import price series used by most authors are theoretically inappropriate for estimating the elasticities of interest. CGE modelers generally examine a nested utility structure, with domestic production substituting for a CES composite import bundle. The appropriate price series is then the corresponding CES price index among foreign varieties. Constructing such an index requires knowledge of the elasticity of substitution among foreign varieties (see below). By using a fixed-weight import price series, previous estimates place too much weight on high foreign prices and too small a weight on low foreign prices. In other words, they overstate the degree of price variation that exists relative to a CES price index. Reconciling small trade volume movements with large import price series movements requires a small elasticity of substitution. This problem, and that of unmeasured quality variation, helps explain why typical estimated elasticities are very small.

The third problem with the existing literature is that estimates taken from other researchers' studies typically employ different levels of aggregation, and exploit different sources of price variation, from what policy modelers have in mind. Employing elasticities in experiments ill-matched to their original estimation can be problematic. For example, estimates may be calculated at a higher or lower level of aggregation than the level of analysis the modeler wants to examine. Estimating substitutability across sources for paddy rice gives one a quite different answer than estimates that look at agriculture as a whole. When analyzing Free Trade Agreements, the principal policy experiment is a change in relative prices among foreign suppliers caused by lowering tariffs within the FTA. Understanding the substitution this will induce across those suppliers is critical to gauging the FTA's real effects. Using home vs. foreign elasticities rather than elasticities of substitution among imports supplied from different countries may be quite misleading. Moreover, these "sourcing" elasticities are critical for constructing composite import price series to appropriately estimate home vs. foreign substitutability.

In summary, the history of estimating the substitution elasticities governing trade flows in CGE models has been checkered at best. Clearly there is a need for improved econometric estimation of these trade elasticities that is well integrated into the CGE modeling framework. This paper provides such estimation and integration, and has several significant merits. First, we choose our experiment carefully. Our CGE analysis focuses on the prospective Free Trade Agreement of the Americas (FTAA) currently under negotiation. This is one of the most important FTAs currently "in play" in international negotiations. It also fits nicely with the source data used to estimate the trade elasticities, which is largely based on imports into North and South America. Our assessment is done in a perfectly competitive, comparative static setting in order to emphasize the role of the trade elasticities in determining the conventional gains/losses from such an FTA. This type of model is still widely used by government agencies for the evaluation of such agreements. Extensions to incorporate imperfect competition are straightforward, but involve the introduction of additional parameters (markups, extent of unexploited scale economies) as well as structural assumptions (entry/no-entry, nature of inter-firm rivalry) that introduce further uncertainty.

Since our focus is on the effects of a PTA, we estimate elasticities of substitution across multiple foreign supply sources. We do not use cross-exporter variation in prices or tariffs alone. Exporter price series exhibit a high degree of multicollinearity and, in any case, would be subject to unmeasured quality variation as described previously. Similarly, tariff variation by itself is typically unhelpful because, by their very nature, Most Favored Nation (MFN) tariffs are non-discriminatory, affecting all suppliers in the same way. Tariff preferences, where they exist, are often difficult to measure, sometimes being confounded by quantitative barriers, restrictive rules of origin, and other restrictions. Instead we employ a unique methodology and data set drawing on not only tariffs but also bilateral transportation costs for goods traded internationally (Hummels, 1999). Transportation costs vary much more widely than do tariffs, allowing much more precise estimation of the trade elasticities that are central to CGE analysis of FTAs.

We have highly disaggregated commodity trade flow data, and are therefore able to provide estimates that precisely match the commodity aggregation scheme employed in the subsequent CGE model. We follow the GTAP Version 5.0 aggregation scheme, which includes 42 merchandise trade commodities covering food products, natural resources, and manufactured goods. With the exception of two primary commodities that are not traded, we are able to estimate trade elasticities for all merchandise commodities that are significantly different from zero at the 95% confidence level. Rather than producing point estimates of the resulting welfare, export, and employment effects, we report confidence intervals. These are based on repeated solution of the model, drawing from a distribution of trade elasticity estimates constructed from the econometrically estimated standard errors. There is now a long history of CGE studies based on Systematic Sensitivity Analysis (SSA) (Harrison and Vinod, 1992; Wigle, 1991; Pagan and Shannon, 1987).
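The confidence intervals described above come from repeatedly solving the model with elasticities drawn from their estimated sampling distributions. The sketch below shows the mechanics in miniature, with a toy welfare function standing in for a CGE solve; the point estimate, standard error, and functional form are placeholders, not the paper's econometric results.

```python
# Sketch: systematic sensitivity analysis (SSA) by Monte Carlo over a trade
# elasticity. The "welfare" function is a toy stand-in for a CGE model solve.
import numpy as np

rng = np.random.default_rng(4)

def toy_welfare_gain(sigma):
    """Stylized stand-in: larger substitution elasticities give larger
    efficiency gains and smaller terms-of-trade effects (not the GTAP model)."""
    return 0.5 * sigma - 2.0 / sigma

sigma_hat, se = 5.0, 0.8               # placeholder estimate and standard error
draws = rng.normal(sigma_hat, se, 10_000)
draws = draws[draws > 0.5]             # keep economically sensible draws

gains = toy_welfare_gain(draws)
lo, hi = np.percentile(gains, [2.5, 97.5])
print(f"welfare gain: mean {gains.mean():.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```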
7

Bajwa, Abdullah, and Timothy Jacobs. PR-457-17201-R03 Residual Gas Fraction Estimation Based on Measured In-Cylinder Pressure - Phase III. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 2021. http://dx.doi.org/10.55274/r0011996.

Abstract:
An experimental study was carried out to characterize the scavenging behavior of a cross-scavenged, piston-aspirated, two-stroke, natural gas engine to aid in the development of computationally inexpensive simple scavenging models for onboard engine control by (1) studying the effects of changing operational parameters on the engine's scavenging performance, and (2) identifying the underlying phenomena driving the observed effects. Tracer-based methods were used to quantify the scavenging and trapping performance of the engine: CO2 was used as a tracer for combustion products and pre-mixed fuel was used as a fresh charge tracer. CO2 concentration was measured on a crank-angle-resolved basis both in the engine cylinder and in the exhaust using portable NDIR sensors, while unburned fuel concentration was measured in the exhaust using the FID module of a standard five-gas analyzer. It was found that scavenging took place in three stages: an initial perfect-displacement-type stage, followed by a short-circuiting stage and a perfect-mixing-type stage. Engine speed and load changes were found to have the strongest effects on the trapping and scavenging performance of the engine; spark timing effects were less significant. Changes in measured scavenging and trapping efficiencies at different operating points resulted from a combination of influences, namely (1) reduced time for gas exchange at high speeds, (2) higher expansion and scavenging pressures at high loads and retarded spark timings, and (3) phasing of the reflected 'scavenging' and 'plugging' pulses in the exhaust pipe relative to BDC and EPC, respectively. Increasing engine load made the engine scavenge significantly better, and increasing engine speed resulted in a larger fraction of the delivered air being trapped. The combined effect of these scavenging changes and changes in the engine's fuel conversion efficiency resulted in the engine running leaner at high speeds (more air delivered and higher trapping efficiency) and at low loads (higher trapped residuals). The results were then used to gauge the performance of the simple scavenging model (the hybrid model) developed in phase II of the project. While encouraging results were obtained at high speed, the trapped air mass was overestimated at medium speed, suggesting the need for a low-scavenging-efficiency sub-model. Recommendations have been made about adding a short-circuiting zone to address this limitation of the model.
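The tracer arithmetic behind these efficiency measures is short. In the sketch below, the residual fraction follows from CO2 dilution of the trapped charge and the trapping efficiency from the short-circuited share of a fresh-charge tracer; all concentration values are hypothetical, not the report's measurements.

```python
# Sketch: scavenging and trapping efficiency from tracer concentrations.
# All values are hypothetical mole/mass fractions, not measured data.
def scavenging_efficiency(co2_trapped: float, co2_burned: float,
                          co2_ambient: float = 0.0004) -> float:
    """Residual fraction from CO2 dilution; scavenging efficiency is its complement."""
    x_res = (co2_trapped - co2_ambient) / (co2_burned - co2_ambient)
    return 1.0 - x_res

def trapping_efficiency(tracer_short_circuited: float, tracer_delivered: float) -> float:
    """Fraction of the delivered fresh charge retained in the cylinder."""
    return 1.0 - tracer_short_circuited / tracer_delivered

print(f"SE = {scavenging_efficiency(0.022, 0.095):.3f}")  # CO2 mole fractions
print(f"TE = {trapping_efficiency(0.8, 10.0):.3f}")       # tracer mass units
```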
8

Evans, James, David Kretschmann, and David Green. Procedures for estimation of Weibull parameters. Madison, WI: U.S. Department of Agriculture, Forest Service, Forest Products Laboratory, 2019. http://dx.doi.org/10.2737/fpl-gtr-264.

9

Saltus, Christina, Molly Reif, and Richard Johansen. waterquality for ArcGIS Pro Toolbox. Engineer Research and Development Center (U.S.), October 2021. http://dx.doi.org/10.21079/11681/42240.

Abstract:
Monitoring water quality of small inland lakes and reservoirs is a critical component of USACE water quality management plans. However, limited resources for traditional field-based monitoring of numerous lakes and reservoirs that cover vast geographic areas often lead to reactive responses to harmful algal bloom (HAB) outbreaks. Satellite remote sensing methodologies using HAB indicators are a good low-cost alternative to traditional methods and have been proven to maximize and complement current field-based approaches while providing a synoptic view of water quality (Beck et al. 2016; Beck et al. 2017; Beck et al. 2019; Johansen et al. 2019; Mishra et al. 2019; Stumpf and Tomlinson 2007; Wang et al. 2020; Xu et al. 2019; Reif 2011). To assist USACE water quality management, we developed an ESRI ArcGIS Pro desktop software toolbox (waterquality for ArcGIS Pro) that was founded on the design and research established in the waterquality R software package (Johansen et al. 2019; Johansen 2020). The toolbox enables the detection, monitoring, and quantification of HAB indicators (chlorophyll-a, phycocyanin, and turbidity) using Sentinel-2 satellite imagery. Four tools are available: (1) to automate the download of Sentinel-2 Level-2A imagery; (2) to create a stacked image with options for cloud and non-water feature masks; (3) to apply water quality algorithms to generate relative estimations of one to three water quality parameters (chlorophyll-a, phycocyanin, and turbidity); and (4) to create linear regression graphs and statistics comparing in situ data (from field-based water sampling) to relative estimation data. This document serves as a user's guide for the waterquality for ArcGIS Pro toolbox and includes instructions on toolbox installation, descriptions of each tool's inputs and outputs, and troubleshooting guidance.
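Tool 4's comparison step, regressing in situ samples against the satellite-derived relative estimates, amounts to an ordinary linear fit with summary statistics. A sketch with hypothetical chlorophyll-a values follows; the toolbox itself performs this inside ArcGIS Pro.

```python
# Sketch: compare in situ water quality samples against satellite-derived
# relative estimates with a linear regression. Values are hypothetical.
import numpy as np
from scipy import stats

in_situ = np.array([2.1, 3.4, 5.0, 7.2, 9.8, 12.5])         # chlorophyll-a, ug/L
estimated = np.array([0.11, 0.18, 0.27, 0.35, 0.52, 0.61])  # relative index values

fit = stats.linregress(estimated, in_situ)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}, "
      f"R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f}")
```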
10

Saltus, Christina, Molly Reif, and Richard Johansen. waterquality for ArcGIS Pro Toolbox : user's guide. Engineer Research and Development Center (U.S.), September 2022. http://dx.doi.org/10.21079/11681/45362.

Abstract:
Monitoring water quality of small inland lakes and reservoirs is a critical component of the US Army Corps of Engineers (USACE) water quality management plans. However, limited resources for traditional field-based monitoring of numerous lakes and reservoirs covering vast geographic areas often lead to reactive responses to harmful algal bloom (HAB) outbreaks. Satellite remote sensing methodologies using HAB indicators are a good low-cost alternative to traditional methods and have been proven to maximize and complement current field-based approaches while providing a synoptic view of water quality (Beck et al. 2016; Beck et al. 2017; Beck et al. 2019; Johansen et al. 2019; Mishra et al. 2019; Stumpf and Tomlinson 2007; Wang et al. 2020; Xu et al. 2019; Reif 2011). To assist USACE water quality management, we developed an Environmental Systems Research Institute (ESRI) ArcGIS Pro desktop software toolbox (waterquality for ArcGIS Pro) founded on the design and research established in the waterquality R software package (Johansen et al. 2019; Johansen 2020). The toolbox enables the detection, monitoring, and quantification of HAB indicators (chlorophyll-a, phycocyanin, and turbidity) using Sentinel-2 satellite imagery. Four tools are available: (1) automating the download of Sentinel-2 Level-2A imagery, (2) creating a stacked image with options for cloud and non-water feature masks, (3) applying water quality algorithms to generate relative estimations of one to three water quality parameters (chlorophyll-a, phycocyanin, and turbidity), and (4) creating linear regression graphs and statistics comparing in situ data (from field-based water sampling) to relative estimation data. This document serves as a user's guide for the waterquality for ArcGIS Pro toolbox and includes instructions on toolbox installation, descriptions of each tool's inputs and outputs, and troubleshooting guidance.