A selection of scholarly literature on the topic "Hydrology Mathematical models Data processing"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Hydrology Mathematical models Data processing".

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided these options are listed in the metadata.

Journal articles on the topic "Hydrology Mathematical models Data processing"

1

Wu, Di, Si Jing Cai, and Wen Xiao Wang. "Numerical Modeling and Solution of Ground Water Inrush at the Baixiangshan Iron Mine". Advanced Materials Research 383-390 (November 2011): 2464–70. http://dx.doi.org/10.4028/www.scientific.net/amr.383-390.2464.

Annotation:
The Baixiangshan iron mine is a large-scale underground mine under construction, scheduled to enter production in 2011. Because the hydro-geological conditions of the mine are extremely complicated, ground water inrush happened twice during excavation of the ventilation shaft, in 2006 and 2009. The purpose of this paper is to control ground water inrush and guide safe production at the mine. Hence, the ground water state was first analyzed from borehole and hydro-geological data. After that, hydro-geological models of the ground water system were built up, and mathematical models of the ground water were established from them. With the help of the Processing Modflow for Windows (PMWIN) software, the numerical model of the ground water was set up. Then the numerical equations were solved using the SOR (Successive Over-Relaxation) iterative method. Finally, the district ground water inflow rates on the main mining levels and the districts at risk of ground water inrush were determined. The results of the numerical solution are significant for the control of ground water inrush in the Baixiangshan iron mine.
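For orientation, the SOR scheme named in the abstract sweeps the head grid cell by cell, over-relaxing each Gauss-Seidel update. A minimal sketch for a steady-state, homogeneous grid follows; it is illustrative only, as the paper's PMWIN model additionally handles heterogeneity, sources, and complex boundaries.

```python
import numpy as np

def sor_head(h, active, omega=1.7, tol=1e-6, max_iter=10_000):
    """Successive over-relaxation for a steady-state 2D head field.

    h      : 2D array of heads; boundary cells hold fixed (Dirichlet) values.
    active : boolean mask marking cells that are updated.
    omega  : relaxation factor, 1 < omega < 2.
    """
    for sweep in range(max_iter):
        max_change = 0.0
        for i in range(1, h.shape[0] - 1):
            for j in range(1, h.shape[1] - 1):
                if not active[i, j]:
                    continue
                # Gauss-Seidel average of the four neighbours, then over-relax.
                gs = 0.25 * (h[i - 1, j] + h[i + 1, j] + h[i, j - 1] + h[i, j + 1])
                change = omega * (gs - h[i, j])
                h[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            return h, sweep + 1   # converged
    return h, max_iter            # iteration cap reached

# Toy usage: a 10 m fixed head along one edge, interior cells free to update.
h0 = np.zeros((60, 40))
h0[0, :] = 10.0
mask = np.zeros_like(h0, dtype=bool)
mask[1:-1, 1:-1] = True
head, sweeps = sor_head(h0, mask)
```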
2

Sawada, Yohei, and Risa Hanazaki. "Socio-hydrological data assimilation: analyzing human–flood interactions by model–data integration". Hydrology and Earth System Sciences 24, no. 10 (October 5, 2020): 4777–91. http://dx.doi.org/10.5194/hess-24-4777-2020.

Annotation:
Abstract. In socio-hydrology, human–water interactions are simulated by mathematical models. Although the integration of these socio-hydrological models and observation data is necessary for improving the understanding of human–water interactions, the methodological development of the model–data integration in socio-hydrology is in its infancy. Here we propose applying sequential data assimilation, which has been widely used in geoscience, to a socio-hydrological model. We developed particle filtering for a widely adopted flood risk model and performed an idealized observation system simulation experiment and a real data experiment to demonstrate the potential of the sequential data assimilation in socio-hydrology. In these experiments, the flood risk model's parameters, the input forcing data, and empirical social data were assumed to be somewhat imperfect. We tested if data assimilation can contribute to accurately reconstructing the historical human–flood interactions by integrating these imperfect models and imperfect and sparsely distributed data. Our results highlight that it is important to sequentially constrain both state variables and parameters when the input forcing is uncertain. Our proposed method can accurately estimate the model's unknown parameters – even if the true model parameter temporally varies. The small amount of empirical data can significantly improve the simulation skill of the flood risk model. Therefore, sequential data assimilation is useful for reconstructing historical socio-hydrological processes by the synergistic effect of models and data.
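At its core, the sequential data assimilation applied here is a bootstrap particle filter (sequential importance resampling). A minimal sketch with a scalar state is given below; the paper's filter additionally augments the state vector with the model parameters to be estimated.

```python
import numpy as np

rng = np.random.default_rng(42)

def particle_filter(observations, n_particles, propagate, likelihood, init):
    """Bootstrap particle filter returning the posterior-mean trajectory."""
    particles = init(n_particles)           # initial ensemble, shape (n,)
    means = []
    for y in observations:
        particles = propagate(particles)    # forecast with the imperfect model
        w = likelihood(y, particles)        # weight each particle by the data
        w = w / w.sum()
        means.append(np.sum(w * particles))
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]          # resample to curb degeneracy
    return np.array(means)

# Toy usage: a random-walk state observed through Gaussian noise.
truth = np.cumsum(rng.normal(0.0, 0.1, 100))
obs = truth + rng.normal(0.0, 0.3, 100)
est = particle_filter(
    obs, 1000,
    propagate=lambda x: x + rng.normal(0.0, 0.1, x.size),
    likelihood=lambda y, x: np.exp(-0.5 * ((y - x) / 0.3) ** 2),
    init=lambda n: rng.normal(0.0, 1.0, n),
)
```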
3

Thorndahl, Søren, Thomas Einfalt, Patrick Willems, Jesper Ellerbæk Nielsen, Marie-Claire ten Veldhuis, Karsten Arnbjerg-Nielsen, Michael R. Rasmussen, and Peter Molnar. "Weather radar rainfall data in urban hydrology". Hydrology and Earth System Sciences 21, no. 3 (March 7, 2017): 1359–80. http://dx.doi.org/10.5194/hess-21-1359-2017.

Annotation:
Abstract. Application of weather radar data in urban hydrological applications has evolved significantly during the past decade as an alternative to traditional rainfall observations with rain gauges. Advances in radar hardware, data processing, numerical models, and emerging fields within urban hydrology necessitate an updated review of the state of the art in such radar rainfall data and applications. Three key areas with significant advances over the past decade have been identified: (1) temporal and spatial resolution of rainfall data required for different types of hydrological applications, (2) rainfall estimation, radar data adjustment and data quality, and (3) nowcasting of radar rainfall and real-time applications. Based on these three fields of research, the paper provides recommendations based on an updated overview of shortcomings, gains, and novel developments in relation to urban hydrological applications. The paper also reviews how the focus in urban hydrology research has shifted over the last decade to fields such as climate change impacts, resilience of urban areas to hydrological extremes, and online prediction/warning systems. It is discussed how radar rainfall data can add value to the aforementioned emerging fields in current and future applications, but also to the analysis of integrated water systems.
4

Unnikrishnan, Poornima, and V. Jothiprakash. "Data-driven multi-time-step ahead daily rainfall forecasting using singular spectrum analysis-based data pre-processing". Journal of Hydroinformatics 20, no. 3 (August 9, 2017): 645–67. http://dx.doi.org/10.2166/hydro.2017.029.

Annotation:
Abstract. Accurate forecasting of rainfall, especially at the daily time step, remains a challenging task for hydrologists despite the existence of several deterministic, stochastic and data-driven models. Several researchers have fine-tuned hydrological models by using pre-processed input data, but the improvement in predicting daily time-step rainfall is not yet up to the expected level. There is still room to improve the accuracy of rainfall predictions with an efficient data pre-processing algorithm. Singular spectrum analysis (SSA) is one such technique that has proved to be a very successful data pre-processing algorithm. In the past, the artificial neural network (ANN) model emerged as one of the most successful data-driven techniques in hydrology because of its ability to capture non-linearity and its wide variety of algorithms. This study aims at assessing the advantage of using SSA as a pre-processing algorithm in ANN models. It also compares the performance of a simple ANN model with an SSA-ANN model in forecasting single-time-step as well as multi-time-step (3-day and 7-day) ahead daily rainfall time series pertaining to the Koyna watershed, India. The model performance measures show that data pre-processing using SSA has enhanced the performance of ANN models in both single and multi-time-step ahead daily rainfall prediction.
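SSA decomposes a series through an SVD of its trajectory (Hankel) matrix; the leading reconstructed components then serve as ANN inputs. A compact sketch follows, with window length and component count chosen for illustration rather than taken from the paper.

```python
import numpy as np

def ssa_components(series, window, n_components):
    """Return the leading SSA reconstructed components of a 1D series."""
    n = len(series)
    k = n - window + 1
    # Trajectory matrix: lagged windows of the series as columns.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for r in range(n_components):
        Xr = s[r] * np.outer(U[:, r], Vt[r])          # rank-1 piece of X
        # Diagonal averaging (Hankelisation) maps the matrix back to a series.
        comp = np.array([Xr[::-1, :].diagonal(d - window + 1).mean()
                         for d in range(n)])
        comps.append(comp)
    return np.array(comps)

# Example: three leading components of a noisy daily series as model inputs.
rain = np.random.default_rng(0).gamma(0.5, 4.0, 365)
components = ssa_components(rain, window=30, n_components=3)
```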
5

Taysaev, K. K., and L. G. Petrova. "Mathematical Models Analysis of Combined Processing Methods of Parts". Materials Science Forum 992 (May 2020): 901–6. http://dx.doi.org/10.4028/www.scientific.net/msf.992.901.

Annotation:
The article examines mathematical models that ensure the effective use of combined methods for processing parts. The use of combined processing methods is always associated with the search for a technological compromise and comes down to a comparative assessment of technical and economic indicators. In this case, it is necessary to rely on mathematical models that objectively reflect the technological processes of manufacturing parts. Mathematical methods and models for optimizing the production processes applicable to combined processing methods form a complex, formalized scientific abstraction that describes the functioning of production at all stages of its implementation. In the synthesis of different processing methods, a number of conditions must be met that determine the necessary and sufficient conditions for the feasibility of implementing a particular technology within a combined processing method. Multiple regression analysis makes it possible to minimize the number of experiments needed to determine a mathematical model adequate to the processes under study and to form the baseline data for the transition from multi-factor to multi-criteria models. Using this approach, it is necessary to determine the optimal values of the objective function parameters and the influence factors in each specific technological process, which brings the removal of uncertainty in materials processing technology to a new qualitative level.
6

Liu, Yan Shu. "Long Tube Hole Straightness Data Processing Method". Advanced Materials Research 756-759 (September 2013): 1494–97. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.1494.

Annotation:
In this paper, planar straightness error assessment methods (the minimum-condition principle and the roundness-error approach), together with the least-squares circle, are extended to three-dimensional space. Two spatial mathematical models are established to handle the straightness error of long tube holes, which effectively solves the spatial straightness error problem. In practice, software based on these ideas yields highly accurate and precise data consistent with the actual situation.
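As a concrete flavour of the spatial least-squares idea, one common estimate fits an axis to measured section-centre points by SVD and takes the straightness error from the largest radial deviation; this is a sketch, not the authors' minimum-condition implementation.

```python
import numpy as np

def axis_fit_3d(points):
    """Least-squares axis through 3D hole-centre points.

    Returns the centroid, the axis direction, and a straightness
    measure: twice the largest radial deviation from the axis.
    """
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    axis = Vt[0]                                  # direction of best-fit line
    rel = P - c
    radial = np.linalg.norm(rel - np.outer(rel @ axis, axis), axis=1)
    return c, axis, 2.0 * radial.max()
```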
7

Al Salaimeh, Safwan. "MATHEMATICAL MODELS FOR COMPUTERIZED CONTROL SYSTEM". Gulustan-Black Sea Scientific Journal of Academic Research 48, no. 5 (July 5, 2019): 119–23. http://dx.doi.org/10.36962/gbssjar119.

Annotation:
The software is a set of mathematical methods and information-processing algorithms used in creating the control system, and it forms the initial data for the design of such a system. The tasks of the computerized control system are understood as the part of its computerized functions characterized by outcomes and outputs in a specific form. A control function is a commutative action of the computerized control system aimed at achieving a criterion goal. Depending on the properties of the process and its mathematical description, models can be combined into different classes. This paper shows the design of the mathematical models needed for computerized control systems (models (3)–(8)). At the same time, it presents the main classes of methods used to formulate the mathematical models: stochastic and deterministic; one-dimensional and multidimensional; linear and nonlinear; static and dynamic; stationary and non-stationary; with distributed and lumped parameters.
8

Rezaie-Balf, Mohammad, and Ozgur Kisi. "New formulation for forecasting streamflow: evolutionary polynomial regression vs. extreme learning machine". Hydrology Research 49, no. 3 (March 27, 2017): 939–53. http://dx.doi.org/10.2166/nh.2017.283.

Annotation:
Abstract Streamflow forecasting is crucial in hydrology and hydraulic engineering since it is capable of optimizing water resource systems or planning future expansion. This study investigated the performances of three different soft computing methods, multilayer perceptron neural network (MLPNN), optimally pruned extreme learning machine (OP-ELM), and evolutionary polynomial regression (EPR) in forecasting daily streamflow. Data from three different stations, Soleyman Tange, Perorich Abad, and Ali Abad located on the Tajan River of Iran were used to estimate the daily streamflow. MLPNN model was employed to determine the optimal input combinations of each station implementing evaluation criteria. In both training and testing stages in the three stations, the results of comparison indicated that the EPR technique would generally perform more efficiently than MLPNN and OP-ELM models. EPR model represented the best performance to simulate the peak flow compared to MLPNN and OP-ELM models while the MLPNN provided significantly under/overestimations. EPR models which include explicit mathematical formulations are recommended for daily streamflow forecasting which is necessary in watershed hydrology management.
9

Liang, Xin-Zhong, Hyun I. Choi, Kenneth E. Kunkel, Yongjiu Dai, Everette Joseph, Julian X. L. Wang, and Praveen Kumar. "Surface Boundary Conditions for Mesoscale Regional Climate Models". Earth Interactions 9, no. 18 (October 1, 2005): 1–28. http://dx.doi.org/10.1175/ei151.1.

Annotation:
Abstract. This paper utilizes the best available quality data from multiple sources to develop consistent surface boundary conditions (SBCs) for mesoscale regional climate model (RCM) applications. The primary SBCs include 1) soil characteristic fields (bedrock depth, and sand and clay fraction profiles), which for the first time have been consistently introduced to define 3D soil properties; 2) vegetation characteristic fields (land-cover category, static fractional vegetation cover, and time-varying leaf-plus-stem-area indices) to represent spatial and temporal variations of vegetation with improved data coherence and physical realism; and 3) daily sea surface temperature variations based on the most appropriate data currently available or other value-added alternatives. For each field, multiple data sources are compared to quantify uncertainties for selecting the best one or merged to create a consistent and complete spatial and temporal coverage. The SBCs so developed can be readily incorporated into any RCM suitable for U.S. climate and hydrology modeling studies, while the data processing and validation procedures can be more generally applied to construct SBCs for any specific domain over the globe.
10

Dozier, Jeff, and James Frew. "Computational provenance in hydrologic science: a snow mapping example". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367, no. 1890 (December 16, 2008): 1021–33. http://dx.doi.org/10.1098/rsta.2008.0187.

Annotation:
Computational provenance—a record of the antecedents and processing history of digital information—is key to properly documenting computer-based scientific research. To support investigations in hydrologic science, we produce the daily fractional snow-covered area from NASA's moderate-resolution imaging spectroradiometer (MODIS). From the MODIS reflectance data in seven wavelengths, we estimate the fraction of each 500 m pixel that snow covers. The daily products have data gaps and errors because of cloud cover and sensor viewing geometry, so we interpolate and smooth to produce our best estimate of the daily snow cover. To manage the data, we have developed the Earth System Science Server (ES3), a software environment for data-intensive Earth science, with unique capabilities for automatically and transparently capturing and managing the provenance of arbitrary computations. Transparent acquisition avoids the scientists having to express their computations in specific languages or schemas in order for provenance to be acquired and maintained. ES3 models provenance as relationships between processes and their input and output files. It is particularly suited to capturing the provenance of an evolving algorithm whose components span multiple languages and execution environments.
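The process-to-file provenance relationships described can be pictured with a toy store like the one below; the class, method, and file names are invented for illustration and are not the ES3 API.

```python
from collections import defaultdict

class ProvenanceStore:
    """Toy provenance graph: processes linked to their input/output files."""

    def __init__(self):
        self.inputs = defaultdict(set)    # process -> files it read
        self.outputs = defaultdict(set)   # process -> files it wrote

    def record(self, process, inputs, outputs):
        self.inputs[process].update(inputs)
        self.outputs[process].update(outputs)

    def lineage(self, target):
        """Walk backwards from a file to every process upstream of it."""
        seen, frontier, procs = set(), [target], []
        while frontier:
            f = frontier.pop()
            for proc, outs in self.outputs.items():
                if f in outs and proc not in seen:
                    seen.add(proc)
                    procs.append(proc)
                    frontier.extend(self.inputs[proc])
        return procs

store = ProvenanceStore()
store.record("swath_to_grid", {"MOD09GA.hdf"}, {"refl_grid.nc"})
store.record("snow_fraction", {"refl_grid.nc", "endmembers.mat"}, {"fsca.nc"})
print(store.lineage("fsca.nc"))   # ['snow_fraction', 'swath_to_grid']
```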

Dissertations on the topic "Hydrology Mathematical models Data processing"

1

Samper Calvete, F. Javier (Francisco Javier), 1958. "Statistical methods of analyzing hydrochemical, isotopic, and hydrological data from regional aquifers". Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/191115.

Annotation:
This dissertation is concerned with the development of mathematical aquifer models that combine hydrological, hydrochemical and isotopic data. One prerequisite for the construction of such models is that prior information about the variables and parameters be quantified in space and time by appropriate statistical methods. Various techniques using multivariate statistical data analyses and geostatistical methods are examined in this context. The available geostatistical methods are extended to deal with the problem at hand. In particular, a three-dimensional interactive geostatistical package has been developed for the estimation of intrinsic and nonintrinsic variables. This package is especially designed for groundwater applications and incorporates a maximum likelihood cross-validation method for estimating the parameters of the covariance function. Unique features of this maximum likelihood cross-validation method include: the use of an adjoint state method to compute the gradient of the likelihood function, the computation of the covariance of the parameter estimates and the use of identification criteria for the selection of a covariance model. In addition, it can be applied to data containing measurement errors, data regularized over variable lengths, and to nonintrinsic variables. The above methods of analysis are applied to synthetic data as well as hydrochemical and isotopic data from the Tucson aquifer in Arizona and the Madrid Basin in Spain. The dissertation also includes a discussion of the processes affecting the transport of dissolved constituents in groundwater, the mathematical formulation of the inverse solute transport problem and a proposed numerical method for its solution.
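The maximum likelihood estimation of covariance parameters at the heart of such a package can be sketched as direct minimisation of the Gaussian negative log-likelihood; the dissertation's adjoint-state gradient and cross-validation machinery are omitted here, and a zero-mean field with an exponential covariance is assumed.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def neg_log_likelihood(log_params, coords, values):
    """Gaussian NLL for C(h) = sill * exp(-h / range), zero-mean field."""
    sill, range_ = np.exp(log_params)       # log space keeps both positive
    h = cdist(coords, coords)
    C = sill * np.exp(-h / range_) + 1e-10 * np.eye(len(values))
    _, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, values)
    return 0.5 * (logdet + values @ alpha)

# Toy data: 50 sample locations in 3D with a detrended attribute.
rng = np.random.default_rng(3)
coords = rng.random((50, 3))
values = rng.normal(size=50)
fit = minimize(neg_log_likelihood, x0=np.log([1.0, 0.3]),
               args=(coords, values), method="Nelder-Mead")
sill, corr_range = np.exp(fit.x)
```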
2

Ma, Chunyan. "Mathematical security models for multi-agent distributed systems". CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2568.

Annotation:
This thesis presents a taxonomy of the security threats in agent-based distributed systems. Based on this taxonomy, a set of theories is developed to facilitate analyzing the security threats of mobile-agent systems. We propose the idea of using the developed security risk graph to model the system's vulnerabilities.
3

Ethington, Corinna A. "The robustness of LISREL estimates in structural equation models with categorical data". Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54504.

Annotation:
This study was an examination of the effect of type of correlation matrix on the robustness of LISREL maximum likelihood and unweighted least squares structural parameter estimates for models with categorical manifest variables. Two types of correlation matrices were analyzed; one containing Pearson product-moment correlations and one containing tetrachoric, polyserial, and product-moment correlations as appropriate. Using continuous variables generated according to the equations defining the population model, three cases were considered by dichotomizing some of the variables with varying degrees of skewness. When Pearson product-moment correlations were used to estimate associations involving dichotomous variables, the structural parameter estimates were biased when skewness was present in the dichotomous variables. Moreover, the degree of bias was consistent for both the maximum likelihood and unweighted least squares estimates. The standard errors of the estimates were found to be inflated, making significance tests unreliable. The analysis of mixed matrices produced average estimates that more closely approximated the model parameters except in the case where the dichotomous variables were skewed in opposite directions. However, since goodness-of-fit statistics and standard errors are not available in LISREL when tetrachoric and polyserial correlations are used, the unbiased estimates are not of practical significance. Until alternative computer programs are available that employ distribution-free estimation procedures that consider the skewness and kurtosis of the variables, researchers are ill-advised to employ LISREL in the estimation of structural equation models containing skewed categorical manifest variables.
Ph. D.
4

Weed, Richard Allen. "Computational strategies for three-dimensional flow simulations on distributed computing systems". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/12154.

5

Witkowski, Walter Roy, 1961. "SIMULATION ROUTINE FOR THE STUDY OF TRANSIENT BEHAVIOR OF CHEMICAL PROCESSES". Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/276537.

6

Sathisan, Shashi Kumar. "Encapsulation of large scale policy assisting computer models". Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/101261.

Annotation:
In the past two decades policy assisting computer models have made a tremendous impact in the analysis of national security issues and the analysis of problems in various government affairs. SURMAN (Survivability Management) is a policy assisting model that has been developed for use in national security planning. It is a large scale model formulated using the system dynamics approach of treating a problem in its entirety rather than in parts. In this thesis, an encapsulation of SURMAN is attempted so as to sharpen and focus its ability to perform policy/design evaluation. It is also aimed to make SURMAN more accessible to potential users and to provide a simple tool to the decision makers without having to resort to the mainframe computers. To achieve these objectives a personal/microcomputer version of SURMAN (PC SURMAN) and a series of curves relating inputs to outputs are developed. PC SURMAN reduces the complexity of SURMAN by dealing with generic aircraft. It details the essential survivability management parameters and their causal relationships through the life-cycle of aircraft systems. The model strives to link the decision parameters (inputs) to the measures of effectiveness (outputs). The principal decision variables identified are survivability, availability, and inventory of the aircraft system. The measures of effectiveness identified are the Increase Payload Delivered to Target Per Loss (ITDPL), Cost Elasticity of Targets Destroyed Per Loss (CETDPL), Combat Value Ratio (COMVR), Kill to Loss Ratio (KLR), and Decreased Program Life-Cycle Cost (DPLCC). The model provides an opportunity for trading off decision parameters. The trading off of survivability enhancement techniques and the defense budget allocation parameters for selecting those techniques/parameters with higher benefits and lower penalties are discussed. The information relating inputs to outputs for the tradeoff analysis is presented graphically using curves derived from experimentally designed computer runs.
M.S.
7

Yan, Hongxiang. "From Drought Monitoring to Forecasting: a Combined Dynamical-Statistical Modeling Framework". PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/3292.

Annotation:
Drought is the most costly hazard among all natural disasters. Despite the significant improvements in drought modeling over the last decade, accurate provisions of drought conditions in a timely manner is still one of the major research challenges. In order to improve the current drought monitoring and forecasting skills, this study presents a hybrid system with a combination of remotely sensed data assimilation based on particle filtering and a probabilistic drought forecasting model. Besides the proposed drought monitoring system through land data assimilation, another novel aspect of this dissertation is to seek the use of data assimilation to quantify land initial condition uncertainty rather than relying entirely on the hydrologic model or the land surface model to generate a single deterministic initial condition. Monthly to seasonal drought forecasting products are generated using the updated initial conditions. The computational complexity of the distributed data assimilation system required a modular parallel particle filtering framework which was developed and allowed for a large ensemble size in particle filtering implementation. The application of the proposed system is demonstrated with two case studies at the regional (Columbia River Basin) and the Conterminous United States. Results from both synthetic and real case studies suggest that the land data assimilation system significantly improves drought monitoring and forecasting skills. These results also show how sensitive the seasonal drought forecasting skill is to the initial conditions, which can lead to better facilitation of the state/federal drought preparation and response actions.
8

Hall, David Eric. "Transient thermal models for overhead current-carrying hardware". Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/17133.

9

Shmeleva, Nataliya V. "Making sense of cDNA: automated annotation, storing in an interactive database, mapping to genomic DNA". Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/25178.

10

West, Karen Frances. "AN EXTENSION TO THE ANALYSIS OF THE SHIFT-AND-ADD METHOD: THEORY AND SIMULATION (SPECKLE, ATMOSPHERIC TURBULENCE, IMAGE RESTORATION)". Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188021.

Annotation:
The turbulent atmosphere degrades images of objects viewed through it by introducing random amplitude and phase errors into the optical wavefront. Various methods have been devised to obtain true images of such objects, including the shift-and-add method, which is examined in detail in this work. It is shown theoretically that shift-and-add processing may preserve diffraction-limited information in the resulting image, both in the point source and extended object cases, and the probability of ghost peaks in the case of an object consisting of two point sources is discussed. Also, a convergence rate for the shift-and-add algorithm is established and simulation results are presented. The combination of shift-and-add processing and Wiener filtering is shown to provide excellent image restorations.
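The basic shift-and-add estimate analysed in the dissertation is easy to state: translate each short-exposure frame so its brightest speckle sits at a common reference position, then average. A sketch under that simple brightest-pixel rule:

```python
import numpy as np

def shift_and_add(frames):
    """Co-add frames after centring each brightest pixel on the image centre."""
    frames = np.asarray(frames, dtype=float)
    _, rows, cols = frames.shape
    centre = (rows // 2, cols // 2)
    out = np.zeros((rows, cols))
    for frame in frames:
        peak = np.unravel_index(np.argmax(frame), frame.shape)
        out += np.roll(frame, (centre[0] - peak[0], centre[1] - peak[1]),
                       axis=(0, 1))
    return out / len(frames)
```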

Books on the topic "Hydrology Mathematical models Data processing"

1

Stochastic subsurface hydrology. Englewood Cliffs, N.J: Prentice-Hall, 1993.

2

Computational hydraulics and hydrology: An illustrated dictionary. Boca Raton, Fla: CRC Press, 2004.

3

Krause, Richard E. Hydrology of the Floridan aquifer system in southeast Georgia and adjacent parts of Florida and South Carolina. [Reston, Va.?]: Dept. of the Interior, U.S. Geological Survey, 1989.

4

Frequency and risk analyses in hydrology. Littleton, Colo., U.S.A: Water Resources Publications, 1988.

5

Kidd, Robert E. Application of the precipitation-runoff model in the Warrior coal field, Alabama. [Reston, Va.]: Dept. of the Interior, U.S. Geological Survey, 1987.

6

Shu zi shui li: Application-oriented hydroinformatics. Beijing: Qing hua da xue chu ban she, 2011.

7

Fürst, Josef. GIS in Hydrologie und Wasserwirtschaft. Heidelberg: H. Wichmann, 2004.

8

Krause, Richard E. Hydrology of the Floridan aquifer system in southeast Georgia and adjacent parts of Florida and South Carolina. Washington: U.S. G.P.O., 1989.

9

Krause, Richard E. Hydrology of the Floridan aquifer system in southeast Georgia and adjacent parts of Florida and South Carolina. Washington, DC: Dept. of the Interior, 1989.

10

International Conference on Computational Methods in Water Resources (13th, 2000, Calgary, Alta.). Computational methods in water resources XIII: Proceedings of the XIII International Conference on Computational Methods in Water Resources, Calgary, Alberta, Canada, 25-29 June 2000. Edited by Laurence R. Bentley. Rotterdam: A.A. Balkema, 2000.


Book chapters on the topic "Hydrology Mathematical models Data processing"

1

Méger, Nicolas, Edoardo Pasolli, Christophe Rigotti, Emmanuel Trouvé, and Farid Melgani. "Satellite Image Time Series: Mathematical Models for Data Mining and Missing Data Restoration". In Mathematical Models for Remote Sensing Image Processing, 357–98. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66330-2_9.

2

Liao, Wenzhi, Jocelyn Chanussot, and Wilfried Philips. "Remote Sensing Data Fusion: Guided Filter-Based Hyperspectral Pansharpening and Graph-Based Feature-Level Fusion". In Mathematical Models for Remote Sensing Image Processing, 243–75. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66330-2_6.

3

Benediktsson, Jon A., Gabriele Cavallaro, Nicola Falco, Ihsen Hedhli, Vladimir A. Krylov, Gabriele Moser, Sebastiano B. Serpico, and Josiane Zerubia. "Remote Sensing Data Fusion: Markov Models and Mathematical Morphology for Multisensor, Multiresolution, and Multiscale Image Classification". In Mathematical Models for Remote Sensing Image Processing, 277–323. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66330-2_7.

4

Garza-Ulloa, Jorge. "Experiment design, data acquisition and signal processing". In Applied Biomechatronics using Mathematical Models, 179–237. Elsevier, 2018. http://dx.doi.org/10.1016/b978-0-12-812594-6.00004-4.

5

Wu, Xing, and Shaojian Zhuo. "Chinese Text Sentiment Analysis Utilizing Emotion Degree Lexicon and Fuzzy Semantic Model". In Big Data, 1077–90. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9840-6.ch048.

Annotation:
Text on the web has become a valuable source for mining and analyzing user opinions on any topic. Non-native English speakers heavily support the growing use of Network media especially in Chinese. Many sentiment analysis studies have shown that a polarity lexicon can effectively improve the classification consequences. Social media, where users spontaneously generated content have become important materials for tracking people's opinions and sentiments. Meanwhile, the mathematical models of fuzzy semantics have provided a formal explanation for the fuzzy nature of human language processing. This paper investigated the limitations of traditional sentiment analysis approaches and proposed an effective Chinese sentiment analysis approach based on emotion degree lexicon. Inspired by various social cognitive theories, basic emotion value lexicon and social evidence lexicon were combined to improve sentiment analysis consequences. By using the composite lexicon and fuzzy semantic model, this new sentiment analysis approach obtains significant improvement in Chinese text.
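A minimal flavour of emotion-degree scoring with simple negation handling is sketched below; the lexicon values and English tokens are invented for brevity, and the chapter's fuzzy semantic model is considerably richer.

```python
def sentiment_score(tokens, emotion_lexicon, negations=("not", "no")):
    """Sum emotion-degree values, flipping the sign after a negation word."""
    score, flip = 0.0, 1.0
    for tok in tokens:
        if tok in negations:
            flip = -1.0
            continue
        score += flip * emotion_lexicon.get(tok, 0.0)
        flip = 1.0
    return score

lexicon = {"excellent": 2.0, "good": 1.0, "bad": -1.5}   # invented degrees
print(sentiment_score("not bad quite good".split(), lexicon))  # 2.5
```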
6

Kaur, Harleen, Ritu Chauhan, and M. Alam. "An Optimal Categorization of Feature Selection Methods for Knowledge Discovery". In Data Mining, 92–106. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2455-9.ch005.

Annotation:
The continuous availability of massive experimental medical data has given impetus to a large effort in developing mathematical, statistical, and computationally intelligent techniques to infer models from medical databases. Feature selection has been an active research area in the pattern recognition, statistics, and data mining communities. However, there have been relatively few studies on preprocessing data used as input for data mining systems in medical data. In this chapter, the authors focus on several feature selection methods as to their effectiveness in preprocessing input medical data. They evaluate several feature selection algorithms, such as Mutual Information Feature Selection (MIFS), Fast Correlation-Based Filter (FCBF), and Stepwise Discriminant Analysis (STEPDISC), with the machine learning algorithms naive Bayesian and Linear Discriminant Analysis. The experimental analysis of feature selection techniques in medical databases has enabled the authors to find a small number of informative features, leading to potential improvement in medical diagnosis by reducing the size of the data set, eliminating irrelevant features, and decreasing the processing time.
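A plain mutual-information ranking, simpler than the redundancy-penalised MIFS criterion evaluated in the chapter, paired with a naive Bayes classifier might be sketched as:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Toy stand-in for a medical data set: 200 patients, 30 candidate features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

mi = mutual_info_classif(X, y, random_state=0)   # relevance of each feature
top = np.argsort(mi)[::-1][:5]                   # keep the 5 most informative

for cols, label in [(slice(None), "all 30 features"), (top, "top 5 by MI")]:
    acc = cross_val_score(GaussianNB(), X[:, cols], y, cv=5).mean()
    print(f"{label}: CV accuracy {acc:.3f}")
```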
7

S., Sathishkumar, Devi Priya R., and Karthika K. "Big Data Analytics in Cloud Platform". In Advances in Web Technologies and Engineering, 159–79. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3111-2.ch010.

Annotation:
Big data computing in clouds is a new paradigm for next-generation analytics development. It enables large-scale data organizations to share and explore large quantities of ever-increasing data types using cloud computing technology as a back-end. Knowledge exploration and decision-making from this rapidly increasing volume of data encourage data organization, access, and timely processing, an evolving trend known as big data computing. This modern paradigm incorporates large-scale computing, new data-intensive techniques, and mathematical models to create data analytics for intrinsic information extraction. Cloud computing emerged as a service-oriented computing model that delivers infrastructure, platform, and applications as services from providers to consumers while meeting QoS parameters, enabling the archival and faster, more economical processing of large volumes of rapidly growing data.
8

K., Jayashree, and Abirami R. "Big Data Technologies and Management". In Advances in Library and Information Science, 196–210. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5829-3.ch009.

Annotation:
Developments in information technology and its prevalent growth in several areas of business, engineering, medical, and scientific studies are resulting in an information and data explosion. Knowledge discovery and decision making from such rapidly growing voluminous data are a challenging task in terms of data organization and processing, an emerging trend known as big data computing. Big data has gained much attention from academia and the IT industry as a new paradigm that combines large-scale compute, new data-intensive techniques, and mathematical models to build data analytics. Thus, this chapter discusses the background of big data, discusses various applications of big data in detail, and addresses related work and future directions.
9

Agrawal, Lalit, Alok Kumar, Jaya Nagori, and Shirshu Varma. "Data Fusion in Wireless Sensor Networks". In Technological Advancements and Applications in Mobile Ad-Hoc Networks, 341–72. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0321-9.ch019.

Annotation:
Many strategies have been devised over the years for improving performance of wireless sensor networks with special consideration to energy efficiency, accuracy of the sensed data, increasing network lifetime, providing QoS, et cetera. Such techniques include design of various MAC protocols, routing algorithms, aggregation techniques, and many more. Data fusion has been developed to be one of such techniques that has a span of benefits across all the above mentioned performance criteria. The basic concept of data fusion is to apply various mathematical concepts on the raw sensor data to achieve an output with better fidelity and accuracy. Various mathematical, signal processing, and probabilistic techniques can be implemented to fuse the data. This might be done for decreasing the volume of the data, improving the precision of the data, deriving an inference from the raw data that was not present earlier, and many other similar functions. The motive of this chapter is to introduce data fusion and why it is important in case of wireless sensor networks. Further, it classifies data fusion, explains the major techniques, models, as well as the implementation aspects of data fusion, and concludes with the challenges and limitations posed for data fusion to be useful.
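One of the simplest fusion rules in this family, inverse-variance weighting of redundant readings, already shows the precision gain the chapter discusses; a sketch:

```python
import numpy as np

def fuse_inverse_variance(readings, variances):
    """Minimum-variance linear unbiased fusion of redundant sensor readings."""
    readings = np.asarray(readings, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * readings) / np.sum(w)
    return fused, 1.0 / np.sum(w)   # fused value and its (smaller) variance

# Three nodes report the same temperature with different noise levels.
value, variance = fuse_inverse_variance([21.3, 20.8, 21.6], [0.4, 0.1, 0.9])
```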
10

Ryman-Tubb, Nick F. "Neural-Symbolic Processing in Business Applications". In Computational Neuroscience for Advancing Artificial Intelligence, 270–314. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-60960-021-1.ch012.

Annotation:
Neural networks are mathematical models, inspired by biological processes in the human brain, that are able to give computers more “human-like” abilities. Perhaps by examining the way in which the biological brain operates, at both the large scale and the lower anatomical level, approaches can be devised that embody some of these remarkable abilities for use in real-world business applications. One criticism of the neural network approach by business is that networks are “black boxes”: they cannot be easily understood. To open this black box, an outline of neural-symbolic rule extraction is described and its application to fraud detection is given. Current practice is to build a Fraud Management System (FMS) based on rules created by fraud experts, which is an expensive and time-consuming task and fails to address the problem of data and relationships that change over time. By using a neural network to learn to detect fraud and then extracting its knowledge, a new approach is presented.

Conference papers on the topic "Hydrology Mathematical models Data processing"

1

Wu, Dazhong, Connor Jennings, Janis Terpenny, Robert Gao, and Soundar Kumara. "Data-Driven Prognostics Using Random Forests: Prediction of Tool Wear". In ASME 2017 12th International Manufacturing Science and Engineering Conference collocated with the JSME/ASME 2017 6th International Conference on Materials and Processing. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/msec2017-2679.

Annotation:
Manufacturers have faced an increasing need for the development of predictive models that help predict mechanical failures and remaining useful life of a manufacturing system or its system components. Model-based or physics-based prognostics develops mathematical models based on physical laws or probability distributions, while an in-depth physical understanding of system behaviors is required. In practice, however, some of the distributional assumptions do not hold true. To overcome the limitations of model-based prognostics, data-driven methods have been increasingly applied to machinery prognostics and maintenance management, transforming legacy manufacturing systems into smart manufacturing systems with artificial intelligence. While earlier work demonstrated the effectiveness of data-driven approaches, most of these methods applied to prognostics and health management (PHM) in manufacturing are based on artificial neural networks (ANNs) and support vector regression (SVR). With the rapid advancement in artificial intelligence, various machine learning algorithms have been developed and widely applied in many engineering fields. The objective of this research is to explore the ability of random forests (RFs) to predict tool wear in milling operations. The performance of ANNs, SVR, and RFs are compared using an experimental dataset. The experimental results have shown that RFs can generate more accurate predictions than ANNs and SVR in this experiment.
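A random-forest regressor for tool wear can be sketched in a few lines; the features below are synthetic stand-ins, whereas the paper extracts them from measured cutting force, vibration, and acoustic emission signals.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-cut signal features and flank wear targets.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
wear = 50 + 10 * X[:, 0] - 4 * X[:, 2] + rng.normal(scale=2.0, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, wear, test_size=0.3, random_state=1)
rf = RandomForestRegressor(n_estimators=500, max_features="sqrt", random_state=1)
rf.fit(X_tr, y_tr)
print("held-out MAE:", mean_absolute_error(y_te, rf.predict(X_te)))
```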
2

Japikse, David, Oleg Dubitsky, Kerry N. Oliphant, Robert J. Pelton, Daniel Maynes, and Jamin Bitter. "Multi-Variable, High Order, Performance Models (2005C)". In ASME 2005 International Mechanical Engineering Congress and Exposition. ASMEDC, 2005. http://dx.doi.org/10.1115/imece2005-79416.

Annotation:
In the course of developing advanced data processing and advanced performance models, as presented in companion papers, a number of basic scientific and mathematical questions arose. This paper deals with questions such as uniqueness, convergence, statistical accuracy, training, and evaluation methodologies. The process of bringing together large data sets and utilizing them, with outside data supplementation, is considered in detail. After these questions are focused carefully, emphasis is placed on how the new models, based on highly refined data processing, can best be used in the design world. The impact of this work on designs of the future is discussed. It is expected that this methodology will assist designers to move beyond contemporary design practices.
3

Gorman, Kevin J., and Kourosh J. Rahnamai. "Real-Time Data Acquisition and Controls Using Matlab". In ASME 1995 15th International Computers in Engineering Conference and the ASME 1995 9th Annual Engineering Database Symposium collocated with the ASME 1995 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1995. http://dx.doi.org/10.1115/cie1995-0774.

Annotation:
Abstract Matlab is a powerful mathematical package that has numerous functions for engineering applications, such as signal processing. Simulink is an add-on package for Matlab that gives a graphical user interface for creating and simulating block diagrams. Matlab has another add-on package, the Real-Time Workshop, which is an interface between the data acquisition adapters and Simulink. This package adds real-time data acquisition, which allows data to be acquired and analyzed. The system design is done in a graphical interface using block models. A servo system was designed and modeled using the Real-time Workshop. Matlab is a powerful tool that can be used for real-time data acquisition and control systems.
4

Rohde, Steve M., William J. Williams, and Mitchell M. Rohde. "Application of Advanced Signal Processing Methods to Automotive Systems Testing". In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-59535.

Annotation:
During the past twenty years there have been rapid developments in the creation and application of mathematical computer-based capabilities and tools (e.g., FEA) to simulate and synthesize vehicle systems. This has led to the concept of virtual product development. In parallel with the development of these tools, an equally sophisticated set of tools have been developed in the area of advanced signal processing. These tools, based upon mathematical and statistical modeling techniques, enable the extraction of useful information from data and have application throughout the entire vehicle creation process. Moreover, signal processing bridges the gap between the “virtual” and the “real” worlds — an extremely important concept that is changing the entire nature of what is thought of as “testing.” This paper discusses the use of advanced signal processing methods in vehicle creation with particular emphasis on its use in vehicle systems testing. Modern Time Frequency Analysis (TFA), a technique that was specifically designed to study transient signals and was in part pioneered by one of the authors (WJW), is highlighted. TFA expresses a signal jointly in time and frequency at very high resolution and as such can often provide profound insights. Applications of TFA to vehicle systems testing are presented related to Noise, Vibration, and Harshness (NVH) that enable sound quality analyses. For example, using TFA predictive models of consumer preferences for transient sounds that are useful to the automotive engineer in testing and modifying new vehicle subsystem designs are discussed. Other applications that are discussed deal with brake pedal feel, and characterizing vehicle crash signals. In the latter case TFA has resulted in some unique insights that were not provided by conventional statistical and mathematical analyses.
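A short-time Fourier spectrogram is the entry-level relative of the high-resolution time-frequency distributions discussed; the authors' TFA methods resolve transients far better, but the workflow of localising energy in time and frequency looks the same. A sketch on a synthetic transient:

```python
import numpy as np
from scipy import signal

# Synthetic transient: a decaying 180 Hz burst in noise, loosely standing in
# for a brake-pedal or crash-pulse signal.
fs = 4096
t = np.arange(0, 1.0, 1 / fs)
x = np.exp(-8 * t) * np.sin(2 * np.pi * 180 * t)
x += 0.05 * np.random.default_rng(2).normal(size=t.size)

f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=224)
fi, ti = np.unravel_index(np.argmax(Sxx), Sxx.shape)
print(f"dominant energy near {f[fi]:.0f} Hz at t = {tt[ti]:.3f} s")
```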
5

Allam, Sushmita L., Jean-Marie C. Bouteiller, Renaud Greget, Serge Bischoff, Michel Baudry, and Theodore W. Berger. "EONS Synaptic Modeling Platform: Exploration of Mechanisms Regulating Information Processing in the CNS and Application to Drug Discovery". In ASME 2008 3rd Frontiers in Biomedical Devices Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/biomed2008-38095.

Annotation:
EONS modeling platform is a resourceful learning and research tool to study the mechanisms underlying the non-linear dynamics of synaptic transmission with the aid of mathematical models. Mathematical modeling of information processing in CNS pathways, in particular modeling of molecular events and synaptic dynamics, have not been extensively developed owing to the complex computations involved in integrating a multitude of parameters. In this paper, we discuss the development of a strategy to adapt the EONS synaptic modeling platform to a multi-node environment using a parallel computational framework to compute data intensive long simulations in a shorter time frame. We describe how this strategy can be applied to (i) determine the optimal values of the numerous parameters required for fitting experimental data, (ii) determine the impact of all parameters on various aspects of synaptic transmission (under normal conditions or conditions mimicking pathological conditions) and (iii) study the effects of exogenous molecules on both healthy and pathological synaptic models.
6

Altybaev, A. N., A. B. Zhanbyrbaev, G. S. Almugambetova, B. C. Meskhi, D. V. Rudoy, and A. V. Olshevskaya. "ARCHITECTURE OF DATABASE MODELS FOR VISUALIZING RESULTS OF MONITORING EPIZOOTIC ANIMAL STATE". In STATE AND DEVELOPMENT PROSPECTS OF AGRIBUSINESS. DSTU-PRINT, 2020. http://dx.doi.org/10.23947/interagro.2020.1.19-22.

Annotation:
The organizational and technical aspects of creating databases are described in relation to the processes of applied veterinary medicine, in particular, to visualize the results of monitoring the epizootic state of animals. The conceptual architecture of the processes of knowledge integration in the formation of the database of the developed information system based on the template form, prepared on the basis of the results of the analysis of the subject area, is proposed, the algorithm of actions is structured, both for specialists in the subject area and specialists in IT technologies, as well as system analysts. The proposed methodological benchmark allows the formation of the most adequate database prototype in the Excel software environment, as well as logical models of attributes and their relationships. The structure of the output reports is compiled in accordance with the requirements of veterinary reporting and taking into account the business logic of mathematical and statistical data processing in order to establish the laws of the processes under study and build their models, which is necessary for the timely adoption of prognostic veterinary measures. The results can be useful in the development and implementation of modern information and communication technologies in real business processes in the field of applied veterinary medicine.
7

Malik, Arif, John Wendel, Mark Zipf, and Andrew Nelson. "A Reliability-Based Approach to Flatness Actuator Effectiveness in 20-High Rolling Mills". In ASME 2012 International Manufacturing Science and Engineering Conference collocated with the 40th North American Manufacturing Research Conference and in participation with the International Conference on Tribology Materials and Processing. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/msec2012-7281.

Annotation:
20-High rolling mills process high strength and/or very thin non-ferrous and ferrous metals using a complex, cluster arrangement of rolls. The 20-high roll cluster arrangement achieves specific flatness goals in the thin sheet by delivering maximum rolling pressure while minimizing the deflections of the small diameter rolls. 20-high mills also employ flatness control mechanisms with sophisticated actuators, such as those to shift intermediate rolls and deflect backup bearing shafts. The purpose of this is to compensate for variations in strip dimensional and mechanical properties which can cause poor flatness control quality from discrepancies in work-roll gap profile and distribution of rolling force. This suggests that the random property differences in the rolling parameters that substantially affect the flatness must be directly accounted for in flatness control algorithms in order to achieve strict flatness quality. The use of accurate mathematical models that account for the rolling pass target gage reduction can optimize the flatness control actuators and help gain an advantage in the thin gauge strip competitive global market. Based on the expected process parameter variations and nominal mill set-points (speed, tension, gage reduction, etc.), the mill’s process control computer should determine the probability that target flatness control quality will be met for a required length of strip. The process computer should then either modify the number of rolling passes or adjust the thickness reduction schedule before rolling begins to secure an improved flatness probability estimate if the probability of achieving target strip flatness is too low for the required deliverable quality. Therefore, this research integrates 1) 20-high roll-stack mill mathematical modeling, 2) probability distribution data for random important rolling parameters, 3) reliability-based models to predict the probability of achieving desired strip flatness, and 4) optimization examples. The results can be used to reduce wasted rolled metal from poor flatness before rolling.
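The probability estimate the process computer would form can be sketched as a Monte Carlo pass over assumed input distributions and a stand-in linear response surface; every number below is illustrative, and a real study would query the roll-stack model itself.

```python
import numpy as np

rng = np.random.default_rng(7)

def prob_flatness_met(n_trials=100_000):
    """Monte Carlo probability that target flatness is met for a pass."""
    yield_stress = rng.normal(1200.0, 40.0, n_trials)   # MPa, incoming strip
    thickness = rng.normal(0.30, 0.005, n_trials)       # mm, incoming gauge
    # Hypothetical flatness response surface around the nominal set-point.
    flatness = 12 + 0.05 * (yield_stress - 1200.0) - 90 * (thickness - 0.30)
    return np.mean(np.abs(flatness) < 15)               # tolerance in I-units

print(f"P(flatness within tolerance) ~ {prob_flatness_met():.3f}")
```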
8

Ilyuschenko, A. Ph., V. A. Okovity, S. P. Kundas, and A. N. Kuz’menkov. "Modeling and Experimental Studies of Particles Velocity and Temperature in Plasma Spraying Processes". In ITSC 1997, edited by C. C. Berndt. ASM International, 1997. http://dx.doi.org/10.31399/asm.cp.itsc1997p0657.

Annotation:
Abstract Mathematical and computer models of movement and heating of particles in low pressure conditions are developed. The mathematical models are based on the molecular-kinetics theory of gases. A program complex for computer realization of models is developed. It contains a built-in data base of temperature dependent properties of substances, system of processing and graphic visualization of simulation results. For verification of the developed models, computer simulation and experimental measurments of Al2O3 particle temperature and velocity are conducted. These materials were sprayed in Plasma-Technik equipment at pressure 60 mBar in argon. Particle velocity was measured with a special optical device, particle temperature was defined by intensity radiation method. It was established that the developed models are adequate to real process (error of 5-8 %) and may be used for study and improvement of VPS processes.
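A lumped relaxation picture of particle-plasma momentum and heat exchange conveys the kind of trajectory such models produce; the time constants here are assumed for illustration and are not the molecular-kinetic model developed in the paper.

```python
import numpy as np

def particle_in_plasma(v0, T0, v_gas, T_gas, tau_v=5e-5, tau_T=2e-4,
                       dt=1e-6, steps=2000):
    """Euler integration of first-order velocity/temperature relaxation."""
    v, T = v0, T0
    vs, Ts = [], []
    for _ in range(steps):
        v += dt * (v_gas - v) / tau_v    # momentum exchange with the gas
        T += dt * (T_gas - T) / tau_T    # convective heat exchange
        vs.append(v)
        Ts.append(T)
    return np.array(vs), np.array(Ts)

# Alumina-like particle injected at 20 m/s, 300 K into a 600 m/s, 3500 K jet.
v_hist, T_hist = particle_in_plasma(20.0, 300.0, 600.0, 3500.0)
```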
9

Ilyuschenko, A., V. Okovity, S. Kundas, and V. Gurevich. "Simulation and Experimental Studies of Particles Interaction with Plasma Jet in Vacuum Plasma Spraying Processes". In ITSC 2000, edited by Christopher C. Berndt. ASM International, 2000. http://dx.doi.org/10.31399/asm.cp.itsc2000p0229.

Annotation:
Abstract Mathematical and computer models of movement and heating of particles in low pressure conditions are developed. The mathematical models are based on the molecular-kinetics theory of gases. A program complex for computer realization of models is developed. It contains a built-in data base of temperature dependent properties of substances, system of processing and graphic visualization of simulation results. For verification of the developed models, computer simulation and experimental measurements of Al2O3 particle temperature and velocity are conducted. These materials were sprayed with Plasma-Technik equipment at pressure 60 mBar in argon. Particle velocity was measured with a special optical device, particle temperature was defined by intensity radiation method. It was established that the developed models are adequate to real process (error of 5-8 %) and may be used for study and improvement of VPS processes.
10

Komarov, Oleg V., Viacheslav A. Sedunin, Vitaly L. Blinov, and Alexander V. Skorochodov. "Parametrical Diagnostics of Gas Turbine Performance on Site at Gas Pumping Plants Based on Standard Measurements". In ASME Turbo Expo 2014: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/gt2014-25392.

Annotation:
The operation and maintenance of gas pumping units in the Gazprom transport systems are carried out according to the current number of equivalent working hours of the gas turbine and the centrifugal natural gas compressor. Modern concepts of lean production require that maintenance procedures be performed according to the current technical operating performance of units and their parametrical diagnostics. To meet these requirements, an appropriate research project is ongoing at Ural Federal University. In this article a methodology for estimating the technical performance of GTUs is proposed, verified, and discussed. The method is based on processing data gathered from standard thermodynamic measurements and is therefore applicable to most gas turbine units without major modifications. It includes a verified high-order mathematical model based on the gas dynamic function for the precise analytical description of turbomachinery aerodynamics. A correction factor is introduced for adjusting the mathematical model when a non-traditional (not optimised) design is applied in the particular turbine. Models are defined for different types of multi-shaft GTs, and an automated algorithm for calculating the coefficients of technical performance of the overall unit is provided. A series of experiments showed that the discrepancy between the traditional and new methods in effective power output and unit efficiency does not exceed 2%. Special software is designed for online monitoring of technical performance.