A selection of scholarly literature on the topic "Bivariate frequency analysis"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse lists of up-to-date articles, books, dissertations, conference papers, and other scholarly sources on the topic "Bivariate frequency analysis".

Next to every work in the list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Bivariate frequency analysis"

1

Stamatatou, Nikoletta, Lampros Vasiliades, and Athanasios Loukas. "Bivariate Flood Frequency Analysis Using Copulas." Proceedings 2, no. 11 (August 3, 2018): 635. http://dx.doi.org/10.3390/proceedings2110635.

Abstract:
Flood frequency estimation for the design of hydraulic structures is usually performed as a univariate analysis of flood event magnitudes. However, recent studies show that for accurate return period estimation of the flood events, the dependence and the correlation pattern among flood attribute characteristics, such as peak discharge, volume and duration should be taken into account in a multivariate framework. The primary goal of this study is to compare univariate and joint bivariate return periods of floods that all rely on different probability concepts in Yermasoyia watershed, Cyprus. Pairs of peak discharge with corresponding flood volumes are estimated and compared using annual maximum series (AMS) and peaks over threshold (POT) approaches. The Lyne-Hollick recursive digital filter is applied to separate baseflow from quick flow and to subsequently estimate flood volumes from the quick flow timeseries. Marginal distributions of flood peaks and volumes are examined and used for the estimation of typical design periods. The dependence between peak discharges and volumes is then assessed by an exploratory data analysis using K-plots and Chi-plots, and the consistency of their relationship is quantified by Kendall’s correlation coefficient. Copulas from Archimedean, Elliptical and Extreme Value families are fitted using a pseudo-likelihood estimation method, verified using both graphical approaches and a goodness-of-fit test based on the Cramér-von Mises statistic and evaluated according to the corrected Akaike Information Criterion. The selected copula functions and the corresponding joint return periods are calculated and the results are compared with the marginal univariate estimations of each variable. Results indicate the importance of the bivariate analysis in the estimation of design return period of the hydraulic structures.
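The workflow summarized in this abstract — fit marginal distributions, measure dependence with Kendall's tau, fit a copula and compare joint "OR"/"AND" return periods with the univariate estimates — can be illustrated with a minimal Python sketch. The synthetic data, the lognormal marginals and the inversion of Kendall's tau for the Gumbel-Hougaard parameter are illustrative assumptions only; the paper itself uses pseudo-likelihood estimation and formal goodness-of-fit tests.

```python
import numpy as np
from scipy import stats

# Synthetic annual-maximum flood peaks (m^3/s) and event volumes (hm^3);
# a real application would use observed pairs such as the Yermasoyia series.
rng = np.random.default_rng(42)
peak = rng.lognormal(mean=np.log(100), sigma=0.5, size=40)
volume = 0.05 * peak * np.exp(rng.normal(0.0, 0.2, size=40))

# 1. Fit marginal distributions (lognormal chosen purely for illustration).
p_params = stats.lognorm.fit(peak, floc=0)
v_params = stats.lognorm.fit(volume, floc=0)

# 2. Quantify dependence with Kendall's tau and invert it for the
#    Gumbel-Hougaard copula parameter: theta = 1 / (1 - tau).
tau, _ = stats.kendalltau(peak, volume)
theta = 1.0 / (1.0 - tau)

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# 3. Univariate 100-year design values versus joint return periods.
u100 = 1.0 - 1.0 / 100.0
q100 = stats.lognorm.ppf(u100, *p_params)
v100 = stats.lognorm.ppf(u100, *v_params)
c = gumbel_copula(u100, u100, theta)
T_or = 1.0 / (1.0 - c)                 # peak OR volume exceeded
T_and = 1.0 / (1.0 - 2.0 * u100 + c)   # peak AND volume exceeded
print(f"tau={tau:.2f}, theta={theta:.2f}, Q100={q100:.0f}, V100={v100:.1f}")
print(f"joint return periods: T_or={T_or:.0f} yr, T_and={T_and:.0f} yr")
```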
2

Flamant, Julien, Nicolas Le Bihan, and Pierre Chainais. "Time–frequency analysis of bivariate signals." Applied and Computational Harmonic Analysis 46, no. 2 (March 2019): 351–83. http://dx.doi.org/10.1016/j.acha.2017.05.007.

3

Mirakbari, M., A. Ganji, and S. R. Fallah. "Regional Bivariate Frequency Analysis of Meteorological Droughts." Journal of Hydrologic Engineering 15, no. 12 (December 2010): 985–1000. http://dx.doi.org/10.1061/(asce)he.1943-5584.0000271.

4

Ziller, M., K. Frick, W. M. Herrmann, S. Kubicki, I. Spieweg, and G. Winterer. "Bivariate Global Frequency Analysis versus Chaos Theory." Neuropsychobiology 32, no. 1 (1995): 45–51. http://dx.doi.org/10.1159/000119211.

5

Shiau, Jenq-Tzong, Hsin-Yi Wang, and Chang-Tai Tsai. "BIVARIATE FREQUENCY ANALYSIS OF FLOODS USING COPULAS." Journal of the American Water Resources Association 42, no. 6 (December 2006): 1549–64. http://dx.doi.org/10.1111/j.1752-1688.2006.tb06020.x.

6

Joo, Kyung-Won, Ju-Young Shin, and Jun-Haeng Heo. "Bivariate Frequency Analysis of Rainfall using Copula Model." Journal of Korea Water Resources Association 45, no. 8 (August 31, 2012): 827–37. http://dx.doi.org/10.3741/jkwra.2012.45.8.827.

7

Razmkhah, Homa, Alireza Fararouie, and Amin Rostami Ravari. "Multivariate Flood Frequency Analysis Using Bivariate Copula Functions." Water Resources Management 36, no. 2 (January 2022): 729–43. http://dx.doi.org/10.1007/s11269-021-03055-3.

8

Dong, N. Dang, V. Agilan, and K. V. Jayakumar. "Bivariate Flood Frequency Analysis of Nonstationary Flood Characteristics." Journal of Hydrologic Engineering 24, no. 4 (April 2019): 04019007. http://dx.doi.org/10.1061/(asce)he.1943-5584.0001770.

9

Volpi, E., and A. Fiori. "Design event selection in bivariate hydrological frequency analysis." Hydrological Sciences Journal 57, no. 8 (October 10, 2012): 1506–15. http://dx.doi.org/10.1080/02626667.2012.726357.

10

Zhang, L., and V. P. Singh. "Bivariate Flood Frequency Analysis Using the Copula Method." Journal of Hydrologic Engineering 11, no. 2 (March 2006): 150–64. http://dx.doi.org/10.1061/(asce)1084-0699(2006)11:2(150).


Dissertations on the topic "Bivariate frequency analysis"

1

Marius, Matei. "A Contribution to Multivariate Volatility Modeling with High Frequency Data." Doctoral thesis, Universitat Ramon Llull, 2012. http://hdl.handle.net/10803/81072.

Abstract:
The thesis develops the topic of financial volatility forecasting in the context of high-frequency data and focuses on a twofold line of research: proposing alternative models that would enhance volatility forecasting, and ranking existing or newly proposed volatility models. The objectives fall into three categories. The first is the proposal of a new method of volatility forecasting that follows a recently developed research line pointing to the use of measures of intraday volatility as well as measures of night volatility, the need for new models arising from the question of whether adding measures of night volatility improves day-volatility estimates. As a result, a class of bivariate realized GARCH models is proposed. The second is a methodology to forecast multivariate day volatility with autoregressive models that use day (and, for the bivariate models, night) volatility estimates, as well as high-frequency information when available. For this, the Principal Component algorithm (PCA) is applied to a class of univariate and bivariate realized GARCH-type models. The method represents an extension of an existing model (PC GARCH) that estimated a multivariate GARCH model by estimating univariate GARCH models of the principal components of the initial variables. The third goal of the thesis is to rank the performance of existing or newly proposed volatility forecasting models, as well as the accuracy of the intraday measures used in the realized model estimations. With regard to the univariate realized models' rankings, it was found that the EGARCHX, Realized EGARCH and Realized GARCH(2,2) models consistently ranked better, while the non-realized GARCH and EGARCH models performed poorly in almost every case. This allows the conclusion that incorporating measures of intraday volatility improves the modeling. With respect to the bivariate realized models' ranking, it was found that the Bivariate Realized GARCH (partial and complete versions) and Bivariate Realized EGARCH models performed best, followed by the Bivariate Realized GARCH(2,2), Bivariate EGARCH and Bivariate EGARCHX models. When the bivariate versions were compared to the univariate ones in order to investigate whether using night volatility measurements in the models' equations improves volatility estimation, it was found that the bivariate models surpassed the univariate ones when specific methodologies, ranking criteria and stocks were used. The results were mixed, allowing the conclusion that the bivariate models did not prove totally inferior to their univariate counterparts and are good alternative options to be used in the forecasting exercise, together with the univariate models, for more reliable estimates. Finally, the PC realized models and PC bivariate realized models were estimated and their performances ranked; the improvements the PC methodology brought to high-frequency multivariate modeling of stock returns were also discussed. PC models were found to be highly effective in estimating the multivariate volatility of highly correlated stock assets, and suggestions were made on how investors could use them for portfolio selection.
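A central ingredient of the realized-GARCH-type models ranked in this thesis is the use of separate intraday and overnight volatility measures. The sketch below only shows how such realized measures are commonly computed from high-frequency returns; the numbers are synthetic and the thesis's bivariate measurement equations are not reproduced here.

```python
import numpy as np

# Synthetic 5-minute log-returns for one trading day plus an overnight
# (close-to-open) return; all values are illustrative assumptions.
rng = np.random.default_rng(13)
intraday_returns = rng.normal(0.0, 0.0008, size=78)   # 78 five-minute intervals
overnight_return = rng.normal(0.0, 0.004)

# Realized variance for the trading day: sum of squared intraday returns.
rv_day = np.sum(intraday_returns ** 2)
# A simple proxy for night volatility: the squared overnight return.
rv_night = overnight_return ** 2

print(f"realized day variance = {rv_day:.3e}")
print(f"overnight variance    = {rv_night:.3e}")
# A bivariate realized GARCH specification, as in the thesis, would feed both
# measures into separate measurement equations for day and night volatility.
```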
2

Li, Wei. "Numerical Modelling and Statistical Analysis of Ocean Wave Energy Converters and Wave Climates." Doctoral thesis, Uppsala universitet, Elektricitetslära, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-305870.

Abstract:
Ocean wave energy is considered to be one of the important potential renewable energy resources for sustainable development. Various wave energy converter technologies have been proposed to harvest the energy from ocean waves. This thesis is based on the linear generator wave energy converter developed at Uppsala University. The research in this thesis focuses on the foundation optimization and the power absorption optimization of the wave energy converters and on the wave climate modelling at the Lysekil wave converter test site. The foundation optimization study of the gravity-based foundation of the linear wave energy converter is based on statistical analysis of wave climate data measured at the Lysekil test site. The 25-year return extreme significant wave height and its associated mean zero-crossing period are chosen as the design maximum wave for evaluating the maximum heave and surge forces. The power absorption optimization study of the linear generator wave energy converter is based on the wave climate at the Lysekil test site. A simplified frequency-domain numerical model is used, with the power take-off damping coefficient chosen as the control parameter for optimizing the power absorption. The results show a large improvement with an optimized power take-off damping coefficient adjusted to the characteristics of the wave climate at the test site. The wave climate modelling studies are based on the wave climate data measured at the Lysekil test site. A new mixed-distribution method is proposed for modelling the significant wave height. This method gives impressive goodness of fit with the measured wave data. A copula method is applied to the bivariate joint distribution of the significant wave height and the wave period. The results show an excellent goodness of fit for the Gumbel model. The general applicability of the proposed mixed-distribution method and the copula method is illustrated with wave climate data from four other sites. The results confirm the good performance of the mixed-distribution and Gumbel copula models for the modelling of significant wave height and the bivariate wave climate.
3

Brunner, Manuela. "Hydrogrammes synthétiques par bassin et types d'événements. Estimation, caractérisation, régionalisation et incertitude." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAU003/document.

Abstract:
Design flood estimates are needed in hydraulic design for the construction of dams and retention basins and in flood management for drawing hazard maps or modeling inundation areas. Traditionally, such design floods have been expressed in terms of peak discharge estimated in a univariate flood frequency analysis. However, design or flood management tasks involving storage, in addition to peak discharge, also require information on hydrograph volume, duration, and shape . A bivariate flood frequency analysis allows the joint estimation of peak discharge and hydrograph volume and the consideration of their dependence. While such bivariate design quantiles describe the magnitude of a design flood, they lack information on its shape. An attractive way of modeling the whole shape of a design flood is to express a representative normalized hydrograph shape as a probability density function. The combination of such a probability density function with bivariate design quantiles allows the construction of a synthetic design hydrograph for a certain return period which describes the magnitude of a flood along with its shape. Such synthetic design hydrographs have the potential to be a useful and simple tool in design flood estimation. However, they currently have some limitations. First, they rely on the definition of a bivariate return period which is not uniquely defined. Second, they usually describe the specific behavior of a catchment and do not express process variability represented by different flood types. Third, they are neither available for ungauged catchments nor are they usually provided together with an uncertainty estimate.This thesis therefore explores possibilities for the construction of synthetic design hydrographs in gauged and ungauged catchments and ways of representing process variability in design flood construction. It proposes tools for both catchment- and flood-type specific design hydrograph construction and regionalization and for the assessment of their uncertainty.The thesis shows that synthetic design hydrographs are a flexible tool allowing for the consideration of different flood or event types in design flood estimation. A comparison of different regionalization methods, including spatial, similarity, and proximity based approaches, showed that catchment-specific design hydrographs can be best regionalized to ungauged catchments using linear and nonlinear regression methods. It was further shown that event-type specific design hydrograph sets can be regionalized using a bivariate index flood approach. In such a setting, a functional representation of hydrograph shapes was found to be a useful tool for the delineation of regions with similar flood reactivities.An uncertainty assessment showed that the record length and the choice of the sampling strategy are major uncertainty sources in the construction of synthetic design hydrographs and that this uncertainty propagates through the regionalization process.This thesis highlights that an ensemble-based design flood approach allows for the consideration of different flood types and runoff processes. This is a step from flood frequency statistics to flood frequency hydrology which allows better-informed decision making
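The construction step described above — combining bivariate peak/volume design quantiles with a normalized hydrograph shape expressed as a probability density — can be sketched as follows. The Beta(2.5, 6) shape and the design pair are assumptions for illustration, not the catchment- or event-type-specific shapes estimated in the thesis.

```python
import numpy as np
from scipy import stats

def synthetic_design_hydrograph(q_peak, volume, a=2.5, b=6.0, n=200):
    """Scale a normalized hydrograph shape, here a Beta(a, b) density chosen
    only for illustration, so that the resulting hydrograph has the given
    peak discharge q_peak [m^3/s] and flood volume [m^3]."""
    shape = stats.beta(a, b)
    f_max = shape.pdf((a - 1.0) / (a + b - 2.0))   # density value at its mode
    duration = volume * f_max / q_peak             # duration honouring both quantiles
    t = np.linspace(0.0, duration, n)
    q = (volume / duration) * shape.pdf(t / duration)
    return t, q

# Hypothetical 100-year design pair taken from a bivariate analysis.
t, q = synthetic_design_hydrograph(q_peak=350.0, volume=40e6)
vol_check = np.sum(q) * (t[1] - t[0])
print(f"duration ~ {t[-1] / 3600:.0f} h, peak check = {q.max():.0f} m^3/s, "
      f"volume check = {vol_check:.2e} m^3")
```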
4

Flamant, Julien. "Une approche générique pour l'analyse et le filtrage des signaux bivariés." Thesis, Ecole centrale de Lille, 2018. http://www.theses.fr/2018ECLI0008/document.

Abstract:
Bivariate signals appear in a broad range of applications (optics, seismology, oceanography, EEG, etc.) where the joint analysis of two real-valued signals is required. Simple bivariate signals take the form of an ellipse, whose properties (size, shape, orientation) may evolve with time. This geometric feature of bivariate signals has a natural physical interpretation called polarization. This notion is fundamental to the analysis and understanding of bivariate signals. However, existing approaches do not provide straightforward descriptions of bivariate signals or filtering operations in terms of polarization or ellipse properties. To this purpose, this thesis introduces a new and generic approach for the analysis and filtering of bivariate signals. It essentially relies on two key ingredients: (i) the natural embedding of bivariate signals -- viewed as complex-valued signals -- into the set of quaternions H and (ii) the definition of a dedicated quaternion Fourier transform to enable a meaningful spectral representation of bivariate signals. The proposed approach features the definition of standard signal processing quantities such as spectral densities, linear time-invariant filters or spectrograms that are directly interpretable in terms of polarization attributes. More importantly, the framework does not sacrifice any mathematical guarantee and the newly introduced tools admit computationally fast implementations. Numerical experiments support throughout our theoretical developments. We also demonstrate the potential of the approach for the nonparametric characterization of the polarization of gravitational waves
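As a rough, time-domain illustration of the polarization attributes the thesis works with (ellipse orientation and ellipticity), the sketch below computes Stokes-like parameters from the analytic signals of a toy bivariate signal. This is not the quaternion Fourier framework of the thesis; the signal parameters and the sign convention are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

# A toy elliptically polarized bivariate signal (all parameters assumed).
fs, f0 = 1000.0, 5.0
t = np.arange(0.0, 2.0, 1.0 / fs)
u = 1.0 * np.cos(2 * np.pi * f0 * t)                 # x-component
v = 0.5 * np.cos(2 * np.pi * f0 * t + np.pi / 3)     # y-component, phase-shifted

# Analytic signals of each component via the Hilbert transform.
au, av = hilbert(u), hilbert(v)

# Time-averaged Stokes-like parameters (one of several sign conventions).
S0 = np.mean(np.abs(au) ** 2 + np.abs(av) ** 2)
S1 = np.mean(np.abs(au) ** 2 - np.abs(av) ** 2)
S2 = np.mean(2.0 * np.real(au * np.conj(av)))
S3 = np.mean(2.0 * np.imag(au * np.conj(av)))

orientation = 0.5 * np.arctan2(S2, S1)   # ellipse orientation angle
ellipticity = 0.5 * np.arcsin(S3 / S0)   # ellipse shape (ellipticity angle)
print(f"orientation = {np.degrees(orientation):.1f} deg, "
      f"ellipticity angle = {np.degrees(ellipticity):.1f} deg")
```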
5

Isola, Matteo. "A methodology for the bivariate hydrological characterization of flood waves for river-related flood risks assessment." Doctoral thesis, 2020. http://hdl.handle.net/2158/1206050.

Abstract:
The Floods Directive 2007/60/EC requires European countries to verify the effectiveness of existing flood defence infrastructure in mitigating flood risks. Current practice establishes that river flood-control structures must respect a basic requirement, usually consisting of a certain safe freeboard under a design peak flow rate corresponding to a given probability of exceedance. This requirement has some critical issues: it is based on a univariate frequency analysis of the flood peak only, and it therefore assumes a perfect correspondence between the probability of occurrence of the hydrological variable and the failure of the flood-control structure. The thesis aims to define a methodology to overcome these issues by implementing a bivariate hydrological risk analysis for river-related flood risk. The methodology mainly focuses on overtopping failure of river levees. River levees are the most common river flood-control structure; they are raised, predominantly earthen structures (also called dykes, digues or embankments). Overtopping failure of a river levee caused several flood disasters, such as the Elbe flood (2002), the New Orleans flood (2005), the Emilia-Romagna flood (2017) and the Arkansas flood (2019). The proposed procedure is carried out in two steps: (i) the evaluation of the hydrological failure related to the overtopping risk for a levee; (ii) the estimation of the probability of occurrence of the hydrological failure, introducing the concept of the Bivariate Failure Return Period. The hydrological failure is determined by considering the mutual interaction between a bivariate hydrological load of peak discharge (Q) and hydrograph volume (V), the river conveyance, and the levee resistance with respect to overtopping. The bivariate hydrological load relies on an approximation of the real bivariate distribution of Q and V that is sufficient to determine the hydrological failure. The shape of the hydrographs is classified with respect to overtopping by introducing the Overtopping Hydrograph Shape Index (OHSI). The hydrological failure condition is represented by a curve in Q-V space containing all the hydrographs that cause the initiation of damage. This curve demonstrates that not only the peak flow but also the volume of the hydrograph is an essential variable for characterising overtopping failure. The risk of overtopping failure is expressed by the probability of occurrence of the hydrological failure within a new interpretation of the return period. Because the hydrological failure curve is a function in Q-V space, the return period is estimated in a bivariate framework. Several definitions of the bivariate return period are available, each of which gives a different interpretation of it; this critical issue is overcome by introducing the Bivariate Failure Return Period, which assesses the probability associated with the failure curve of the hydraulic structure by generating possible scenarios through Monte Carlo simulation. Two case studies are presented to demonstrate the applicability of the methodology and the advantages of using it with respect to current practice. In the thesis, the methodology is also applied to the problem of flood damage estimation, demonstrating the flexibility and validity of the proposed procedure. In this case, the hydrological failure consists of an equal-euro curve in Q-V space, which includes all the hydrographs causing the same flood damage, in euros, at a site.
The proposed procedure needs Q-V data at the target site where the risk is to be assessed, but most sites are ungauged. This issue is overcome by testing a bivariate regional frequency analysis, applied to the case study of the entire Tuscany Region (Italy). Through the bivariate regional analysis, Q-V series can be estimated at ungauged sites, and the uncertainty is reduced at gauged sites.
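The Bivariate Failure Return Period idea — estimate how often simulated (Q, V) pairs fall beyond a structure-specific failure curve — can be sketched with a small Monte Carlo experiment. The marginal distributions, the Gaussian copula used to keep the sampling simple, and the linear failure curve below are placeholders, not the calibrated models of the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder marginals for annual flood peak Q (m^3/s) and volume V (m^3),
# tied together by a Gaussian copula chosen only to keep sampling simple.
q_dist = stats.gumbel_r(loc=200.0, scale=60.0)
v_dist = stats.gamma(a=3.0, scale=8e6)
rho = 0.7                                   # assumed dependence strength

def hydrological_failure(q, v):
    """Hypothetical failure curve in Q-V space: overtopping is triggered when a
    weighted combination of peak and volume exceeds a threshold (this stands in
    for the levee/conveyance analysis of the thesis)."""
    return (q / 450.0) + (v / 9e7) > 1.0

# Monte Carlo: simulate many synthetic 'years' of (Q, V) and count failures.
n = 200_000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)
q = q_dist.ppf(u[:, 0])
v = v_dist.ppf(u[:, 1])

p_fail = hydrological_failure(q, v).mean()
print(f"annual failure probability ~ {p_fail:.4f} -> "
      f"Bivariate Failure Return Period ~ {1.0 / p_fail:.0f} years")
```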
6

Santhosh, D. "Frequency Analysis of Floods - A Nonparametric Approach." Thesis, 2013. http://etd.iisc.ernet.in/2005/3426.

Abstract:
Floods cause widespread damage to property and life in different parts of the world. Hence there is a paramount need to develop effective methods for design flood estimation to alleviate risk associated with these extreme hydrologic events. Methods that are conventionally considered for analysis of floods focus on estimation of continuous frequency relationship between peak flow observed at a location and its corresponding exceedance probability depicting the plausible conditions in the planning horizon. These methods are commonly known as at-site flood frequency analysis (FFA) procedures. The available FFA procedures can be classified as parametric and nonparametric. Parametric methods are based on the assumption that sample (at-site data) is drawn from a population with known probability density function (PDF). Those procedures have uncertainty associated with the choice of PDF and the method for estimation of its parameters. Moreover, parametric methods are ineffective in modeling flood data if multimodality is evident in their PDF. To overcome those artifacts, a few studies attempted using kernel based nonparametric (NP) methods as an alternative to parametric methods. The NP methods are data driven and they can characterize the uncertainty in data without prior assumptions as to the form of the PDF. Conventional kernel methods have shortcomings associated with boundary leakage problem and normal reference rule (considered for estimation of bandwidth), which have implications on flood quantile estimates. To alleviate this problem, focus of NP flood frequency analysis has been on development of new kernel density estimators (kdes). Another issue in FFA is that information on the whole hydrograph (e.g., time to the peak flow, volume of the flood flow and duration of the flood event) is needed, in addition to peak flow for certain applications. An option is to perform frequency analysis on each of the variables independently. However, these variables are not independent, and hence there is a need to perform multivariate analysis to construct multivariate PDFs and use the corresponding cumulative distribution functions (CDFs) to arrive at estimates of characteristics of design flood hydrograph. In this perspective, recent focus of flood frequency analysis studies has been on development of methods to derive joint distributions of flood hydrograph related variables in a nonparametric setting. Further, in real world scenario, it is often necessary to estimate design flood quantiles at target locations that have limited or no data. Regional Flood Frequency analysis (RFFA) procedures have been developed for use in such situations. These procedures involve use of a regionalization procedure for identification of a homogeneous group of watersheds that are similar to watershed of the target site in terms of flood response. Subsequently regional frequency analysis (RFA) is performed, wherein the information pooled from the group (region) forms basis for frequency analysis to construct a CDF (growth curve) that is subsequently used to arrive at quantile estimates at the target site. Though there are various procedures for RFFA, they are largely confined to only univariate framework considering a parametric approach as the basis to arrive at required quantile estimates. 
Motivated by these findings, this thesis concerns development of a linear diffusion process based adaptive kernel density estimator (D-kde) based methodologies for at-site as well as regional FFA in univariate as well as bivariate settings. The D-kde alleviates boundary leakage problem and also avoids normal reference rule while estimating optimal bandwidth by using Botev-Grotowski-Kroese estimator (BGKE). Potential of the proposed methodologies in both univariate and bivariate settings is demonstrated by application to synthetic data sets of various sizes drawn from known unimodal and bimodal parametric populations, and to real world data sets from India, USA, United Kingdom and Canada. In the context of at-site univariate FFA (considering peak flows), the performance of D- kde was found to be better when compared to four parametric distribution based methods (Generalized extreme value, Generalized logistic, Generalized Pareto, Generalized Normal), thirty-two ‘kde and bandwidth estimator’ combinations that resulted from application of four commonly used kernels in conjunction with eight bandwidth estimators, and a local polynomial–based estimator. In the context of at-site bivariate FFA considering ‘peakflow-flood volume’ and ‘flood duration-flood volume’ bivariate combinations, the proposed D-kde based methodology was shown to be effective when compared to commonly used seven copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and student’s-T copulas) and Gaussian kernel in conjunction with conventional as well as BGKE bandwidth estimators. Sensitivity analysis indicated that selection of optimum number of bins is critical in implementing D-kde in bivariate setting. In the context of univariate regional flood frequency analysis (RFFA) considering peak flows, a methodology based on D-kde and Index-flood methods is proposed and its performance is shown to be better when compared to that of widely used L-moment and Index-flood based method (‘regional L-moment algorithm’) through Monte-Carlo simulation experiments on homogeneous as well as heterogeneous synthetic regions, and through leave-one-out cross validation experiment performed on data sets pertaining to 54 watersheds in Godavari river basin, India. In this context, four homogeneous groups of watersheds are delineated in Godavari river basin using kernel principal component analysis (KPCA) in conjunction with Fuzzy c-means cluster analysis in L-moment framework, as an improvement over heterogeneous regions in the area (river basin) that are currently being considered by Central Water Commission, India. In the context of bivariate RFFA two methods are proposed. They involve forming site-specific pooling groups (regions) based on either L-moment based bivariate homogeneity test (R-BHT) or bivariate Kolmogorov-Smirnov test (R-BKS), and RFA based on D-kde. Their performance is assessed by application to data sets pertaining to stations in the conterminous United States. Results indicate that the R-BKS method is better than R-BHT in predicting quantiles of bivariate flood characteristics at ungauged sites, although the size of pooling groups formed using R-BKS is, in general, smaller than size of those formed using R-BHT. In general, the performance of the methods is found to improve with increase in size of pooling groups. 
Overall, the results indicate that the D-kde always yields a bona fide PDF (and CDF) in the context of univariate as well as bivariate flood frequency analysis, as the probability density is nonnegative for all data points and integrates to unity over the valid range of the data. The performance of the D-kde based at-site and regional FFA methodologies is found to be effective in univariate as well as bivariate settings, irrespective of the nature of the population and the sample size. A primary assumption underlying conventional FFA procedures has been that the time series of peak flow is stationary (temporally homogeneous). However, recent studies carried out in various parts of the world question the assumption of flood stationarity. In this perspective, a Time-Varying Gaussian Copula (TVGC) based methodology is proposed in the thesis for flood frequency analysis in a bivariate setting, which allows relaxing the assumption of stationarity in flood-related variables. It is shown to be more effective than seven commonly used stationary copulas through Monte-Carlo simulation experiments and by application to data sets pertaining to stations in the conterminous United States for which the null hypothesis that the peak flow data were non-stationary could not be rejected.
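A minimal sketch of the bivariate, kernel-based joint exceedance estimation discussed above is given below. It uses SciPy's Gaussian KDE as a stand-in with synthetic data; the thesis's diffusion-based D-kde is designed precisely to avoid the boundary-leakage and normal-reference-rule drawbacks of this kind of estimator.

```python
import numpy as np
from scipy import stats

# Hypothetical paired flood peaks (m^3/s) and volumes (hm^3).
rng = np.random.default_rng(11)
peak = rng.lognormal(np.log(150), 0.4, size=60)
volume = 0.06 * peak * np.exp(rng.normal(0.0, 0.25, size=60))

# Bivariate Gaussian kernel density estimate of the joint density
# (a stand-in for the diffusion-based D-kde of the thesis).
kde = stats.gaussian_kde(np.vstack([peak, volume]))

# Joint exceedance probability P(Q > q_d, V > v_d) for a design pair,
# approximated by resampling from the fitted density.
q_d, v_d = np.quantile(peak, 0.9), np.quantile(volume, 0.9)
samples = kde.resample(200_000)
p_joint = np.mean((samples[0] > q_d) & (samples[1] > v_d))
print(f"P(Q > {q_d:.0f}, V > {v_d:.1f}) ~ {p_joint:.3f} -> "
      f"joint 'AND' return period ~ {1.0 / p_joint:.0f} yr")
```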
7

Sadri, Sara. "Frequency Analysis of Droughts Using Stochastic and Soft Computing Techniques." Thesis, 2010. http://hdl.handle.net/10012/5198.

Abstract:
In the Canadian Prairies, recurring droughts are one of the realities that can have significant economic, environmental, and social impacts. For example, droughts in 1997 and 2001 cost different sectors over $100 million. Drought frequency analysis is a technique for analyzing how frequently a drought event of a given magnitude may be expected to occur. In this study the state of the science related to frequency analysis of droughts is reviewed and studied. The main contributions of this thesis include the development of a model in Matlab which uses the qualities of Fuzzy C-Means (FCM) clustering and corrects the formed regions to meet the criteria of effective hydrological regions. In FCM, each site has a degree of membership in each of the clusters. The algorithm developed is flexible enough to take the number of regions and the return period as inputs and to show the final corrected clusters as output for most scenarios. Since drought is considered a bivariate phenomenon, with the two statistical variables of duration and severity to be analyzed simultaneously, an important step in this study is increasing the complexity of the initial Matlab model to correct regions based on L-comoment statistics (as opposed to L-moments). Implementing a reasonably straightforward approach for bivariate drought frequency analysis using bivariate L-comoments and copulas is another contribution of this study. Quantile estimation at ungauged sites for return periods of interest is studied by introducing two classes of neural network and machine learning techniques: Radial Basis Functions (RBF) and Support Vector Machine Regression (SVM-R). These two techniques are selected based on their good reviews in the literature on function estimation and nonparametric regression. The functionalities of RBF and SVM-R are compared with the traditional nonlinear regression (NLR) method. As well, a nonlinear regression with regionalization method, in which catchments are first regionalized using FCM, is applied and its results are compared with the other three models. Drought data from 36 natural catchments in the Canadian Prairies are used in this study. This study provides a methodology for bivariate drought frequency analysis that can be practiced in any part of the world.
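The duration/severity pairs analyzed in this kind of bivariate drought study are usually extracted from a standardized drought index by run theory. A minimal sketch, with an assumed truncation threshold and a synthetic index series standing in for SPI, is shown below.

```python
import numpy as np

def drought_events(index, threshold=-1.0):
    """Run-theory extraction of drought duration and severity from a
    standardized drought index series (e.g., SPI). A drought is a run of
    consecutive values below `threshold`; severity is the accumulated
    deficit over the run. The threshold value is an assumption."""
    events = []
    dur, sev = 0, 0.0
    for x in index:
        if x < threshold:
            dur += 1
            sev += threshold - x
        elif dur > 0:
            events.append((dur, sev))
            dur, sev = 0, 0.0
    if dur > 0:
        events.append((dur, sev))
    return np.array(events)   # columns: duration, severity

# Toy monthly index series standing in for SPI values.
rng = np.random.default_rng(1)
spi = rng.normal(0.0, 1.0, 600)
dv = drought_events(spi)
print(f"{len(dv)} droughts; mean duration {dv[:, 0].mean():.1f} months, "
      f"mean severity {dv[:, 1].mean():.2f}")
```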

Book chapters on the topic "Bivariate frequency analysis"

1

Morris, C. D., and S. J. Calise. "Bivariate Analysis of Concurrent Flooding." In Hydrologic Frequency Modeling, 615–32. Dordrecht: Springer Netherlands, 1987. http://dx.doi.org/10.1007/978-94-009-3953-0_44.

2

Goodarzi, Ehsan, Mina Ziaei, and Lee Teang Shui. "Evaluation of Dam Overtopping Risk Based on Univariate and Bivariate Flood Frequency Analyses." In Introduction to Risk and Uncertainty in Hydrosystem Engineering, 123–41. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-94-007-5851-3_6.

3

Aviyente, Selin. "Multivariate Methods for Functional Connectivity Analysis." In The Oxford Handbook of EEG Frequency, 514—C21.P135. Oxford University Press, 2022. http://dx.doi.org/10.1093/oxfordhb/9780192898340.013.21.

Abstract:
Abstract Conventional functional connectivity metrics such as correlation, coherence and phase synchrony are mostly limited to quantifying bivariate relationships in the brain. Thus, they cannot capture the whole brain dynamics, i.e. multivariate synchronization patterns, which are important for understanding the underlying oscillatory networks. In the past two decades, two complementary sets of approaches have been developed to address this issue. The first class of methods focus on computing metrics based on the spectrum of the pairwise bivariate connectivity matrix. The second class of methods have been inspired by progress in graph theory. Graph theoretic measures have emerged as an important way to characterize the topology of the functional connectivity network. In this chapter, we review both of these approaches illustrating the advantages of each class of metrics and discussing the shortcomings.
4

Palva, J. Matias, and Satu Palva. "Bivariate Functional Connectivity Measures for Within- and Cross-Frequency Coupling of Neuronal Oscillations." In The Oxford Handbook of EEG Frequency, 495–513. Oxford University Press, 2022. http://dx.doi.org/10.1093/oxfordhb/9780192898340.013.20.

Abstract:
Abstract Rhythmic brain activity fluctuations, neuronal oscillations, are ubiquitous in spontaneous brain activity in electrophysiological recordings at all measurable scales and in all species. Oscillations and their dynamics are produced by various synaptic and cellular mechanisms, which shape the temporal properties of neuronal activity at the brain circuit level. At the systems level, the dynamics of neuronal oscillations and correlations in their activity time series—functional connectivity (FC)—play a fundamental role in behavior and cognitive functions in humans and animal models. The accurate estimation of the dynamics and FC of neuronal oscillations in electrophysiological recordings, is hence of critical importance. This chapter provides an overview of the essential features of the local (univariate) oscillatory signals and the corresponding analysis methods for estimating their dynamics. It outlines methods for estimating dynamics and bivariate (pairwise) FC of brain oscillations among electrophysiological time series. Finally, it discusses the limitations posed by the typical confounders in MEG and EEG data.
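A standard example of the bivariate, within-frequency coupling measures reviewed in this chapter is the phase-locking value (PLV). The sketch below computes it for two toy channels; the frequency band, filter order and signal parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def plv(x, y, fs, band=(8.0, 12.0)):
    """Phase-locking value between two signals in a frequency band (alpha band
    by default, an illustrative choice): band-pass filter, take instantaneous
    phases via the Hilbert transform, and average the phase-difference
    unit vectors."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    px = np.angle(hilbert(filtfilt(b, a, x)))
    py = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (px - py))))

# Two toy 'channels' sharing a phase-locked 10 Hz rhythm plus independent noise.
fs = 250.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(7)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.4) + 0.5 * rng.standard_normal(t.size)
print(f"PLV(x, y) = {plv(x, y, fs):.2f}")
```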
5

Kar, Reshma, Amit Konar, and Aruna Chakraborty. "Detection of Music-Induced Emotion Changes by Functional Brain Networks." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 155–77. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-3129-6.ch007.

Abstract:
This chapter discusses emotions induced by music and attempts to detect emotional states based on regional interactions within the brain. The brain network theory largely attributes statistical measures of interdependence as indicators of brain region interactions/connectivity. In this paper, the authors studied two bivariate models of brain connectivity and employed thresholding based on relative values among electrode pairs, in order to give a multivariate flavor to these models. The experimental results suggest that thresholding the brain connectivity measures based on their relative strength increase classification accuracy by approximately 10% and 8% in time domain and frequency domain respectively. The results are based on emotion recognition accuracy obtained by decision tree based linear support vector machines, considering the thresholded connectivity measures as features. The emotions were categorized as fear, happiness, sadness, and relaxation.
6

Turel, Vehbi. "The Use and Design of Supplementary Visuals for the Enhancement of Listening Skills in Hypermedia." In Advances in Public Policy and Administration, 268–91. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-6248-3.ch016.

Abstract:
This chapter investigates the attitudes and opinions (perceptions) of 43 language learners (LLs) towards the use of supplementary contextual visuals (SCVs) in a HME for enhancing listening skills as a part of Foreign Language Learning (FLL). Forty-three LLs' attitudes towards the use of SCVs are examined in 3 areas: (1) at the pre-listening stage in preparing for listening texts, (2) with talking-heads video at the while-listening stage, and (3) with audio-only at the post-listening stage. The study is both quantitative and qualitative in nature. The results are analysed with SPSS (i.e. descriptive statistics including frequency, percentage, valid percentages, and cumulative percentages; Spearman test in Bivariate; Chi-square Test in the Crosstabs analysis; Fisher's exact Test). The results reveal that the LLs are in favour of the use of SCVs (a) at the pre-listening stage for preparation, (b) with talking-heads video at the while-listening stage, and (c) with audio-only clips at the post-listening stage. The LLs believe that SCVs could benefit them in a variety of ways that could contribute to the enhancement of their listening. There are also some significant relationships between their perceptions and some independent variables.
7

Vazquez, Manuel. "REGRESSION ANALYSIS AND BIVARIATE FREQUENCY DISTRIBUTIONS OF DAILY SOLAR RADIATION DATA." In Renewable Energy, Technology and the Environment, 2755–59. Elsevier, 1992. http://dx.doi.org/10.1016/b978-0-08-041268-9.50065-1.

8

Knutsen, Ingrid Ruud, Unni Johnsrud, Stine Jessli Slorafoss, Antonie Grasmo Haugen, and Pål Joranger. "We Are No Better Than the Weakest Link: Nurses’ Experiences With Medication Management in Primary Healthcare." In Medication Safety in Municipal Health and Care Services, 367–90. Cappelen Damm Akademisk/NOASP, 2022. http://dx.doi.org/10.23865/noasp.172.ch17.

Abstract:
Today patients are discharged earlier from hospital, and consequently, an increasing number of seriously ill patients are being followed up by the primary healthcare services, and use various medications. Errors in pharmaceutical treatment, which cause deaths and adverse events, are among the errors most frequently reported. In this study, we explored experience, competence and competence needs related to medication management among nurses in primary healthcare. One hundred and ten nurses working in four municipalities in southeastern Norway were invited to fill in a paper-based questionnaire, and 87 responded (79%). Bivariate and cross-table analyses were performed. Of these, 84% considered their medication management competence to be good or very good, but 70% of the nurses did not feel confident about drug interaction, and 45% were not confident about the effects and side effects of medication. Further, 55% had administered medication incorrectly or to the wrong patient (35%). The most common adverse event was to administer medication at the wrong time. The most common way to update one’s knowledge was by reading the Norwegian Pharmaceutical Product Compendium (95%), and through dialogue with colleagues and doctors (94%). Most of the nurses (75%–85%) expressed a need for more knowledge. There was little difference between nurses working in home nursing care and in nursing homes. Despite reporting a low incidence of errors, few nurses have taken part in formal training after qualifying. Our findings indicate a special need for structural measures to increase nurses’ competence related to medication and medication management in primary healthcare.

Conference papers on the topic "Bivariate frequency analysis"

1

Vanem, Erik. "Bivariate Regional Frequency Analysis of Sea State Conditions." In ASME 2021 40th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/omae2021-61988.

Abstract:
Abstract This paper presents a bivariate regional frequency analysis applied to data of extreme significant wave heights and concurrent wave period over an area in the North Atlantic Ocean. It extends previous regional frequency analysis on significant wave height to the bivariate case where the joint distribution of significant wave height and zero up-crossing wave period are analysed. This is believed to be an important extension, as the joint distribution is typically needed for marine design and other ocean engineering applications. The analysis presented in this paper is based on a bivariate index-wave/period approach and assumes a common regional growth curve within homogeneous regions of the overall area. One of the main benefits of performing a regional frequency analysis as opposed to at-site analysis based on data from one location only is that more accurate predictions of extreme conditions can be obtained, due to the increased number of observations that will effectively be available. Moreover, results for locations within the regions where no observations are available can be obtained by interpolation of the index wave/period and utilizing the regional growth curve. This paper outlines the various steps and modelling choices involved in a bivariate regional frequency analysis and presents the results of such an analysis applied to 30 years of data covering the North Atlantic Ocean. Moreover, it is shown how environmental contours can be constructed based on the outcome of the bivariate RFA, corresponding to one particular definition of a bivariate return period.
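The index-wave step at the core of this regional approach — scale each site by a site-specific index value, pool the scaled data, fit one regional growth curve, and scale back — can be sketched as follows for significant wave height alone (the paper extends this to the joint distribution of significant wave height and zero up-crossing period). The site values and distribution choices are assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical annual-maximum significant wave heights (m) at three sites of
# a homogeneous region (values are illustrative, not North Atlantic data).
rng = np.random.default_rng(3)
site_means = {"A": 9.0, "B": 11.0, "C": 13.0}
sites = {name: rng.gumbel(mu, 0.12 * mu, size=30) for name, mu in site_means.items()}

# Index-wave step: scale each site's sample by its index value (the site mean here).
index = {name: hs.mean() for name, hs in sites.items()}
pooled = np.concatenate([hs / index[name] for name, hs in sites.items()])

# Regional growth curve: a single distribution fitted to the pooled, scaled data.
loc, scale = stats.gumbel_r.fit(pooled)
growth_100 = stats.gumbel_r.ppf(1.0 - 1.0 / 100.0, loc, scale)

# Back-transform: site-specific 100-year estimate = index value x growth factor.
for name in sites:
    print(f"site {name}: 100-year Hs ~ {index[name] * growth_100:.1f} m")
```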
2

Salleh, Norizzati, Fadhilah Yusof, and Zulkifli Yusop. "Bivariate copulas functions for flood frequency analysis." In ADVANCES IN INDUSTRIAL AND APPLIED MATHEMATICS: Proceedings of 23rd Malaysian National Symposium of Mathematical Sciences (SKSM23). Author(s), 2016. http://dx.doi.org/10.1063/1.4954612.

3

Kwon, Young-Moon, Jeong-Woo Han, and Tae-Woong Kim. "Application of Bivariate Frequency Analysis for Estimating Design Rainfalls." In World Environmental and Water Resources Congress 2008. Reston, VA: American Society of Civil Engineers, 2008. http://dx.doi.org/10.1061/40976(316)616.

4

Stankovic, Ljubisa, Milos Brajovic, Milos Dakovic, and Danilo Mandic. "Two-component bivariate signal decomposition based on time-frequency analysis." In 2017 22nd International Conference on Digital Signal Processing (DSP). IEEE, 2017. http://dx.doi.org/10.1109/icdsp.2017.8096048.

5

Raynal-Villasenor, Jose A., and Jose D. Salas. "Using Bivariate Distributions for Flood Frequency Analysis Based on Incomplete Data." In World Environmental and Water Resources Congress 2008. Reston, VA: American Society of Civil Engineers, 2008. http://dx.doi.org/10.1061/40976(316)618.

6

Giustarini, L., S. Camici, A. Tarpanelli, L. Brocca, F. Melone, and T. Moramarco. "Dam Spillways Adequacy Evaluation through Bivariate Flood Frequency Analysis and Hydrological Continuous Simulation." In World Environmental and Water Resources Congress 2010. Reston, VA: American Society of Civil Engineers, 2010. http://dx.doi.org/10.1061/41114(371)241.

7

U. Sikder, Iftikhar, and James J. Ribero. "Application of Cross-Wavelet and Singular Value Decomposition on Covid-19 and Bio-Physical Data." In 11th International Conference on Embedded Systems and Applications (EMSA 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120612.

Abstract:
The paper examines the bivariate relationship between COVID-19 and temperature time series using Singular Value Decomposition (SVD) and continuous cross-wavelet analysis. The COVID-19 incidence data and the temperature data of the corresponding period were transformed using SVD into significant eigen-state vectors for each spatial unit. Wavelet transformation was performed to analyze and compare the frequency structure of the single and the bivariate time series. The result provides coherency measures over ranges of time periods for the corresponding spatial units. Additionally, the wavelet power spectrum, paired wavelet coherence statistics, and phase differences were estimated. The results suggest statistically significant coherency at various frequencies. They also indicate complex conjugate dynamic relationships in terms of phases and phase differences.
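The SVD step described in the abstract — reducing each space-time data matrix to its dominant temporal eigen-state before the coupled analysis — can be sketched with NumPy. The data are synthetic proxies, and a plain correlation of the leading modes stands in for the paper's cross-wavelet coherence.

```python
import numpy as np

# Hypothetical space x time matrices: rows = spatial units, columns = days.
rng = np.random.default_rng(5)
days = np.arange(365)
seasonal = np.sin(2 * np.pi * days / 365)
cases = np.exp(0.5 * seasonal + rng.normal(0.0, 0.3, (20, 365)))   # incidence proxy
temp = 15 + 10 * seasonal + rng.normal(0.0, 1.5, (20, 365))        # temperature proxy

def leading_mode(matrix):
    """First singular triplet of a row-centred space-time matrix; the right
    singular vector is the dominant temporal 'eigen-state'."""
    m = matrix - matrix.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u[:, 0], s[0], vt[0]

_, _, cases_mode = leading_mode(cases)
_, _, temp_mode = leading_mode(temp)

# Simple time-domain association between the two leading temporal modes
# (the paper goes further and compares them with cross-wavelet coherence).
r = np.corrcoef(cases_mode, temp_mode)[0, 1]
print(f"correlation of leading temporal modes: {r:+.2f}")
```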
8

Dong, Sheng, and Jinjin Ning. "Applications of a Compound Distribution on Estimating Wind and Wave Parameters for Fixed Platforms Design." In 25th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/omae2006-92189.

Abstract:
Based on the hindcast data of 21 storm processes, a Poisson bivariate Logistic extreme value distribution is proposed to estimate the joint probability of extreme wind speed and extreme significant wave height in the storms, the frequency of which can be described by a Poisson distribution. In order to calculate the structural response of an ocean platform, such as base shear, three methods are utilized, namely (I) traditional univariate frequency analysis method; (II) base shear return value method; (III) wind-wave joint probability method. Calculation results show that the proposed statistical model is suitable for the design of fixed platforms in the storm-influenced area.
9

Baghai-Wadji, A. R. "Material, geometry, and frequency independent bivariate universal functions for the analysis of mechanical and electrical loading effects in acoustic devices: A Fast-MoM approach." In 1999 IEEE Ultrasonics Symposium. Proceedings. International Symposium. IEEE, 1999. http://dx.doi.org/10.1109/ultsym.1999.849362.

10

Ayala, Diego, Wilson Padilla, Bolivar Araujo, and Silvia Ayala. "Upthrust in Electrosubmersible Pumps and Failure Analysis Applied Chi Square Test, Case Study Ecuador." In SPE Middle East Artificial Lift Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/206920-ms.

Abstract:
Abstract The present study was carried out to determine the relationship between the operating curves of electrical submersible pumps (ESPs) and possible mechanical failures when more fluid is pumped than the equipment was designed for. The study focuses on finding relevant information that makes it possible to identify a trend in the occurrence of premature wear in the equipment; by applying the Chi-square test, it is expected to determine whether a relationship between failures and overproduction exists, providing a probabilistic basis for this operational reality. Correct well selection is fundamental: the ESPs must have technical and operational similarities and even produce from the same pay zone and the same field, and this uniformity creates a baseline that facilitates the analysis of failures versus the operational condition of the ESPs. The statistical study is carried out from the frequency of occurrence of failures, and these data are later used to apply the Chi-square test and establish whether there is dependence between failures and overproduction. Overproduction has a negative effect on the equipment; in the selected wells this condition is aggravated when the well reaches a %BSW greater than 92%, causing a 40% increase in failures, which is evident even in infant-mortality failures at the beginning of the ESP's operational life. Under working conditions within the design range, the first failures appear up to the fourth month, but in upthrust this time is shortened to the first month after installation of the pump. The operational challenges of the wells vary over time, and this dynamic of requirements must be considered as a design factor, because at some point the well will increase its water production rate and the production requirements will force changes in the operational variables, demanding that the equipment work outside its maximum-efficiency point. This work provides a bivariate probabilistic analysis focused on providing additional elements to support, with objective criteria, the claim that overproduction affects ESP equipment. There is wide discussion about the operating conditions that can cause failures in ESPs; this study identifies the critical value of BSW beyond which the equipment is affected when pumps operate in upthrust, and identifying this operating variable will allow the design of pumps that consider overproduction as one of the conditions so that performance is not affected. It has been widely proven that working out of range affects equipment performance; however, there is no mathematical function that describes the relationship between upthrust and the failures generated. This study performs a bivariate analysis using the Chi-square test to establish the correlation between failures and the range of operation, thus providing elements that support this link and increasing knowledge about ESP behavior in real conditions.
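The core statistical step of the study, a chi-square test of independence between operating condition and failure occurrence, can be sketched with SciPy. The contingency table below is hypothetical and does not reproduce the paper's field data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table of ESP outcomes: rows = operating condition,
# columns = failure observed within the study period (yes / no).
table = np.array([[34, 16],    # operated in upthrust (above design range)
                  [18, 42]])   # operated within the design range

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p-value = {p:.4f}")
# A small p-value would indicate that failure frequency is not independent of
# the operating condition, which is the dependence the study sets out to test.
```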
