Academic literature on the topic 'Log-linear parameters'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Log-linear parameters.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Log-linear parameters"

1

Alba, Richard D. "Interpreting the Parameters of Log-Linear Models." Sociological Methods & Research 16, no. 1 (August 1987): 45–77. http://dx.doi.org/10.1177/0049124187016001003.
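
For readers new to the topic, the following minimal numpy sketch (with an invented 2x2 table, not an example from Alba's article) shows what the effect-coded parameters of a saturated log-linear model look like, and why the two-way term is read as one quarter of the log odds ratio:

    import numpy as np

    # Hypothetical 2x2 contingency table of counts (rows: A, columns: B).
    counts = np.array([[40.0, 10.0],
                       [20.0, 30.0]])

    logm = np.log(counts)                      # saturated model: log m_ij
    grand = logm.mean()                        # lambda (grand mean)
    row_eff = logm.mean(axis=1) - grand        # lambda^A_i (effect coding)
    col_eff = logm.mean(axis=0) - grand        # lambda^B_j
    interaction = logm - grand - row_eff[:, None] - col_eff[None, :]  # lambda^AB_ij

    # Under effect coding, the (1,1) interaction term is one quarter of the
    # log odds ratio, which is the usual interpretation of the association term.
    log_or = logm[0, 0] - logm[0, 1] - logm[1, 0] + logm[1, 1]
    print(interaction[0, 0], log_or / 4)       # identical up to rounding
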

2

Evans, Robin J., and Thomas S. Richardson. "Marginal log-linear parameters for graphical Markov models." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 75, no. 4 (July 3, 2013): 743–68. http://dx.doi.org/10.1111/rssb.12020.
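
As a toy illustration of why the word "marginal" matters here (made-up probabilities, not the paper's graphical-model machinery), the association measured in a two-way margin of a 2x2x2 table generally differs from the association conditional on the third variable:

    import numpy as np

    # Hypothetical 2x2x2 table of probabilities for variables (X, Y, Z).
    p = np.array([[[0.10, 0.05], [0.05, 0.20]],
                  [[0.20, 0.05], [0.05, 0.30]]])

    def log_or(t):
        """Log odds ratio of a 2x2 table."""
        return np.log(t[0, 0]) - np.log(t[0, 1]) - np.log(t[1, 0]) + np.log(t[1, 1])

    # Conditional XY log odds ratios within each level of Z ...
    cond = [log_or(p[:, :, k]) for k in (0, 1)]
    # ... versus the XY log odds ratio computed in the (X, Y) margin.
    marg = log_or(p.sum(axis=2))
    print(cond, marg)   # the marginal association need not equal the conditional ones
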

3

Haber, Michael. "Log-Linear Models for Linked Loci: Variances of Estimated Parameters." Biometrical Journal 30, no. 5 (January 19, 2007): 589–93. http://dx.doi.org/10.1002/bimj.4710300513.

4

Danaher, Peter J. "A Log-Linear Model for Predicting Magazine Audiences." Journal of Marketing Research 25, no. 4 (November 1988): 356–62. http://dx.doi.org/10.1177/002224378802500403.

Abstract:
A log-linear model for predicting magazine exposure distributions is developed and its parameters are estimated by the maximum likelihood technique. The log-linear model is compared empirically with the best-found model for equal-insertion schedules, one of Leckenby and Kishi's Dirichlet multinomial models. For unequal-insertion schedules the log-linear model is compared with the popular Metheringham beta-binomial model. The results show that the log-linear model has significantly smaller prediction errors than either of the other models.
5

Mukhopadhyay, Anis Chandra, and Rabindra Nath Das. "Inference on log-linear regression model parameters with composite autocorrelated errors." Model Assisted Statistics and Applications 10, no. 3 (July 20, 2015): 231–42. http://dx.doi.org/10.3233/mas-150327.

6

Gasmi, Soufiane. "Estimating parameters of a log-linear intensity for a repairable system." Applied Mathematical Modelling 37, no. 6 (March 2013): 4325–36. http://dx.doi.org/10.1016/j.apm.2012.09.050.

7

Habib, Elsayed Ali. "Estimation of Log-Linear-Binomial Distribution with Applications." Journal of Probability and Statistics 2010 (2010): 1–13. http://dx.doi.org/10.1155/2010/423654.

Abstract:
The log-linear-binomial distribution was introduced for describing the behavior of the sum of dependent Bernoulli random variables. The distribution is a generalization of the binomial distribution that allows construction of a broad class of distributions. In this paper, we consider the problem of estimating the two parameters of the log-linear-binomial distribution by moment and maximum likelihood methods. The distribution is used to fit genetic data and to obtain the sampling distribution of the sign test under dependence among trials.
8

Janjic, Tomislav, Gordana Vuckovic, and Milenko Celap. "Theoretical consideration and application of the SP and SP' scales in RP chromatographic systems in which Everett’s equation is valid." Journal of the Serbian Chemical Society 67, no. 3 (2002): 179–86. http://dx.doi.org/10.2298/jsc0203179j.

Abstract:
It is shown that in the case of ODS and less polar modifiers the log k values are a linear function of the SP′ parameters. These findings differ from earlier investigated systems, in which a linear dependence between log k and SP parameters (SP = log SP′) was found. Both linear relationships have been analyzed and the corresponding possible separation mechanisms have been considered. In addition, the advantages of normalization of both scales are shown, as well as how they can then be applied in the investigation of substance congenerity.
9

Bucca, Mauricio. "Heatmaps for Patterns of Association in Log-Linear Models." Socius: Sociological Research for a Dynamic World 6 (January 2020): 2378023119899219. http://dx.doi.org/10.1177/2378023119899219.

Abstract:
Log-linear models offer a detailed characterization of the association between categorical variables, but the breadth of their outputs is difficult to grasp because of the large number of parameters these models entail. Revisiting seminal findings and data from sociological work on social mobility, the author illustrates the use of heatmaps as a visualization technique to convey the complex patterns of association captured by log-linear models. In particular, turning log odds ratios derived from a model’s predicted counts into heatmaps makes it possible to summarize large amounts of information and facilitates comparison across models’ outcomes.
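
A bare-bones version of the visualization the article proposes might look as follows; the counts are invented stand-ins for a fitted model's predicted counts, and the local log odds ratios of adjacent cells are rendered with matplotlib:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical table of predicted counts from a fitted log-linear model
    # (e.g., origin x destination classes in a mobility table).
    rng = np.random.default_rng(0)
    m = rng.uniform(5, 100, size=(5, 5))

    # Local log odds ratios for adjacent cells, derived from predicted counts.
    lor = (np.log(m[:-1, :-1]) - np.log(m[:-1, 1:])
           - np.log(m[1:, :-1]) + np.log(m[1:, 1:]))

    plt.imshow(lor, cmap="RdBu_r", vmin=-np.abs(lor).max(), vmax=np.abs(lor).max())
    plt.colorbar(label="local log odds ratio")
    plt.xlabel("destination class")
    plt.ylabel("origin class")
    plt.title("Pattern of association as a heatmap")
    plt.show()
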
10

Novak, Thomas P. "Log-Linear Trees: Models of Market Structure in Brand Switching Data." Journal of Marketing Research 30, no. 3 (August 1993): 267–87. http://dx.doi.org/10.1177/002224379303000301.

Abstract:
Log-linear trees restrict the log-linear model of quasi-symmetry so that parameters are interpretable as arc lengths in an additive tree. The tree representation can be interpreted further in terms of consumer heterogeneity, affording a dual interpretation in terms of both market structure and opportunities for market segmentation.

Dissertations / Theses on the topic "Log-linear parameters"

1

Nicolussi, Federica. "Marginal parametrizations for conditional independence models and graphical models for categorical data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/43679.

Abstract:
Graphical models (GM) for categorical data are models useful for representing conditional independencies through graphs. Parametric marginal models for categorical data have useful properties for the asymptotic theory. This work is focused on finding which GMs can be represented by marginal parametrizations. Following Theorem 1 of Bergsma, Rudas and Németh [9], we have proposed a method to identify when a GM is parametrizable according to a marginal model. We have applied this method to the four types of GMs for chain graphs, summarized by Drton [22]. In particular, with regard to the so-called GM of type II and GM of type III, we have found the subclasses of these models which are parametrizable with marginal models and which are therefore smooth. As for the so-called GM of type I and GM of type IV, it is known in the literature that these models are smooth, and we have provided a new proof of this result. Finally, we have applied the main results concerning the GM of type II to the EVS dataset.
2

Sung, Chin-Hsiung (宋志雄). "Evaluation of Parameter Estimations in Log-Linear Model under Zero-Inflated Count Data." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/46545339679568069406.

Abstract:
Master's thesis, Department of Statistics, National Taipei University, ROC academic year 103 (2014–2015).
More and more customers use credit cards or electronic purses to pay their bills instead of cash. On average, people have more than one credit card, yet only a few of those cards are actually used, so data on consumer credit card usage contain many zeros. Such data with many zeros are called zero-inflated count data. To deal with the excess zeros, Lambert (1992) proposed a zero-inflated Poisson distribution. The most popular model for consumer consumption behavior was proposed by Ehrenberg (1959) and is called the plain vanilla model. To take excess zeros into account, Wu (2008) combined the Beta distribution with the plain vanilla model and proposed a beta-binomial model. Based on the derivation in Wu (2008), this thesis proposes combining the Beta distribution with a logistic model to deal with excess zeros. To understand the sensitivity of the distributional assumption, Monte Carlo simulation is conducted. Under various settings, the absolute bias and the prediction error are used to evaluate the performance of the estimators. A real data set is used to illustrate the feasibility of the proposed model.
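
For background on the zero-inflated Poisson distribution cited in the abstract, here is a self-contained sketch (invented parameters, and not the thesis's beta-logistic model) that maximizes the ZIP likelihood of Lambert (1992) with scipy:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    rng = np.random.default_rng(1)
    n, pi_true, lam_true = 2000, 0.4, 2.5   # hypothetical ZIP parameters
    zeros = rng.random(n) < pi_true
    y = np.where(zeros, 0, rng.poisson(lam_true, n))

    def negloglik(theta):
        """Negative log-likelihood of the zero-inflated Poisson."""
        pi, lam = 1 / (1 + np.exp(-theta[0])), np.exp(theta[1])  # keep in range
        ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
        ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
        return -np.where(y == 0, ll_zero, ll_pos).sum()

    fit = minimize(negloglik, x0=[0.0, 0.0], method="BFGS")
    pi_hat = 1 / (1 + np.exp(-fit.x[0]))
    lam_hat = np.exp(fit.x[1])
    print(pi_hat, lam_hat)   # close to 0.4 and 2.5
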
3

Yu, Kuan Yi (余冠毅). "Some discussions on the performance of parameter estimations of the log-linear model and its extended models under the zero-inflated count data." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/72534318388068624846.

Abstract:
Master's thesis, Department of Statistics, National Taipei University, ROC academic year 103 (2014–2015).
The most common parametric assumption for analyzing count data is the Poisson distribution. However, it is built on the assumption that the mean of the data equals the variance. Nowadays, owing to the rapid development of storage technology, data are abundant and come from many different sources, and they often no longer satisfy the assumption that the mean equals the variance. Adding a new dispersion parameter to the Poisson distribution, Consul and Jain (1970) proposed the generalized Poisson distribution. Mullahy (1986) suggested combining the Bernoulli and Poisson distributions to take into account the excess zeros in the data, which yields the zero-inflated Poisson distribution. The generalized linear model is often used to model the association between count data and potential covariates. The model is often constructed under the Poisson distribution and a log link assumption. When the assumption of equal mean and variance is violated, the Poisson assumption is relaxed to the generalized Poisson or zero-inflated Poisson. Since the Poisson distribution is relatively simple and easy to use for statistical inference, the purpose of this thesis is to evaluate the sensitivity of the distributional assumption on different data types using Monte Carlo simulations. Four different types of data along with many simulation settings are generated. The parameter estimators of the generalized linear model under four different distribution assumptions are obtained. The sensitivity is assessed through the bias of the estimates and the mean square error.
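
The kind of Monte Carlo sensitivity study the abstract describes can be sketched in a few lines; this simplified example (settings invented) measures the bias and mean square error of the plain Poisson rate estimate when the data are actually zero-inflated:

    import numpy as np

    rng = np.random.default_rng(2)
    reps, n, pi, lam = 1000, 500, 0.3, 2.0     # hypothetical simulation settings

    est = np.empty(reps)
    for r in range(reps):
        # Generate zero-inflated counts but fit a plain Poisson (misspecified).
        y = np.where(rng.random(n) < pi, 0, rng.poisson(lam, n))
        est[r] = y.mean()                      # Poisson MLE of the rate

    bias = est.mean() - lam                    # bias if the mean is read as lambda
    mse = ((est - lam) ** 2).mean()
    print(bias, mse)                           # roughly -pi * lam: badly biased
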

Books on the topic "Log-linear parameters"

1

Back, Kerry E. Representative Investors. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190241148.003.0007.

Abstract:
There is a representative investor at any Pareto optimal competitive equilibrium. If investors have linear risk tolerance with the same cautiousness parameter, then there is a representative investor with the same utility function. When there is a representative investor, there is a factor model with the representative investor’s marginal utility of consumption as the factor. If the representative investor has constant relative risk aversion, then the risk‐free return and log equity premium can be calculated in terms of moments of aggregate consumption. The equity premium and risk‐free rate puzzles are explained. The coskewness‐cokurtosis pricing model and the Rubinstein option pricing model are derived.
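
The equity premium puzzle mentioned in the abstract can be illustrated with a back-of-the-envelope calculation under CRRA utility and i.i.d. lognormal consumption growth; the numbers below are illustrative, in the spirit of Mehra and Prescott, and are not taken from the book:

    # Back-of-the-envelope version of the equity premium puzzle.
    sigma_c = 0.036        # std. dev. of log aggregate consumption growth
    premium = 0.06         # observed log equity premium

    # With CRRA coefficient gamma, the model-implied premium is roughly
    # gamma * sigma_c**2, so matching the data requires:
    gamma = premium / sigma_c**2
    print(gamma)           # about 46, an implausibly high risk aversion
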

Book chapters on the topic "Log-linear parameters"

1

Balakrishnan, N., and P. S. Chan. "Log-gamma order statistics and linear estimation of parameters." In Handbook of Statistics, 61–83. Elsevier, 1998. http://dx.doi.org/10.1016/s0169-7161(98)17005-0.

2

Ayad, Amel, Fabrice Mutelet, and Amina Negadi. "Temperature-Dependent Linear Solvation Energy Relationship for the Determination of Gas-Liquid Partition Coefficients of Organic Compounds in Ionic Liquids." In Recent Advances in Gas Chromatography. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.102733.

Abstract:
In this work, a new group contribution method was used for calculating gas-to-ionic liquid partition coefficients (log KL) of molecular solutes in ILs with a temperature-dependent linear solvation energy relationship. About 36 group parameters are used to correlate 14,762 log KL data points of organic compounds in ionic liquids. The experimental log KL data have been collected from the published literature for different solutes in ionic liquids at different temperatures within the range of 293.15–396.35 K. The calculated log KL data showed a satisfactory agreement with experimental data with an average absolute relative deviation (AARD) of 6.39%.
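
The structure of such a group-contribution correlation, and the AARD metric quoted above, can be sketched as an ordinary least-squares problem; everything below is synthetic (a handful of fake groups rather than the chapter's 36 temperature-dependent parameters):

    import numpy as np

    rng = np.random.default_rng(3)
    n_obs, n_groups = 200, 8
    counts = rng.integers(0, 4, size=(n_obs, n_groups)).astype(float)  # group counts
    X = np.column_stack([np.ones(n_obs), counts])                      # intercept + groups
    true_params = np.concatenate([[5.0], rng.normal(0, 0.3, n_groups)])
    log_kl = X @ true_params + rng.normal(0, 0.05, n_obs)              # synthetic "data"

    params, *_ = np.linalg.lstsq(X, log_kl, rcond=None)                # fitted parameters
    pred = X @ params

    # Average absolute relative deviation, the error metric quoted in the abstract.
    aard = np.mean(np.abs((pred - log_kl) / log_kl)) * 100
    print(params.round(2), f"AARD = {aard:.2f}%")
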
3

"Parameter Interpretation and Significance Tests." In Log-Linear Modeling, 133–60. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118391778.ch7.

4

Conrad Koch, Michael, Kazunori Fujisawa, and Akira Murakami. "Numerical Gradient Computation for Simultaneous Detection of Geometry and Spatial Random Fields in a Statistical Framework." In Inverse Problems - Recent Advances and Applications [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.108363.

Abstract:
The target of this chapter is the evaluation of gradients in inverse problems where spatial field parameters and geometry parameters are treated separately. Such an approach can be beneficial especially when the geometry needs to be detected accurately using L2-norm-based regularization. Emphasis is laid upon the computation of the gradients directly from the governing equations. Working in a statistical framework, the Karhunen-Loève (K-L) expansion is used for discretization of the spatial random field and inversion is done using the gradient-based Hamiltonian Monte Carlo (HMC) algorithm. The HMC gradients involve sensitivities w.r.t the random spatial field and geometry parameters. Building on a method developed by the authors, a procedure is developed which considers the gradients of the associated integral eigenvalue problem (IEVP) as well as the interaction between the gradients w.r.t random spatial field parameters and the gradients w.r.t the geometry parameters. The same mesh and linear shape functions are used in the finite element method employed to solve the forward problem, the artificial elastic deformation problem and the IEVP. Analysis of the rate of convergence using seven different meshes of increasing density indicates a linear rate of convergence of the gradients of the log posterior.
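
A minimal sketch of the Karhunen-Loève discretization mentioned in the abstract (an assumed exponential covariance on a uniform 1-D grid, ignoring quadrature weights, and with none of the chapter's HMC or FEM machinery) is:

    import numpy as np

    x = np.linspace(0.0, 1.0, 200)
    ell, sigma2 = 0.2, 1.0                                 # assumed covariance scales
    C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)  # exponential kernel

    # Discrete analogue of the integral eigenvalue problem (IEVP).
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    m = 10                                                  # truncation order
    rng = np.random.default_rng(4)
    xi = rng.standard_normal(m)                             # K-L coefficients
    field = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)    # one field realization
    print(field.shape, eigvals[:m].round(3))
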
5

Kamaraj A., Selva Nidhyananthan S, and Kalyana Sundaram C. "Voice Biometric for Learner Authentication." In Biometric Authentication in Online Learning Environments, 150–81. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7724-9.ch007.

Abstract:
The objective of this chapter is to verify the identity of the claimed learner by extracting the prosodic features of the speech signal. The TIMIT Acoustic-Phonetic Continuous Speech Corpus is used for learner verification using prosodic and articulation features such as energy, pitch, and formants. The prosodic feature includes pitch (F0), and the articulation features include the formants (F1–F7). From this database, 200 learners were used in the training phase and 160 learners in the testing phase. The pitch and formants were extracted using linear predictive analysis, and the first seven formants were used for verification, giving a feature set of eight features. The features are fed into a Gaussian mixture model, whose parameters are estimated from the training and testing data using the iterative expectation-maximization algorithm. A log-likelihood score is computed using these parameters, the scores are normalized, and the decision is made based on a threshold.
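
The verification logic described here (a model for the claimed learner, a background model, and a normalized log-likelihood score against a threshold) can be sketched with scikit-learn; the Gaussian features below merely stand in for the chapter's pitch and formant vectors:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    train_claimed = rng.normal(0.0, 1.0, size=(500, 8))     # claimed learner's features
    train_background = rng.normal(0.5, 1.5, size=(2000, 8)) # everyone else

    gmm_claimed = GaussianMixture(n_components=4, random_state=0).fit(train_claimed)
    gmm_background = GaussianMixture(n_components=4, random_state=0).fit(train_background)

    test = rng.normal(0.0, 1.0, size=(50, 8))               # utterance to verify
    # score_samples returns per-frame log-likelihoods; averaging normalizes
    # for utterance length before the threshold decision.
    llr = (gmm_claimed.score_samples(test).mean()
           - gmm_background.score_samples(test).mean())
    print("accept" if llr > 0.0 else "reject", llr)
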
6

Ordóñez, Diego, Carlos Dafonte, Bernardino Arcay, and Minia Manteiga. "Connectionist Systems and Signal Processing Techniques Applied to the Parameterization of Stellar Spectra." In Soft Computing Methods for Practical Environment Solutions, 187–203. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-61520-893-7.ch012.

Abstract:
A stellar spectrum is the fingerprint of a particular star, the result of radiation transport through its atmosphere. The physical conditions in the stellar atmosphere, its effective temperature, surface gravity, and the presence and abundance of chemical elements explain the observed features in stellar spectra, such as the shape of the overall continuum and the presence and strength of particular lines and bands. The derivation of the atmospheric stellar parameters from a representative sample of stellar spectra collected by ground-based and spatial telescopes is essential when a realistic view of the Galaxy and its components is to be obtained. In the last decade, extensive astronomical surveys recording information on large portions of the sky have become a reality thanks to the development of robotic or semi-automated telescopes. The Gaia satellite is one of the key missions of the European Space Agency (ESA), and its launch is planned for 2011. Gaia will carry out the so-called Galaxy Census by extracting precise information on the nature of its main constituents, including the spectra of objects (Wilkinson, 2005). Traditional methods for the extraction of the fundamental atmospheric stellar parameters (effective temperature (Teff), gravity (log G), metallicity ([Fe/H]), and abundance of alpha elements [α/Fe], elements that are integer multiples of the mass of the helium nucleus) are time-consuming and unapproachable for a massive survey involving 1 billion objects (about 1% of the Galaxy's constituents) such as Gaia. This work presents the results of the authors' study and shows the feasibility of an automated extraction of the previously mentioned stellar atmospheric parameters from near-infrared spectra in the wavelength region of the Gaia Radial Velocity Spectrograph (RVS). The authors' approach is based on a technique that has already been applied to problems of the non-linear parameterization of signals: artificial neural networks. It breaks ground in the consideration of transformed domains (Fourier and Wavelet transforms) during the preprocessing stage of the spectral signals in order to select the frequency resolution that is best suited for each atmospheric parameter. The authors have also progressed in estimating the noise (SNR) that blurs the signal on the basis of its power spectrum and the application of noise-dependent parameterization algorithms. This study has provided additional information that allows them to progress in the development of hybrid systems devoted to the automated classification of stellar spectra.
7

Zimeras, S., and Y. Matsinos. "Modelling Spatial Medical Data." In Effective Methods for Modern Healthcare Service Quality and Evaluation, 75–89. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9961-8.ch004.

Abstract:
Models are sometimes incomplete, especially in scaling data where information about large regions needs to be predicted from smaller ones. Uncertainty analysis is the process of assessing uncertainty in modelling or scaling to identify major uncertainty sources, quantify their degree and relative importance, examine their effects on model output under different scenarios, and determine prediction accuracy. This is especially true for high-dimensional data, where spatial processes in regional investigations are difficult to apply due to incompleteness, which leads to spatial heterogeneity and non-linearity in the data. Modelling the uncertainty, particularly in scaling data, starts with a general structure (most often linear) that explains the real data as accurately as possible, and the model is built by adding variables which are significant or which aid in prediction (hierarchical modelling). Parameter estimation is an important issue for the evaluation of these proposed models. Statistical techniques based on spatial modelling can be proposed to overcome the problem of dimensionality and to handle the spatial homogeneity between different grain levels based on the neighbourhood structure of regions with similar characteristics. Investigation of the neighbourhood structure could be carried out using kriging or variogram techniques. In this work, we introduce and analyse methodologies for scaling data under uncertainty, where incomplete data can be explained by spatial modelling at different scales. Incomplete data with uncertainties in regions involve spatial homogeneity based on the neighbourhood structure between regions, which can be captured using spatial modelling techniques (such as spatial autocorrelation, partition functions, and multilevel models). Parameter estimation for these models can be achieved using stochastic methods (spatial hierarchical models, kriging, autocorrelation). Comparison between different models can be carried out with statistical measures such as the log-likelihood ratio test: the best model is the one which best explains the real data.
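
The log-likelihood ratio test mentioned at the end compares nested models in a standard way; a minimal sketch with hypothetical maximized log-likelihood values:

    from scipy.stats import chi2

    loglik_null = -1524.3      # hypothetical maximized log-likelihoods
    loglik_alt = -1510.8
    extra_params = 3           # difference in number of parameters

    lr_stat = 2.0 * (loglik_alt - loglik_null)
    p_value = chi2.sf(lr_stat, df=extra_params)
    print(lr_stat, p_value)    # a small p-value favors the richer model
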

Conference papers on the topic "Log-linear parameters"

1

"Optimization of Log-linear Machine Translation Model Parameters Using SVMs." In 8th International Workshop on Pattern Recognition in Information Systems. SciTePress - Science and Technology Publications, 2008. http://dx.doi.org/10.5220/0001739000480056.
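
The log-linear translation model whose parameters such work tunes scores each hypothesis by a weighted sum of feature functions; a minimal sketch with made-up weights and feature values (the paper's contribution, the SVM-based tuning itself, is not reproduced here):

    import numpy as np

    # Standard log-linear MT scoring: hypothesis e for source f gets
    # score(e) = sum_i lambda_i * h_i(e, f); the decoder picks the argmax.
    lambdas = np.array([0.8, 0.3, 1.1])           # feature weights (invented)
    # Rows: candidate translations; columns: h_i such as log LM score,
    # log translation-model score, and a length penalty.
    H = np.array([[-12.1, -8.4, -0.5],
                  [-11.3, -9.9, -0.2],
                  [-13.0, -7.1, -0.9]])

    scores = H @ lambdas                          # log-linear model scores
    best = int(np.argmax(scores))
    print(scores.round(3), "-> pick hypothesis", best)
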
2

Pratikno, Helmi, W. John Lee, and Cesario K. Torres. "Application of Multiple Diagnostic Plots to Identify End of Linear Flow in Unconventional Reservoirs." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/205906-ms.

Abstract:
This paper presents a method to identify the switch time from the end of linear flow (telf) to transition or boundary-dominated flow (BDF) by utilizing multiple diagnostic plots, including a Modified Fetkovich type curve (Eleiott et al. 2019). In this study, we used publicly available production data to analyze transient linear flow behavior and boundary-dominated flow from multiple unconventional reservoirs. The method applies a log-log plot of rate versus time combined with a log-log plot of rate versus material balance time (MBT). In addition to the log-log plots, a specialized plot of rate versus square root of time is used to confirm telf. A plot of MBT versus actual time, t, is provided to convert material balance time to actual time, and vice versa. The Modified Fetkovich type curve is used to estimate decline parameters and reservoir properties. Applications of this method using monthly production data from the Bakken and Permian shale areas are presented in this work. Utilizing public data, our comprehensive review of approximately 800 multi-stage fractured horizontal wells (MFHW) from North American unconventional reservoirs found many of them exhibiting linear flow production characteristics. To identify the end of linear flow, a log-log plot of rate versus time alone is not sufficient, especially when a well is not operated in a consistent manner. This paper shows that using additional diagnostic plots, such as rate versus MBT and specialized plots, can help interpreters better identify the end of linear flow. With the end of linear flow determined for these wells, the interpreter can use telf to forecast future production and estimate reservoir properties using the modified type curve. These diagnostic plots can be added to existing production analysis tools so that engineers can analyze changes in flow regimes in a timely manner, better understand how to forecast their wells, and reduce the uncertainty in estimated ultimate recoveries related to decline parameters.
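
The half-slope idea behind the rate-versus-time diagnostic can be sketched on a synthetic well: during transient linear flow, log q falls against log t with slope close to -1/2, and a steepening local slope flags the move toward boundary-dominated flow (all numbers invented, and this is not the paper's Modified Fetkovich workflow):

    import numpy as np

    rng = np.random.default_rng(6)
    t = np.arange(1.0, 61.0)                             # months, synthetic well
    q = 1000.0 * t**-0.5                                 # transient linear flow
    q[t > 36] = q[t > 36] * (t[t > 36] / 36.0) ** -0.7   # steeper decline after telf
    q *= rng.lognormal(0.0, 0.03, t.size)                # measurement noise

    logt, logq = np.log(t), np.log(q)
    window = 12
    for start in range(0, t.size - window, 6):
        s = slice(start, start + window)
        slope = np.polyfit(logt[s], logq[s], 1)[0]       # local log-log slope
        print(f"months {t[s][0]:>4.0f}-{t[s][-1]:>4.0f}: slope = {slope:+.2f}")
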
3

Wei, Zhigang, Richard C. Rice, Masataka Yatomi, and Kamran M. Nikbin. "The Equivalency-Based Linear Regression Method for Statistical Analysis of Creep/Fatigue Data." In ASME 2010 Pressure Vessels and Piping Division/K-PVP Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/pvp2010-25691.

Abstract:
Materials scientists and mechanical engineers working on structural integrity are making increasing use of statistical analysis in interpreting creep/fatigue data, as such data contain an inherent scatter which cannot be substantially reduced even under controlled testing conditions. In practice, in most cases the uniaxial failure or cracking data can be reasonably approximated by a straight line on log-log coordinates, indicating that there is a linear log relationship with the appropriate correlating parameter. Linear regression is the most used method in statistical data analysis and is recommended in American Society for Testing and Materials (ASTM), British Standards (BS), Det Norske Veritas (DNV), and many other engineering standards. Recently, the current practice on linear regression as adopted by the engineering standards has been critically reviewed, and the shortcomings of these procedures have been clearly demonstrated. A new statistical method based on the equivalency between all variables involved has been proposed for S-N curve analysis. In this paper, a large amount of creep and fatigue data on engineering materials, collected from several well-known databases generated in the US, Europe, and Japan, is systematically analyzed with both the conventional standard and the new equivalency method. The results are compared and discussed. Finally, a recommendation to improve the fitting parameters by taking into account the scatter in both axes is presented.
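
The closing recommendation, accounting for scatter in both axes, is essentially the contrast between ordinary and total (orthogonal) least squares; a sketch on synthetic log-log data, not the paper's equivalency formulation itself:

    import numpy as np

    rng = np.random.default_rng(7)
    x_true = np.linspace(0.0, 3.0, 60)
    x = x_true + rng.normal(0, 0.1, x_true.size)     # scatter in x too
    y = 5.0 - 1.5 * x_true + rng.normal(0, 0.1, x_true.size)

    # OLS slope (errors assumed in y only).
    ols_slope = np.polyfit(x, y, 1)[0]

    # TLS slope from the SVD of the centered data: the fitted line is
    # orthogonal to the smallest singular direction.
    A = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    nx, ny = vt[-1]                                  # normal vector of the best line
    tls_slope = -nx / ny
    print(ols_slope, tls_slope)   # OLS is biased toward zero; TLS less so
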
4

Crews, John H., and Ralph C. Smith. "Density Function Optimization for the Homogenized Energy Model of Shape Memory Alloys." In ASME 2011 Conference on Smart Materials, Adaptive Structures and Intelligent Systems. ASMEDC, 2011. http://dx.doi.org/10.1115/smasis2011-5036.

Abstract:
In this paper, we present two methods for optimizing the density functions in the homogenized energy model (HEM) of shape memory alloys (SMA). The density functions incorporate the polycrystalline behavior of SMA by accounting for material inhomogeneities and localized interaction effects. One method represents the underlying densities for the relative stress and interaction stress as log-normal and normal probability density functions, respectively. The optimal parameters in the underlying densities are found using a genetic algorithm. A second method represents the densities as a linear parameterization of log-normal and normal probability density functions. The optimization algorithm determines the optimal weights of the underlying densities. For both cases, the macroscopic model is integrated over the localized constitutive behavior using these densities. In addition, the estimation of model parameters using experimental data is described. Both optimized models accurately and efficiently quantify the SMA’s hysteretic dependence on stress and temperature, making the model suitable for use in real-time control algorithms.
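
The second method, a linear parameterization of fixed densities with optimized weights, can be imitated in a few lines; the sketch below fits nonnegative weights of log-normal basis densities to a made-up target by least squares rather than the paper's genetic algorithm, and omits the normal interaction-stress densities:

    import numpy as np
    from scipy.optimize import nnls
    from scipy.stats import lognorm

    x = np.linspace(0.01, 10.0, 400)
    target = (0.6 * lognorm.pdf(x, s=0.4, scale=1.5)
              + 0.4 * lognorm.pdf(x, s=0.3, scale=4.0))   # invented target density

    scales = np.linspace(0.5, 6.0, 12)                    # basis density locations
    basis = np.column_stack([lognorm.pdf(x, s=0.35, scale=c) for c in scales])

    weights, residual = nnls(basis, target)               # nonnegative weights
    print(weights.round(3), residual)
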
5

Han, Zhilei, Yunjiang Cui, Sainan Xu, and Chao Ma. "Experimental Research on Mechanical Properties for Hydraulic Fracture Design of Weak Sands: A Case Study in Bohai." In Offshore Technology Conference. OTC, 2022. http://dx.doi.org/10.4043/32072-ms.

Abstract:
In recent years, in order to improve the oil production of unconsolidated sand reservoirs, hydraulic fracturing through the screen liner has been carried out in the Bohai oilfield. Traditional hydraulic fracture design methods usually assume the rock is elastic, while weak sands are often nonlinear elastic rocks. This study investigates how to optimize the mechanical parameters used in hydraulic fracture design to best approximate the rock elastic properties under in-situ formation conditions, and shows how to derive them from well logs. We performed uniaxial strain and triaxial stress compression experiments on five and seven groups of core samples from well P6 and well P8, respectively. Each group of samples had five plugs from similar depths, one of which was designed for the uniaxial strain experiment and the other four for the triaxial stress experiment. Linear regression analyses and extrapolations were carried out for each set of core data to find the proper mechanical parameters for the fracturing design. Quantitative conversion formulas between core analysis and well-log-derived results for these moduli were eventually established. The reservoir rocks of the P oilfield located in Bohai are relatively weak and have low stiffness. Mechanical experiments show that there is a linear correlation between deformation modulus and effective confining pressure. Core data analyses indicate that the constrained modulus provides the best approximation of rock modulus under initial reservoir conditions; its value is the ratio of stress to strain (from the initial linear data) in the uniaxial strain experiment. Therefore, it is recommended to use the constrained modulus as the mechanical parameter in hydraulic fracture design. Meanwhile, there is a strong linear relationship between the constrained modulus and Young's modulus, the former being roughly six times greater than the latter. Combining this with the conversion between core and well-log-derived Young's modulus, it is possible to estimate the constrained modulus from well logs. The method proposed in this research was used for the fracturing design of well P45. After fracturing, daily oil production increased from 399.3 to 2,462.9 ft3, and liquid production increased four times. Based on studies of rock mechanics laboratory data, we propose that the constrained modulus is the appropriate parameter in the hydraulic fracture design of soft sediments, and this has been verified by actual production data. The new method provides a reliable reference for the hydraulic fracture design of weakly consolidated sands in Bohai and other similar oilfields.
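
For context on the reported factor of about six between the two moduli: in idealized linear elasticity the constrained (oedometer) modulus M and Young's modulus E are linked through Poisson's ratio by M/E = (1 - nu) / ((1 + nu)(1 - 2 nu)), so a factor of six corresponds to a nearly incompressible Poisson's ratio (a textbook relation used for intuition, not a calculation from the paper):

    from scipy.optimize import brentq

    def ratio(nu):
        """M/E as a function of Poisson's ratio in linear elasticity."""
        return (1 - nu) / ((1 + nu) * (1 - 2 * nu))

    nu = brentq(lambda v: ratio(v) - 6.0, 0.0, 0.49)
    print(nu)   # about 0.47, consistent with very soft, weak sands
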
6

Safarov, Asad, Vusal Iskandarov, and David Solomonov. "Application of Machine Learning Techniques for Rate of Penetration Prediction." In SPE Annual Caspian Technical Conference. SPE, 2022. http://dx.doi.org/10.2118/212088-ms.

Abstract:
In this paper, several supervised machine learning algorithms are used to develop a model for rate of penetration prediction. To train the models, real-time drilling parameters and geological log data from 3 distinct wells in the South Caspian basin are used. Different machine learning techniques, such as linear and non-linear models and deep artificial neural networks, were trained on the well data. The evaluation metric for training is root mean square error; however, the performance of the regressions is evaluated using R-squared for comparison. Rate of penetration, or simply ROP, is the speed at which the drill bit penetrates the formation. Overall, it indicates the rate at which the borehole deepens. Its value depends on the drilling parameters, such as weight on bit, applied torque, mud flow rate, rotation per minute, and others. In addition, the mechanical strength of the rock formation also plays a great role, and well log data are used to estimate this value for each point. That is why these features in the training datasets have high variability. Comparing the various techniques, Random Forest gives the most optimal model in terms of accuracy and computational power, with an average R-squared of 0.90. Although RNN and LSTM models can give nearly the same fit for the given test data, they take considerably more time to train due to their complexity and show relatively lower accuracy on test data, so they are not a reasonable choice. Furthermore, another deep learning model is deployed to generate well logs for the following sections, which supports optimizing ROP and drilling performance.
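
The winning Random Forest setup can be sketched with scikit-learn; the features below are synthetic stand-ins for the drilling parameters and a log-derived strength proxy, not the Caspian well data:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    n = 5000
    X = rng.uniform(0.0, 1.0, size=(n, 5))   # WOB, torque, flow, RPM, strength proxy
    rop = 20 * X[:, 0] + 10 * X[:, 1] * X[:, 3] - 15 * X[:, 4] + rng.normal(0, 1.0, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, rop, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(r2_score(y_te, model.predict(X_te)))   # R-squared on held-out data
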
7

Mittal, Manish K., Robello Samuel, and Aldofo Gonzales. "Wear-Factor Prediction Based on Data-Driven Inversion Technique for Casing Wear Estimation." In ASME 2020 39th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/omae2020-19035.

Abstract:
The wear factor is an important parameter for estimating casing wear, yet the industry lacks a sufficient data-driven wear-factor prediction model based on previous data. The inversion technique is a data-driven method for evaluating model parameters in a setting where the input and output values for the physical model/equation are known. In this case, the physical equation to calculate wear volume has the wear factor, side force, RPM, tool-joint diameter, and time for a particular operation (i.e., rotating on bottom, rotating off bottom, sliding, back reaming, etc.) as inputs. Except for the wear factor, these values are either available or can be calculated using another physical model (the wear-volume output is available from the drilling log). The wear factor is considered the model parameter and is estimated using the inversion technique. The preceding analysis was performed using soft-string and stiff-string models for side-force calculations and by considering linear and nonlinear wear-factor models. An iterative approach was necessary for the nonlinear wear-factor model because of its complexity. Log data provide the remaining thickness of the casing, which was converted into wear volume using standard geometric calculations. A paper [1] presented at OMC 2019 discussed a method for bridging this gap, including a study of a real well based on the new method with successful results. The current paper extends that study to casing wear prediction for another real well using this novel approach. Some of the methods discussed are already included in the mentioned paper.
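
The inversion idea reduces, in its simplest form, to solving the wear-volume relation for the wear factor interval by interval; the sketch below uses a simplified relation, V = K x side force x sliding distance, with invented numbers, not the paper's soft-string/stiff-string models or nonlinear wear-factor form:

    import numpy as np

    side_force = np.array([8.0e3, 1.2e4, 9.5e3])       # N, per operation interval
    rpm = np.array([120.0, 100.0, 140.0])
    hours = np.array([50.0, 80.0, 60.0])
    tool_joint_d = 0.127                               # m, tool-joint diameter
    wear_volume = np.array([2.1e-6, 4.0e-6, 3.2e-6])   # m^3, from caliper logs

    # Sliding distance for rotation: circumference x revolutions.
    sliding_dist = np.pi * tool_joint_d * rpm * 60.0 * hours   # meters
    K = wear_volume / (side_force * sliding_dist)              # inverted wear factor
    print(K)
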
8

Cohn, Marvin J., and Steve R. Paterson. "Evaluation of Historical Longitudinal Seam Weld Failures in Grades 11, 12, and 22 Materials." In ASME 2008 Pressure Vessels and Piping Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/pvp2008-61245.

Abstract:
Since the catastrophic HEP seam weld failures of Mohave (1985) and Monroe (1986), electric power utilities have become more interested in developing and implementing examination and fitness-for-service evaluations of their HEP systems. At least 30 failures or substantial cracks in Grade 11 (1-1/4Cr – 1/2Mo), Grade 12 (1Cr – 1/2Mo) and Grade 22 (2-1/4 Cr – 1Mo) pipe longitudinal seam welds or clamshell welds have occurred from 1979 through 2000. This paper provides a statistical analysis of well-characterized Grade 11, Grade 12, and Grade 22 longitudinal seam weld failures or substantial cracks developed by long term creep rupture damage. Considering several applicable hoop stress parameters, linear regression analyses were performed to minimize scatter about a log stress versus Larson Miller Parameter (LMP) curve fit. Each of six applicable hoop stress equations was evaluated to determine the best fit stress space for the longitudinal seam weld failure data. These service experience industry data include pipe thicknesses ranging from 0.5 to 4.5 inches and failure times ranging from 71,000 to 278,000 hours.
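
The regression described here can be sketched in a few lines: the Larson-Miller parameter LMP = T(C + log10 t) collapses stress-rupture data onto a single log-stress line, and the scatter about that fit is what the paper minimizes (the data and the constant C below are invented):

    import numpy as np

    rng = np.random.default_rng(9)
    C = 20.0                                           # assumed LMP constant
    T = rng.uniform(750.0, 850.0, 40)                  # K, hypothetical test temperatures
    t_rupture = 10 ** rng.uniform(3.0, 5.5, 40)        # hours to failure
    lmp = T * (C + np.log10(t_rupture))
    log_stress = 4.0 - 1.0e-4 * lmp + rng.normal(0, 0.02, 40)  # synthetic trend

    slope, intercept = np.polyfit(lmp, log_stress, 1)  # log stress vs LMP fit
    residual_sd = np.std(log_stress - (slope * lmp + intercept))
    print(slope, intercept, residual_sd)               # scatter about the curve fit
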
9

Lassen, Tom, Jose L. Arana, Luis Canada, Jan Henriksen, and Nina K. Holthe. "Crack Growth in High Strength Chain Steel Subjected to Fatigue Loading in a Corrosive Environment." In ASME 2005 24th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2005. http://dx.doi.org/10.1115/omae2005-67242.

Abstract:
The present article presents the fatigue crack growth behavior of new high strength steels designated grade R4S. Eight Compact Tension (CT) specimens of 25 mm thickness were subjected to constant amplitude loading while exposed to seawater without and with cathodic protection. The cathodic potential (CP) was set to −890 mV and −1100 mV relative to an Ag/AgCl reference cell. Rates in air are included as a reference. The crack growth parameters were determined from a linear relation between da/dN and ΔK on a log-log scale. The derived figures are given in the table below. The figures for dry air and cathodic protection are valid for ΔK between 15 and 30 MPa·m^0.5. Below this range the slope m of the linear relation will change, and further investigations have to be carried out for this region. The figures for free corrosion are valid for ΔK values from 10 to 30 MPa·m^0.5; the threshold value for ΔK is close to 5 MPa·m^0.5 in this case. The measured growth rates were compared with the rates for medium strength carbon-manganese steels found in rules and regulations, i.e., BS7910. The present growth rates are well within the scatter band given for these steels in air and free corrosion. The growth rates found in seawater with cathodic protection are, however, substantially lower than the rates given in BS7910. When a cathodic potential of −1100 mV was applied, crack closure was observed at medium levels of ΔK. The explanation is the formation of calcareous deposits in the wake of the crack front, which gives significantly reduced growth rates and finally leads to crack closure. This finding is a surprise for a high strength steel; the results are promising and should be investigated further. Finally, a linear elastic fracture mechanics model was established to study the fatigue behavior of a stud-less chain link. The model was used to construct S-N curves that are consistent with experimental fatigue lives and the design curve given in the DNV rules. The present growth parameters were used in conjunction with a crack-like initial flaw with depth in the range from 0.12 to 0.20 mm. The differences found between the growth rates in dry air and in free corrosion are in accordance with tested fatigue lives for these two environments.
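
The parameter determination described in the abstract is a linear fit on log-log coordinates of the Paris relation da/dN = C(ΔK)^m; a sketch with invented test data, not the R4S measurements:

    import numpy as np

    rng = np.random.default_rng(10)
    delta_k = np.linspace(15.0, 30.0, 25)                       # MPa*m^0.5
    dadn = 1e-11 * delta_k**3.0 * rng.lognormal(0.0, 0.1, 25)   # m/cycle, synthetic

    # Straight-line fit on log-log coordinates recovers the growth parameters.
    m_fit, logc_fit = np.polyfit(np.log10(delta_k), np.log10(dadn), 1)
    C_fit = 10 ** logc_fit
    print(m_fit, C_fit)   # slope m near 3; the intercept gives C
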