Journal articles on the topic 'Markov chains; interval analysis; sensitivity analysis'

To see the other types of publications on this topic, follow the link: Markov chains; interval analysis; sensitivity analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Markov chains; interval analysis; sensitivity analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Xingliang, Jinliang Xu, Menghui Li, and Jia Peng. "Sensitivity Analysis Based SVM Application on Automatic Incident Detection of Rural Road in China." Mathematical Problems in Engineering 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/9583285.

Full text
Abstract:
Traditional automatic incident detection methods such as artificial neural networks, backpropagation neural networks, and Markov chains are not well suited to the incident detection problem of rural roads in China, which have a relatively high accident rate and a slow incident response owing to their small traffic volumes. This study applies the support vector machine (SVM) and parameter sensitivity analysis methods to build an incident detection algorithm under rural road conditions, based on real-time data collected in a field experiment. The sensitivity of four parameters (speed, front distance, vehicle group time interval, and free driving ratio) is analyzed, and the data sets of the two parameters with significant sensitivity are chosen to form the traffic state feature vector. The SVM and k-fold cross-validation (K-CV) methods are used to build the incident detection algorithm, which shows excellent detection accuracy (98.15% on the training data set and 87.5% on the testing data set). Therefore, the problem of slow incident response on rural roads in China could be solved to some extent.
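The k-fold cross-validation (K-CV) scheme the abstract relies on can be sketched in a few lines. This is a generic illustration of the splitting step, not the authors' code; the sample count, fold count and seed below are arbitrary.

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Split sample indices into k disjoint folds; each fold serves once
    as the test set while the remaining folds form the training set."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [(sorted(set(idx) - set(fold)), sorted(fold)) for fold in folds]
```

A classifier such as an SVM is then trained on each training split and scored on the held-out fold, and the k scores are averaged to estimate detection accuracy.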
APA, Harvard, Vancouver, ISO, and other styles
2

Elmesmari, Nasir, Farag Hamad, and Abdelbaset Abdalla. "Parameters Estimation Sensitivity of the Linear Mixed Model To Alternative Prior Distribution Specifications." Scholars Journal of Physics, Mathematics and Statistics 8, no. 9 (November 11, 2021): 166–70. http://dx.doi.org/10.36347/sjpms.2021.v08i09.001.

Full text
Abstract:
Markov chain Monte Carlo (MCMC) is the most widely used method for estimating joint posterior distributions in Bayesian analysis. In this paper, we established a linear mixed model with different types of variables and carried out MCMC simulations to estimate its parameters under different prior distributions. The proposed parameters of the prior distribution differ from the traditional ones: we assumed specific prior parameters based on background knowledge about the data. This work aims to estimate the parameters using a point estimator or to find a credible interval for the unknown parameters; a specific hypothesis about these parameters can also be tested using a random sample from the posterior distribution. The performance of each prior is measured by the effective sample size (ESS) of the estimated model. The results showed that the linear mixed model estimated with the proposed prior parameters performed very well in comparison with the standard or traditional prior (an inverse-Wishart prior for the random effect component). Based on the scale reduction factors, the model estimated with the proposed parameters also performed better than the traditional model.
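The effective sample size (ESS) used above to score each prior is typically computed from the autocorrelations of an MCMC trace, ESS = N / (1 + 2·Σρ(lag)). The sketch below uses a simple truncation at the first non-positive autocorrelation; it illustrates the idea only and is not the authors' implementation.

```python
def effective_sample_size(chain, max_lag=None):
    """ESS = n / (1 + 2 * sum of leading positive autocorrelations)."""
    n = len(chain)
    mean = sum(chain) / n
    var = sum((x - mean) ** 2 for x in chain) / n
    if var == 0:
        return float(n)  # a constant chain carries no autocorrelation signal
    max_lag = max_lag or n // 2
    tau = 1.0  # integrated autocorrelation time
    for lag in range(1, max_lag):
        rho = sum((chain[i] - mean) * (chain[i + lag] - mean)
                  for i in range(n - lag)) / (n * var)
        if rho <= 0:  # truncate at the first non-positive autocorrelation
            break
        tau += 2 * rho
    return n / tau
```

A sticky chain (long runs of similar values) yields a small ESS, while a rapidly mixing chain yields an ESS close to the number of draws.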
APA, Harvard, Vancouver, ISO, and other styles
3

Weedon-Fekjær, Harald, Lars J. Vatten, Odd O. Aalen, Bo Lindqvist, and Steinar Tretli. "Estimating mean sojourn time and screening test sensitivity in breast cancer mammography screening: new results." Journal of Medical Screening 12, no. 4 (December 1, 2005): 172–78. http://dx.doi.org/10.1258/096914105775220732.

Full text
Abstract:
Objective: To assess whether new screening techniques, increased use of hormone replacement therapy, or the transition from breast cancer screening trials to large-scale screening programmes may influence the average time in the preclinical screening-detectable phase (mean sojourn time [MST]) or screening test sensitivity (STS). Setting: Screening and interval data for 395,188 women participating in the Norwegian Breast Cancer Screening Programme (NBCSP). Methods: Weighted non-linear least-squares regression estimates using a three-step Markov chain model, and a sensitivity analysis of the possible impact of opportunistic screening between ordinary breast cancer screening rounds. Results: MST was estimated at 6.1 (95% confidence interval [CI] 5.1–7.0) years for women aged 50–59 years, and 7.9 (95% CI 6.0–7.9) years for those aged 60–69 years. Correspondingly, STS was estimated at 58% (95% CI 52–64%) and 73% (95% CI 67–78%), respectively. Simulations revealed that opportunistic screening may give a moderate estimation bias towards higher MST and lower STS. Assuming a probable 21% higher background incidence, due to increased hormone replacement therapy use, MST estimates decreased to 3.9 and 5.0 years for the two age groups, and STS increased to 75% and 85%. Conclusions: The new estimates indicate that the screening-detectable phase is longer than found in previous mammography trials/programmes, but also that the sensitivity of the screening test is lower. Overall, the NBCSP detects more cancer cases than most previous trials/programmes.
APA, Harvard, Vancouver, ISO, and other styles
4

Sharma, Tarang, Peter Gøtzsche, and Oliver Kuss. "VP26 Comparing Statistical Methods For Meta-Analysis Of Rare Event Data." International Journal of Technology Assessment in Health Care 33, S1 (2017): 158–59. http://dx.doi.org/10.1017/s0266462317003166.

Full text
Abstract:
INTRODUCTION: We aimed to identify the validity and robustness of effect estimates for serious rare adverse events in clinical study reports of antidepressant trials, across different meta-analysis methods for rare binary events data (1,2). METHODS: Four serious rare adverse events (all-cause mortality, suicidality, aggressive behaviour and akathisia) were meta-analyzed using different methods (3). The Yusuf-Peto odds ratio (OR), which ignores studies with no events in the treatment arms, was compared with the alternative approaches of generalized linear mixed models (GLMM), conditional logistic regression, a Bayesian approach using Markov chain Monte Carlo (MCMC) and a beta-binomial regression model. RESULTS: Though the estimates for the four outcomes did not change substantially across the different analysis methods, the Yusuf-Peto method underestimated the treatment harm and overestimated its precision, especially when the estimated OR deviated greatly from 1. For example, the OR for suicidality in children and adolescents was 2.39 (95 percent confidence interval, CI 1.32 to 4.33) using the Yusuf-Peto method, but increased to 2.64 (95 percent CI 1.33 to 5.26) using conditional logistic regression, to 2.69 (95 percent CI 1.19 to 6.09) using beta-binomial regression, to 2.73 (95 percent CI 1.37 to 5.42) using the GLMM and finally to 2.87 (95 percent CI 1.42 to 5.98) using the MCMC approach. CONCLUSIONS: The method used for meta-analysis of rare events data influences the estimates obtained, and the exclusion of double-zero-event studies can give misleading results. To reduce bias and erroneous inferences, sensitivity analyses should be performed using different methods, and we recommend that the Yusuf-Peto approach no longer be used. Other methods, in particular the beta-binomial method that was shown to be superior, should be considered instead.
APA, Harvard, Vancouver, ISO, and other styles
5

Xu, Jie, Zhengyang Zhao, Qian Ma, Ming Liu, and Giuseppe Lacidogna. "Damage Diagnosis of Single-Layer Latticed Shell Based on Temperature-Induced Strain under Bayesian Framework." Sensors 22, no. 11 (June 2, 2022): 4251. http://dx.doi.org/10.3390/s22114251.

Full text
Abstract:
Under the framework of Bayesian theory, a probabilistic method for damage diagnosis of latticed shell structures based on temperature-induced strain is proposed. First, a new damage diagnosis index is proposed based on the correlation between temperature-induced strain and structural parameters. Then, Markov chain Monte Carlo is adopted to analyze the newly proposed diagnosis index, from which the frequency distribution histogram for the posterior probability of the diagnosis index is obtained. Finally, the confidence interval of the damage diagnosis is determined by the posterior distribution of the initial state (baseline condition), and the damage probability of the unknown state is calculated. The proposed method was validated by applying it to a latticed shell structure with a developed finite element model, where rod damage and bearing failure were diagnosed based on importance analysis and temperature sensitivity analysis of the rods. The analysis results show that the proposed method can successfully account for uncertainties in the strain response monitoring process and effectively diagnose the failure of important rods in the radial and annular directions, as well as of horizontal (x- and y-direction) bearings of the latticed shell structure.
APA, Harvard, Vancouver, ISO, and other styles
6

ACHIM, Luminiţa-Georgiana, Elena MITOI, Valentin MOLDOVEANU, and Codrut-Ioan TURLEA. "Credit Scoring – General Approach in the IFRS 9 Context." Audit Financiar 19, no. 162 (May 20, 2021): 384–96. http://dx.doi.org/10.20869/auditf/2021/162/014.

Full text
Abstract:
With the coming into force of the standard IFRS 9 – Financial Instruments in January 2018, financial institutions passed from an incurred-loss model to a forward-looking model for the computation of impairment losses. As such, the IFRS 9 models use point-in-time estimates of Probability of Default and Loss Given Default and provide a more faithful representation of credit risk at a given moment, as they are based on past experience as well as the most recent and forecasted economic conditions. However, given the short-term fluctuations in macroeconomic conditions, the final outcome of the expected credit loss models is highly volatile due to their sensitivity to the business cycle. With regard to Probability of Default estimation under IFRS 9, the most commonly used methods are Markov chains, survival analysis and single-factor models (Vasicek and Z-Shift). The development of the scorecards is still the same as in the case of the Internal Ratings Based Probability of Default models, encouraging institutions to use the already available credit rating systems and adjust the calibration. This paper outlines a non-exhaustive list of quantitative validation tests that would satisfy the requirements of the IFRS 9 standard.
APA, Harvard, Vancouver, ISO, and other styles
7

Cassandras, C. G., and S. G. Strickland. "On-line sensitivity analysis of Markov chains." IEEE Transactions on Automatic Control 34, no. 1 (1989): 76–86. http://dx.doi.org/10.1109/9.8651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lasserre, J. B. "Exact formula for sensitivity analysis of Markov chains." Journal of Optimization Theory and Applications 71, no. 2 (November 1991): 407–13. http://dx.doi.org/10.1007/bf00939928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Heidergott, Bernd, Haralambie Leahu, Andreas Löpker, and Georg Pflug. "Perturbation analysis of inhomogeneous finite Markov chains." Advances in Applied Probability 48, no. 1 (March 2016): 255–73. http://dx.doi.org/10.1017/apr.2015.16.

Full text
Abstract:
Abstract In this paper we provide a perturbation analysis of finite time-inhomogeneous Markov processes. We derive closed-form representations for the derivative of the transition probability at time t, with t > 0. Elaborating on this result, we derive simple gradient estimators for transient performance characteristics either taken at some fixed point in time t, or for the integrated performance over a time interval [0 , t]. Bounds for transient performance sensitivities are presented as well. Eventually, we identify a structural property of the derivative of the generator matrix of a Markov chain that leads to a significant simplification of the estimators.
APA, Harvard, Vancouver, ISO, and other styles
10

Rahimi, Ebrahim, Seyed Saeed Hashemi Nazari, Yaser Mokhayeri, Asaad Sharhani, and Rasool Mohammadi. "Nine-month Trend of Time-Varying Reproduction Numbers of COVID-19 in West of Iran." Journal of Research in Health Sciences 21, no. 2 (June 28, 2021): e00517-e00517. http://dx.doi.org/10.34172/jrhs.2021.54.

Full text
Abstract:
Background: The basic reproduction number (R0) is an important concept in infectious disease epidemiology and the most important parameter for determining the transmissibility of a pathogen. This study aimed to estimate the nine-month trend of the time-varying reproduction number (Rt) of the COVID-19 epidemic using the serial interval (SI) and Markov chain Monte Carlo in Lorestan, west of Iran. Study design: Descriptive study. Methods: This study was conducted using a cross-sectional method. The SI distribution was extracted from the data, and log-normal, Weibull, and Gamma models were fitted. To estimate the time-varying R0, a likelihood-based model was applied, which uses pairs of cases to estimate relative likelihood. Results: In this study, Rt was estimated for SI over 7-day and 14-day time-lapses from 27 February to 14 November 2020. To check the robustness of the R0 estimations, a sensitivity analysis was performed using different SI distributions to estimate the reproduction number over 7-day and 14-day time-lapses. The R0 ranged from 0.56 to 4.97 and from 0.76 to 2.47 for the 7-day and 14-day time-lapses, respectively. The doubling time was estimated to be 75.51 days (95% CI: 70.41, 81.41). Conclusions: The low R0 of COVID-19 in some periods in Lorestan, west of Iran, could be an indication of preventive interventions, namely quarantine and isolation. To control the spread of the disease, the reproduction number should be reduced by decreasing the transmission and contact rates and shortening the infectious period.
APA, Harvard, Vancouver, ISO, and other styles
11

Caswell, Hal. "Sensitivity analysis of discrete Markov chains via matrix calculus." Linear Algebra and its Applications 438, no. 4 (February 2013): 1727–45. http://dx.doi.org/10.1016/j.laa.2011.07.046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Dai, L. "Sensitivity analysis of stationary performance measures for Markov chains." Mathematical and Computer Modelling 23, no. 11-12 (June 1996): 143–60. http://dx.doi.org/10.1016/0895-7177(96)00069-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Wang, Ting, and Petr Plecháč. "Steady-State Sensitivity Analysis of Continuous Time Markov Chains." SIAM Journal on Numerical Analysis 57, no. 1 (January 2019): 192–217. http://dx.doi.org/10.1137/18m119402x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Solan, Eilon, and Nicolas Vieille. "Perturbed Markov chains." Journal of Applied Probability 40, no. 1 (March 2003): 107–22. http://dx.doi.org/10.1239/jap/1044476830.

Full text
Abstract:
We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
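The question studied here (how the stationary distribution moves when the transition matrix is perturbed) can be illustrated numerically with finite differences. The sketch below is a generic illustration, not the paper's graph-theoretic construction; compensating the perturbation on the diagonal, so rows stay stochastic, is one common convention and an assumption of this sketch.

```python
def stationary(P, iters=10_000):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def sensitivity(P, i, j, h=1e-6):
    """Finite-difference estimate of d(pi)/d(P[i][j]), subtracting h on the
    diagonal entry of row i so the row still sums to one."""
    Q = [row[:] for row in P]
    Q[i][j] += h
    Q[i][i] -= h
    base, pert = stationary(P), stationary(Q)
    return [(p - b) / h for b, p in zip(base, pert)]
```

For a two-state chain, increasing the probability of leaving state 0 lowers the stationary mass on state 0 and raises it on state 1, as the derivative signs confirm.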
APA, Harvard, Vancouver, ISO, and other styles
15

Solan, Eilon, and Nicolas Vieille. "Perturbed Markov chains." Journal of Applied Probability 40, no. 01 (March 2003): 107–22. http://dx.doi.org/10.1017/s0021900200022294.

Full text
Abstract:
We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
APA, Harvard, Vancouver, ISO, and other styles
16

Zhang, Guo Dong. "On the Sensitivity of the Solution of Nearly Uncoupled Markov Chains." SIAM Journal on Matrix Analysis and Applications 14, no. 4 (October 1993): 1112–23. http://dx.doi.org/10.1137/0614075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Cacuci, Dan G., and Mihaela Ionescu-Bujor. "Adjoint Sensitivity Analysis of Dynamic Reliability Models Based on Markov Chains—I: Theory." Nuclear Science and Engineering 158, no. 2 (February 2008): 97–113. http://dx.doi.org/10.13182/nse08-a2742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Chinyuchin, Yu M., and A. S. Solov'ev. "Application of Markov processes for analysis and control of aircraft maintainability." Civil Aviation High Technologies 23, no. 1 (February 26, 2020): 71–83. http://dx.doi.org/10.26467/2079-0619-2020-23-1-71-83.

Full text
Abstract:
The process of aircraft operation involves constant effects of various factors on its components, leading to accidental or systematic changes in their technical condition. Markov processes are a particular case of the stochastic processes that take place during aeronautical equipment operation. The relationship of the reliability characteristics with the cost of restoring the objects allows us to apply the analytic apparatus of Markov processes to the analysis and optimization of maintainability factors. The article describes two methods for the analysis and control of object maintainability, based on stationary and non-stationary Markov chains. The model of a stationary Markov chain is used for equipment whose event intensities are constant in time; for objects with time-varying event intensities, a non-stationary Markov chain is used. In order to reduce the number of mathematical operations in the analysis of aeronautical engineering maintainability using non-stationary Markov processes, an algorithm for their optimization is presented. The suggested methods of analysis by means of Markov chains allow comparative assessments of expected maintenance and repair costs for one or several objects of the same type, taking into account their original conditions and operation time. Maintainability control using Markov chains includes the search for the optimal maintenance and repair strategy, considering each state of an object, under which maintenance costs will be minimal. Applying these analysis and control methods to a controlled object allowed a predictive control model to be built, in which the expected costs for its maintenance and repair, as well as the required number of spare parts for each specified operating-time interval, are calculated.
The possibility of using the mathematical apparatus of Markov processes for a large number of objects with different distributions of reliability factors is shown. The software implementation of the described methods, as well as the use of adapted spreadsheet software, will help reduce the complexity of the calculations and improve data visualization.
APA, Harvard, Vancouver, ISO, and other styles
19

Singh, Salvinder, and Shahrum Abdullah. "Durability analysis using Markov chain modeling under random loading for automobile crankshaft." International Journal of Structural Integrity 10, no. 4 (August 12, 2019): 454–68. http://dx.doi.org/10.1108/ijsi-03-2018-0016.

Full text
Abstract:
Purpose: The purpose of this paper is to present a durability analysis predicting the reliability life cycle of an automobile crankshaft under random stress load using a stochastic process. Due to the limitations associated with the actual loading history obtained from experimental analysis, and the sensitivity of the strain gauge, fatigue reliability life cycle assessment has lower accuracy and efficiency for fatigue life prediction.
Design/methodology/approach: The proposed Markov process embeds the actual maximum and minimum stresses through a continuous updating process for the stress load history data, in order to reduce the large credible intervals and the missing loading points used for fatigue life prediction. With the reduced and missing loading intervals, the accuracy of fatigue life prediction for the crankshaft was validated using statistical correlation properties.
Findings: Fatigue reliability corresponded well, with an accuracy of 95–98 per cent and a mean squared error of 1.5–3 per cent for durability and mean cycles to failure. Hence, the proposed fatigue reliability assessment provides an accurate, efficient, fast and cost-effective durability analysis in contrast to costly and lengthy experimental techniques.
Research limitations/implications: An important implication of this study is durability-based life cycle assessment, developing the reliability and hazard rate index under random stress loading using stochastic modeling to improve the sensitivity of the strain gauge.
Practical implications: Durability analysis is one of the fundamental attributes for the safe operation of any component, especially in the automotive industry. Focusing on safety, structural health monitoring aims at quantifying the probability of failure under mixed-mode loading. In practice, diverse types of protective barriers are placed as safeguards against the hazards posed by system operation.
Social implications: Durability analysis deals with the longevity and dependability of parts, products and systems in any industry. More poignantly, it is about controlling risk: engineering incorporates a wide variety of analytical techniques designed to help engineers understand the failure modes and patterns of these parts, products and systems. This would enable the automotive industry to improve design and increase the life cycle, with the durability assessment field focusing on product reliability and sustainability assurance.
Originality/value: The accuracy of the simulated fatigue life was statistically correlated, within a 95 per cent boundary condition, with the actual fatigue life through a validation process using finite element analysis. Furthermore, the embedded Markov process has high accuracy in generating synthetic load histories for fatigue life cycle assessment. More importantly, the fatigue reliability life cycle assessment can be performed with high accuracy and efficiency when assessing the structural integrity of the component.
APA, Harvard, Vancouver, ISO, and other styles
20

DANS, SILVANA LAURA, MARIANA DEGRATI, SUSANA NOEMÍ PEDRAZA, and ENRIQUE ALBERTO CRESPO. "Effects of Tour Boats on Dolphin Activity Examined with Sensitivity Analysis of Markov Chains." Conservation Biology 26, no. 4 (May 24, 2012): 708–16. http://dx.doi.org/10.1111/j.1523-1739.2012.01844.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

CHENG, Yepeng, Hiroyuki OKAMURA, and Tadashi DOHI. "A Comprehensive Performance Evaluation on Iterative Algorithms for Sensitivity Analysis of Continuous-Time Markov Chains." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E103.A, no. 11 (November 1, 2020): 1252–59. http://dx.doi.org/10.1587/transfun.2019eap1171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

de Cooman, Gert, Filip Hermans, and Erik Quaeghebeur. "IMPRECISE MARKOV CHAINS AND THEIR LIMIT BEHAVIOR." Probability in the Engineering and Informational Sciences 23, no. 4 (August 4, 2009): 597–635. http://dx.doi.org/10.1017/s0269964809990039.

Full text
Abstract:
When the initial and transition probabilities of a finite Markov chain in discrete time are not well known, we should perform a sensitivity analysis. This can be done by considering as basic uncertainty models the so-called credal sets that these probabilities are known or believed to belong to, and by allowing the probabilities to vary over such sets. This leads to the definition of an imprecise Markov chain. We show that the time evolution of such a system can be studied very efficiently using so-called lower and upper expectations, which are equivalent mathematical representations of credal sets. We also study how the inferred credal set about the state at time n evolves as n → ∞: under quite unrestrictive conditions, it converges to a unique invariant credal set, regardless of the credal set given for the initial state. This leads to a non-trivial generalization of the classical Perron–Frobenius theorem to imprecise Markov chains.
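When the credal sets happen to be described by probability intervals on each transition row (a special case of the general model treated here, and an assumption of this sketch), the lower expectation reduces to a small greedy optimisation: start every probability at its lower bound and pour the remaining mass onto the states with the smallest function values.

```python
def lower_expectation(lo, hi, f):
    """Minimise sum_j p_j * f_j over probability vectors with
    lo[j] <= p[j] <= hi[j]; assumes sum(lo) <= 1 <= sum(hi)."""
    p = lo[:]
    rest = 1.0 - sum(lo)
    for j in sorted(range(len(f)), key=lambda j: f[j]):
        add = min(hi[j] - p[j], rest)  # greedily favour small f-values
        p[j] += add
        rest -= add
    return sum(pj * fj for pj, fj in zip(p, f))

def lower_transition_operator(rows_lo, rows_hi, f):
    """One step of the lower transition operator of an imprecise Markov
    chain: the row-wise lower expectation of f."""
    return [lower_expectation(lo, hi, f) for lo, hi in zip(rows_lo, rows_hi)]
```

The upper expectation follows by conjugacy as `-lower_expectation(lo, hi, [-x for x in f])`, and iterating the operator propagates the lower/upper bounds forward in time.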
APA, Harvard, Vancouver, ISO, and other styles
23

Moraes, Fernanda F., Virgílio José M. Ferreira Filho, Carlos Eduardo Durange de C. Infante, Luan Santos, and Edilson F. Arruda. "A Markov Chain Approach to Multicriteria Decision Analysis with an Application to Offshore Decommissioning." Sustainability 14, no. 19 (September 23, 2022): 12019. http://dx.doi.org/10.3390/su141912019.

Full text
Abstract:
This paper proposes a novel approach that makes use of continuous-time Markov chains and regret functions to find an appropriate compromise in the context of multicriteria decision analysis (MCDA). The method innovates in how uncertainty is related to the decision parameters, and it allows for a much more robust sensitivity analysis. The proposed approach avoids the drawbacks of arbitrary user-defined and method-specific parameters by defining transition rates that depend only upon the performances of the alternatives. This results in a flexible and easy-to-use tool that is completely transparent, reproducible, and easy to interpret. Furthermore, because it is based on Markov chains, the model allows for a seamless and innovative treatment of uncertainty. We apply the approach to an oil and gas decommissioning problem, which seeks a responsible manner in which to dismantle and deactivate production facilities. The experiments, which make use of published data on the decommissioning of the Brent field, account for 12 criteria and illustrate the application of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
24

Sibagatullin, S. K., A. S. Kharchenko, and L. D. Devyatchenko. "APPLICATION OF MARKOV CHAINS TO THE ANALYSIS OF BLAST FURNACE OPERATION EFFICIENCY." Izvestiya Visshikh Uchebnykh Zavedenii. Chernaya Metallurgiya = Izvestiya. Ferrous Metallurgy 61, no. 8 (October 24, 2018): 649–56. http://dx.doi.org/10.17073/0368-0797-2018-8-649-656.

Full text
Abstract:
The article presents the results of dynamic modeling of one of the most important parameters of any research object: the efficiency of its work. A blast furnace with a volume of 2014 m3 was chosen as the object of investigation. The main efficiency parameters of this object are, traditionally, daily productivity and specific coke consumption; these two parameters were generalized in this paper, taking into account the differing algebraic signs of their influence on the generalized efficiency index. With each of these parameters varied over 3 levels, the number of levels of the generalized efficiency index was determined as 3^2 = 9, so it was rational to adopt a 9-point scale measuring the profitability of efficient blast furnace operation. The two-dimensional array of primary data of volume N = 177 was transformed into a 9×9 transition matrix in order to process random transitions of the efficiency index from one state to another by the method of Markov chains with discrete states and time. The following parameters of the random process were calculated: for the long-term forecast, the stationary vector of state probabilities, the mean recurrence (return) time for each efficiency state, and the evaluation of blast furnace efficiency in points; for the short-term forecast, the first passage time from each state to any other state, the step number of a 'burst' of probability for each state that is certain at the initial moment of time, and the components of the efficiency index. It was established that the average level of the analyzed blast furnace efficiency (daily output 3702 tons and specific coke consumption 470 kg/ton) is achieved mainly through short-term transitions from low-efficiency states to high-efficiency states and vice versa.
The transfer of the system to more efficient and prolonged conditions is possible: as practice showed on the same blast furnace, after repair works to eliminate the distortion of the furnace profile, daily productivity increased to 5048 tons with a specific coke consumption of 445 kg/t, while the structure of the transition matrix and the calculated indicators of the Markov chain changed fundamentally in the direction of increasing the probabilities of the system remaining in, and transitioning to, more efficient states. The method of Markov chains with discrete states and time makes it possible to estimate the probable change in the operating parameters of a blast furnace over a given time interval, with constant levels of the parameters characterizing its operating conditions.
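The long-run quantities quoted above (the stationary vector of state probabilities and the average recurrence time of each state) are standard Markov-chain computations. A toy two-state sketch (not the paper's 9×9 matrix, which is not reproduced here) uses the identity that the mean return time of state i is the reciprocal of its stationary probability:

```python
def stationary_vector(P, steps=5000):
    """Stationary state probabilities of a regular chain by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def mean_recurrence_times(P):
    """Mean time to return to each state: m_i = 1 / pi_i."""
    return [1.0 / p for p in stationary_vector(P)]
```

States the chain visits rarely (small stationary probability) therefore have long mean recurrence times, which is the basis for long-term forecasts of this kind.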
APA, Harvard, Vancouver, ISO, and other styles
25

Zhao, Changxiao, Peng Wang, and Fang Yan. "Reliability Analysis of the Reconfigurable Integrated Modular Avionics Using the Continuous-Time Markov Chains." International Journal of Aerospace Engineering 2018 (September 18, 2018): 1–8. http://dx.doi.org/10.1155/2018/5213249.

Full text
Abstract:
The integrated modular avionics (IMA) system has been widely deployed on newly designed aircraft to replace traditional federated avionics. Hosted in different partitions that are isolated by virtual boundaries, different functions are able to share common resources in the IMA system. The IMA system can dynamically reconfigure the common resources to perform the hosted functions when some modules fail, which makes the system more robust; at the same time, the reliability of the reconfigurable integrated modular avionics becomes more complicated. In this paper, we first model the IMA as a joint (m,k)-failure-tolerant system, taking its reconfigurable capability into consideration. Second, continuous-time Markov chains are introduced to analyze the reliability of the IMA system. Third, we take the comprehensive display function hosted in the IMA system as an example to show the practical use of the proposed reliability analysis model. Through parameter sensitivity analysis, different failure rates λ and priority orders of the modules are chosen to analyze their impact on system reliability, which can provide guidance for improving the reliability of the IMA system during a dynamic reconfiguration process and for optimizing resource allocation.
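The continuous-time Markov chain machinery behind such a reliability model boils down to evaluating state probabilities P(t) = P(0)·exp(Qt) for a generator matrix Q. A toy two-state (working → failed) sketch, with an illustrative failure rate rather than any value from the paper, is:

```python
def expm(Q, t, terms=60):
    """exp(Q*t) by truncated Taylor series -- adequate for small,
    well-scaled generator matrices."""
    n = len(Q)
    A = [[Q[i][j] * t for j in range(n)] for i in range(n)]
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

def reliability(lam, t):
    """Two-state CTMC with failure rate lam and an absorbing failed state:
    R(t) = probability of still being in the working state, i.e. exp(-lam*t)."""
    Q = [[-lam, lam], [0.0, 0.0]]
    return expm(Q, t)[0][0]
```

Reconfigurable systems like the IMA enlarge the state space (one state per surviving configuration), but the transient analysis is the same matrix-exponential computation.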
APA, Harvard, Vancouver, ISO, and other styles
26

Cacuci, Dan G., Iulian Balan, and Mihaela Ionescu-Bujor. "Adjoint Sensitivity Analysis of Dynamic Reliability Models Based on Markov Chains—II: Application to IFMIF Reliability Assessment." Nuclear Science and Engineering 158, no. 2 (February 2008): 114–53. http://dx.doi.org/10.13182/nse08-a2743.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Gnazalé, Gnahoua Guy Roger, Adonis Krou Damien Kouamé, and Valère-Carin Jofack Sokeng. "Prospective Mapping of Schistosomian Risk by Markovian Modelling and Multi-criteria Analysis in Central Côte d'Ivoire." European Journal of Engineering and Technology Research 6, no. 7 (December 20, 2021): 127–32. http://dx.doi.org/10.24018/ejeng.2021.6.7.2637.

Full text
Abstract:
The Bélier region and the autonomous district of Yamoussoukro form an area of central Côte d'Ivoire that records cases of schistosomiasis contamination every year. Although the figures are low, this area is of interest for epidemiological control. Infection with Schistosoma haematobium, or urinary bilharziasis, is the most widespread form and is significant in some areas along the main rivers of the region. Maps of areas at risk of schistosomiasis by 2027, produced by Markov modelling with observable Markov chains and by combining the 2027 sensitivity and vulnerability layers of the infection, show an increase in the area at risk of contamination from 17% of the region's total area in 2017 to 23% in 2027. These areas are mainly located in the departments of Yamoussoukro, Toumodi and Djékanou. 15% of the localities in the region were high-risk areas in 2017, rising to 23% in 2027. Predicting risk areas and localities at high risk of contamination by Markov modelling makes preventive control strategies possible.
APA, Harvard, Vancouver, ISO, and other styles
29

Gu, Zheng, Yue Liu, Aijun Yang, and Kaodui Li. "New Method of Sensitivity Computation Based on Markov Models with Its Application for Risk Management." Journal of Mathematics 2022 (May 9, 2022): 1–13. http://dx.doi.org/10.1155/2022/9510466.

Full text
Abstract:
Sensitivity analysis is at the core of risk management in financial engineering. To compute sensitivities with respect to parameters in models expressed as probability expectations, the most traditional approach applies the finite difference method; later, an integration-by-parts formula was developed for the Brownian environment and applied to sensitivity analysis, with better computational efficiency than finite differences. Establishing a similar integration-by-parts formula for the Markovian environment is the main focus and contribution of this paper. Numerical simulation also shows that the proposed methodology outperforms the traditional finite difference method for sensitivity computation. For empirical studies of sensitivity analysis on an NPV (net present value) model, we show the modelling approach, in particular parameter estimation for Markov chains from data on company loan states. Applying the newly established integration-by-parts formula, numerical simulation estimates the variations caused by the capital return rate and the multiplier of overdue loans. Finally, the managerial implications of these results are discussed with respect to the effectiveness of the modelling and investment risk control.
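To make the baseline concrete, here is a minimal sketch of the traditional finite-difference approach that the paper's integration-by-parts formula competes against. The two-state loan chain, the payoff, and all parameter values are invented for illustration; the one practical detail worth showing is the use of common random numbers (the same seed) on both sides of the central difference, which removes the simulation noise from the derivative estimate:

```python
import random

def npv_expectation(r, p_stay=0.9, horizon=20, n_paths=20000, seed=7):
    """Monte Carlo estimate of an NPV-style expectation over a toy 2-state
    loan chain (0 = performing, 1 = overdue); a cash flow of 1 accrues only
    while performing, discounted at capital return rate r. Not the paper's model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        state, npv, disc = 0, 0.0, 1.0
        for _t in range(horizon):
            disc /= 1.0 + r
            if state == 0:
                npv += disc
            # simple symmetric transition kernel: switch state w.p. 1 - p_stay
            if rng.random() > p_stay:
                state = 1 - state
        total += npv
    return total / n_paths

def fd_sensitivity(r, h=1e-4):
    """Central finite-difference estimate of d E[NPV] / dr; the shared seed
    means both evaluations see identical state paths."""
    return (npv_expectation(r + h) - npv_expectation(r - h)) / (2 * h)

base = npv_expectation(0.05)
slope = fd_sensitivity(0.05)   # negative: a higher discount rate lowers NPV
```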
APA, Harvard, Vancouver, ISO, and other styles
30

Boumi, Shahab, and Adan Ernesto Vela. "Improving Graduation Rate Estimates Using Regularly Updating Multi-Level Absorbing Markov Chains." Education Sciences 10, no. 12 (December 13, 2020): 377. http://dx.doi.org/10.3390/educsci10120377.

Full text
Abstract:
American universities use a procedure based on a rolling six-year graduation rate to calculate statistics regarding their students' final educational outcomes (graduating or not graduating). As an alternative to the six-year graduation rate method, many studies have applied absorbing Markov chains for estimating graduation rates. In both cases, a frequentist approach is used. For the standard six-year graduation rate method, the frequentist approach corresponds to counting the number of students who finished their program within six years and dividing by the number of students who entered that year. In the case of absorbing Markov chains, the frequentist approach is used to compute the underlying transition matrix, which is then used to estimate the graduation rate. In this paper, we apply a sensitivity analysis to compare the performance of the standard six-year graduation rate method with that of absorbing Markov chains. Through the analysis, we highlight significant limitations with regard to the estimation accuracy of both approaches when applied to small sample sizes or cohorts at a university. Additionally, we note that the absorbing Markov chain method introduces a significant bias, which leads to an underestimation of the true graduation rate. To overcome both of these challenges, we propose and evaluate the use of a regularly updating multi-level absorbing Markov chain (RUML-AMC) in which the transition matrix is updated year to year. We empirically demonstrate that the proposed RUML-AMC approach nearly eliminates estimation bias while reducing the estimation variation by more than 40%, especially for populations with small sample sizes.
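The absorbing-Markov-chain estimator the paper analyzes reduces to a fundamental-matrix computation, B = (I − Q)⁻¹R, giving the probability of each eventual outcome from each transient state. The sketch below implements it without external libraries; the year-to-year transition probabilities are invented placeholders, not estimates from any real cohort:

```python
def absorption_probs(Q, R):
    """Absorption probabilities B = (I - Q)^{-1} R for an absorbing Markov
    chain with transient kernel Q and transient-to-absorbing kernel R.
    Solves (I - Q) B = R by Gauss-Jordan elimination with partial pivoting."""
    n, m = len(Q), len(R[0])
    # Augmented system [I - Q | R]
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + list(R[i])
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r_: abs(A[r_][col]))
        A[col], A[piv] = A[piv], A[col]
        for r_ in range(n):
            if r_ != col:
                f = A[r_][col] / A[col][col]
                for c in range(col, n + m):
                    A[r_][c] -= f * A[col][c]
    return [[A[i][n + j] / A[i][i] for j in range(m)] for i in range(n)]

# Hypothetical kernel: transient states 0..3 = years 1..4;
# absorbing columns = [graduate, drop out]. Numbers are illustrative only.
Q = [[0.10, 0.80, 0.00, 0.00],
     [0.00, 0.10, 0.82, 0.00],
     [0.00, 0.00, 0.08, 0.85],
     [0.00, 0.00, 0.00, 0.12]]
R = [[0.00, 0.10],
     [0.00, 0.08],
     [0.00, 0.07],
     [0.80, 0.08]]
B = absorption_probs(Q, R)
grad_rate = B[0][0]   # probability a first-year student eventually graduates
```

Updating Q and R year to year, as in the proposed RUML-AMC, amounts to recomputing B from refreshed transition counts.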
APA, Harvard, Vancouver, ISO, and other styles
31

Wang, Xiao, Deyi Xu, Na Qu, Tianqi Liu, Fang Qu, and Guowei Zhang. "Predictive Maintenance and Sensitivity Analysis for Equipment with Multiple Quality States." Mathematical Problems in Engineering 2021 (May 10, 2021): 1–10. http://dx.doi.org/10.1155/2021/4914372.

Full text
Abstract:
This paper discusses the predictive maintenance (PM) problem for a single-equipment system. It is assumed that the equipment's quality state deteriorates as it operates, resulting in multiple yield levels represented as observed system states. We cast the equipment deterioration as a discrete-state, continuous-time semi-Markov decision process (SMDP) and solve the SMDP problem in a reinforcement learning (RL) framework using a strategy-based method. The goal is to maximize the system average reward rate (SARR) and generate the optimal maintenance strategy for the given observation states. Further, the PM time can be produced by a simulation method. To demonstrate the advantage of the proposed method, we introduce the standard sequential preventive maintenance algorithm with unequal time intervals. The two methods are compared with SARR as the test objective, and the results show that the proposed method outperforms the sequential preventive maintenance algorithm. Finally, a sensitivity analysis of some parameters on the PM time is given.
APA, Harvard, Vancouver, ISO, and other styles
32

Polimenakos, L. C. "Shallow seismicity in the area of Greece: its character as seen by means of a stochastic model." Nonlinear Processes in Geophysics 2, no. 3/4 (December 31, 1995): 136–46. http://dx.doi.org/10.5194/npg-2-136-1995.

Full text
Abstract:
Occurrence of successive earthquake events in space is analysed by means of semi-stochastic processes. The analysis employs earthquake events with M > 5.2 from the area of Greece and its surroundings (18-31° E, 34-43° N) for the time interval 1911-1985. The sequence of earthquake occurrences can be only marginally described by a first-order Markov chain model. Substitutability analysis incorporates the results of the Markov chains, revealing detailed interrelations between parts (subareas) of the study area that are not appreciated in Markov chain analysis. Reactivation of particular subareas provides insight into the level of interaction between neighbouring seismogenic sources within a subarea. The earthquake occurrence pattern provides evidence for significant stress diffusion through time, in the sense of a stress front. Taking into account the limitations of the methodologies applied, the results indicate the importance of large-scale monitoring of seismicity, which assists in identifying particular characteristics of earthquake occurrence in space and time.
APA, Harvard, Vancouver, ISO, and other styles
33

Pepiciello, Antonio, Alfredo Vaccaro, and Loi Lei Lai. "An Interval Mathematic-Based Methodology for Reliable Resilience Analysis of Power Systems in the Presence of Data Uncertainties." Energies 13, no. 24 (December 16, 2020): 6632. http://dx.doi.org/10.3390/en13246632.

Full text
Abstract:
Prevention and mitigation of low-probability, high-impact events is becoming a priority for power system operators, as natural disasters are hitting critical infrastructures with increased frequency all over the world. Protecting power networks against these events means improving their resilience in the planning, operation, and restoration phases. This paper introduces a framework based on time-varying interval Markov chains to assess a system's resilience to catastrophic events. After recognizing the difficulty of accurately defining transition probabilities in the presence of data uncertainty, this paper proposes a novel approach based on interval mathematics, which allows the elements of the transition matrices to be represented by intervals and reliable enclosures of the transient state probabilities to be computed. The proposed framework is validated on a case study based on the resilience analysis of a power system in the presence of multiple simultaneous faults. The results show how the proposed framework successfully encloses all the possible outcomes obtained through Monte Carlo simulation. The main advantages are the low computational burden and high scalability achieved.
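The core idea of enclosing transient probabilities under interval-valued transition matrices can be illustrated with naive interval arithmetic. This sketch is a deliberately simplified stand-in for the paper's method: the two-state resilience model and its bounds are invented, and the enclosure it produces is conservative rather than tight (the tightest enclosure requires optimizing over all admissible stochastic matrices at each step):

```python
def interval_step(p_lo, p_hi, T_lo, T_hi):
    """One step of interval probability propagation p' = p T where each
    transition probability is only known to lie in [T_lo, T_hi].
    Naive interval arithmetic gives a guaranteed but conservative enclosure."""
    n = len(p_lo)
    lo = [max(0.0, min(1.0, sum(p_lo[i] * T_lo[i][j] for i in range(n))))
          for j in range(n)]
    hi = [max(0.0, min(1.0, sum(p_hi[i] * T_hi[i][j] for i in range(n))))
          for j in range(n)]
    return lo, hi

# Illustrative 2-state model: 0 = operational, 1 = outage.
T_lo = [[0.90, 0.02], [0.30, 0.55]]
T_hi = [[0.98, 0.10], [0.45, 0.70]]
p_lo, p_hi = [1.0, 0.0], [1.0, 0.0]
for _ in range(10):
    p_lo, p_hi = interval_step(p_lo, p_hi, T_lo, T_hi)

# Cross-check: one admissible point matrix (the interval midpoints, which
# here happen to be row-stochastic) must stay inside the enclosure.
T_mid = [[0.94, 0.06], [0.375, 0.625]]
q = [1.0, 0.0]
for _ in range(10):
    q = [q[0] * T_mid[0][j] + q[1] * T_mid[1][j] for j in range(2)]
```

Any Monte Carlo draw of a valid transition matrix from inside the bounds stays within [p_lo, p_hi] in the same way, which is the enclosure property the paper validates.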
APA, Harvard, Vancouver, ISO, and other styles
34

Li, Ningyuan, Wei-Chau Xie, and Ralph Haas. "Reliability-Based Processing of Markov Chains for Modeling Pavement Network Deterioration." Transportation Research Record: Journal of the Transportation Research Board 1524, no. 1 (January 1996): 203–13. http://dx.doi.org/10.1177/0361198196152400124.

Full text
Abstract:
Accurate prediction of pavement deterioration is the most important factor in determining pavement repair years and in the optimization programming of highway network maintenance. The Nonhomogeneous Markov Probabilistic Modeling Program, developed to determine pavement deterioration rates in different stages, is described. In this program the transition probability matrices (TPMs) are treated as a time-related transition process. Each element of the TPMs is determined on the basis of a reliability analysis and a Monte Carlo simulation technique. This avoids the existing conventional methods, which involve averaging the subjective opinions of pavement engineers or observing a large amount of multiyear pavement performance data and conducting numerous statistical calculations. As a result, a series of TPMs for an individual pavement section at different stages can be determined by running the program. Furthermore, the pavement condition state, in terms of a probability vector, is calculated at each stage (year). In applying the models, both the predicted actual traffic (in terms of equivalent single axle loads) at each stage and the maximum traffic that the pavement can withstand in each defined pavement condition state interval are considered random variables. In addition, the sensitivities of pavement deterioration rates to pavement design parameters, such as traffic growth rate, subgrade strength, and material properties, are studied. Finally, an example of calculating the TPMs for a pavement section located in southeastern Ontario, Canada, is demonstrated. It shows that the sensitivities of the TPMs to traffic growth rate, subgrade deflection, and pavement thickness are significant.
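One way to read the reliability-based construction of a TPM element is as a stress-strength Monte Carlo estimate: the probability that applied traffic exceeds what the pavement can withstand in its current condition state. The sketch below is a guess at that building block under made-up Gaussian assumptions, not the program described in the paper:

```python
import random

def transition_prob_mc(mean_load, sd_load, mean_cap, sd_cap, n=50000, seed=1):
    """Monte Carlo estimate of one TPM element as a reliability problem:
    the probability that applied traffic (ESALs) exceeds the traffic the
    pavement can withstand in its current condition state. The Gaussian
    distributions and all parameter values are illustrative only."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.gauss(mean_load, sd_load) > rng.gauss(mean_cap, sd_cap))
    return hits / n

# Hypothetical inputs: predicted ESALs vs. withstand capacity for one state.
p01 = transition_prob_mc(1.0e6, 1.0e5, 1.2e6, 1.0e5)
```

For these Gaussian inputs the exact answer is Φ((μ_load − μ_cap)/√(σ²_load + σ²_cap)) ≈ 0.0786, so the Monte Carlo estimate can be checked against it; a time-varying series of such elements would then form the nonhomogeneous TPMs.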
APA, Harvard, Vancouver, ISO, and other styles
35

Feng, Xingxing, Haihua Sun, Tianqi Lv, and Yunqing Zhang. "Kinematic analysis of a PPPR spatial serial mechanism with geometric errors." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 234, no. 1 (November 4, 2018): 225–40. http://dx.doi.org/10.1177/0954406218809124.

Full text
Abstract:
The present study focuses on the kinematic analysis of a PPPR spatial serial mechanism with a large number of geometric errors. The study is implemented in three steps: (1) development of a map between the end-effector position error and geometric source errors within the serial mechanism kinematic chains using homogeneous transformation matrix; (2) selection of geometric errors which have significant effects on end-effector positioning accuracy by sensitivity analysis; (3) kinematic analysis of the serial mechanism within which the geometric errors are modelled as interval variables. The computational algorithms are presented for positioning accuracy analysis and workspace analysis in consideration of geometric errors. The analysis results show that the key factors which have significant effects on end-effector position error can be identified efficiently, and the uncertain workspace can also be calculated efficiently.
APA, Harvard, Vancouver, ISO, and other styles
36

Karatzoglou, Antonios, Dominik Köhler, and Michael Beigl. "Semantic-Enhanced Multi-Dimensional Markov Chains on Semantic Trajectories for Predicting Future Locations." Sensors 18, no. 10 (October 22, 2018): 3582. http://dx.doi.org/10.3390/s18103582.

Full text
Abstract:
In this work, we investigate the performance of Markov chains with respect to modelling semantic trajectories and predicting future locations. In the first part, we examine whether, and to what degree, the semantic level of semantic trajectories affects the predictive performance of a spatial Markov model. It can be shown that the choice of semantic level when describing trajectories has a significant impact on the accuracy of the models: high-level descriptions lead to better results than low-level ones. The second part introduces a multi-dimensional Markov chain construct that considers, besides locations, additional context information such as time, day, and the user's activity. While this approach is able to outperform our baseline, we also identified some limitations, mainly attributable to its sensitivity to small training datasets. We attempt to overcome this issue, among others, by adding a semantic similarity analysis component to our model that explicitly takes into consideration the varying role of a location depending on the purpose of each visit. To capture these dynamics, we define an entity that we refer to as a Purpose-of-Visit-Dependent Frame (PoVDF). In the third part of this work, we describe the PoVDF-based approach in detail and evaluate it against the multi-dimensional Markov chain model as well as against a model based on semantic trajectory mining and prefix trees. Our evaluation shows that the PoVDF-based approach outperforms its competition and lays a solid foundation for further investigation.
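The spatial baseline in this line of work, before any extra dimensions or PoVDF machinery are added, is just a transition-count model over semantic labels. A minimal sketch with invented trajectories (the high-level labels echo the paper's finding that high-level descriptions predict better):

```python
from collections import Counter, defaultdict

def train_markov(trajectories):
    """First-order Markov model over semantic locations: count observed
    transitions and normalize per origin location."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(ctr.values()) for b, c in ctr.items()}
            for a, ctr in counts.items()}

def predict_next(model, loc):
    """Most probable successor of `loc`, or None if `loc` was never seen."""
    dist = model.get(loc)
    return max(dist, key=dist.get) if dist else None

# Invented example trajectories at a high semantic level
trajs = [["home", "work", "restaurant", "work", "home"],
         ["home", "gym", "home"],
         ["home", "work", "home"]]
model = train_markov(trajs)
```

A multi-dimensional variant, as in the paper's second part, would condition the counts on (location, time-of-day, activity) tuples instead of the location alone, which is exactly where the small-dataset sensitivity the authors report comes from.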
APA, Harvard, Vancouver, ISO, and other styles
37

Ma, Hanqing, Chunfeng Ma, Xin Li, Wenping Yuan, Zhengjia Liu, and Gaofeng Zhu. "Sensitivity and Uncertainty Analyses of Flux-based Ecosystem Model towards Improvement of Forest GPP Simulation." Sustainability 12, no. 7 (March 25, 2020): 2584. http://dx.doi.org/10.3390/su12072584.

Full text
Abstract:
An ecosystem model serves as an important tool for understanding the carbon cycle in the forest ecosystem. However, the sensitivities of parameters and the uncertainties of the model outputs are not clearly understood. Parameter sensitivity analysis (SA) and uncertainty analysis (UA) play a crucial role in improving forest gross primary productivity (GPP) simulation. This study presents a global SA based on the extended Fourier amplitude sensitivity test (EFAST) method to quantify the sensitivities of 16 parameters in the Flux-based ecosystem model (FBEM). To systematically evaluate the parameters' sensitivities, various parameter ranges, different model outputs, and temporal variations of the parameter sensitivity index (SI) were comprehensively explored in three experiments. Based on the numerical SA experiments, the UA experiments were designed and performed for parameter estimation using a Markov chain Monte Carlo (MCMC) method. The ratio of internal CO2 to air CO2 (fCi), the canopy quantum efficiency of photon conversion (αq), and the maximum carboxylation rate at 25 °C (Vm25) were the most sensitive parameters for GPP. It was also found that αq, EVm, and Q10 were influenced by temperature throughout the entire growth stage. The result of parameter estimation using only the four sensitive parameters (RMSE = 1.657) is very close to that using all the parameters (RMSE = 1.496). The SA results suggest that sensitive parameters such as fCi, αq, EVm, and Vm25 strongly influence the forest GPP simulation, and that the temporal characteristics of the parameters' SI on GPP and NEE change across growth stages. The sensitive parameters were a major source of uncertainty, and parameter estimation based on the parameter SA can lead to desirable results without introducing too much uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
38

Smith, Michelle L. Depoy, and William S. Griffith. "Scan Start-Up Demonstration Test." International Journal of Reliability, Quality and Safety Engineering 22, no. 03 (June 2015): 1550014. http://dx.doi.org/10.1142/s021853931550014x.

Full text
Abstract:
The CSTF and TSTF binary start-up demonstration tests have been studied in the literature. Both tests have been shown to be optimal in various situations, but neither is an overall best test. We therefore propose a new demonstration test that is a compromise between the CSTF and TSTF start-up demonstration tests: the practitioner accepts the equipment if a cluster of successes occurs before a preset number of failures, and rejects it otherwise. Markov chains are used in the probabilistic analysis, which is extended to the non-i.i.d. case. Point and interval estimation are studied.
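The acceptance probability of such a test, a run of k consecutive successes observed before the d-th failure, follows directly from the underlying Markov chain on the state (current run length, failures so far). A small sketch for the i.i.d. case with success probability p (the paper's non-i.i.d. extension is not covered here):

```python
from functools import lru_cache

def accept_prob(p, k, d):
    """Probability that a run of k consecutive successes occurs before the
    d-th failure, for i.i.d. start-up trials with success probability p.
    Computed by recursion on the Markov chain state (run length, failures):
    a success extends the run, a failure resets it and counts toward d."""
    @lru_cache(maxsize=None)
    def f(run, fails):
        if run == k:
            return 1.0          # cluster of successes achieved: accept
        if fails == d:
            return 0.0          # failure budget exhausted: reject
        return p * f(run + 1, fails) + (1 - p) * f(0, fails + 1)
    return f(0, 0)
```

With d = 1 the test collapses to requiring k straight successes (acceptance probability p^k), and letting d grow drives acceptance toward 1, which brackets the compromise between the two classical tests.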
APA, Harvard, Vancouver, ISO, and other styles
39

EL KHARBOUTLY, REHAB A., SWAPNA S. GOKHALE, and REDA A. AMMAR. "ARCHITECTURE-BASED SOFTWARE RELIABILITY ANALYSIS INCORPORATING CONCURRENCY." International Journal of Reliability, Quality and Safety Engineering 14, no. 05 (October 2007): 479–99. http://dx.doi.org/10.1142/s0218539307002751.

Full text
Abstract:
With the growing complexity of software applications and increasing reliance on the services provided by these applications, architecture-based reliability analysis has become the focus of several recent research efforts. Most of the prevalent research in this area does not consider simultaneous or concurrent execution of application components. Concurrency, however, may be common in modern software applications. Thus, reliability analysis considering concurrent component execution within the context of the application architecture is necessary for contemporary software applications. This paper presents an architecture-based reliability analysis methodology for concurrent software applications. Central to the methodology is a state space approach, based on discrete time Markov chains (DTMCs), to represent the application architecture taking into consideration simultaneous component execution. A closed form, analytical expression for the expected application reliability based on the average execution times, constant failure rates, and the average number of visits to the components is derived. The average number of visits to application components are obtained from the solution of the DTMC model representing the application architecture. The potential of the methodology to facilitate sensitivity analysis, identification of reliability bottlenecks, and an assessment of the impact of workload and component changes, in addition to providing a reliability estimate, is discussed. To enable the application of the methodology in practice, estimation of model parameters from different software artifacts is described. The methodology is illustrated with a case study. Finally, strategies to alleviate the state space explosion issue for an efficient application of the methodology are proposed.
APA, Harvard, Vancouver, ISO, and other styles
40

Colebank, Mitchel J., and Naomi C. Chesler. "An in-silico analysis of experimental designs to study ventricular function: A focus on the right ventricle." PLOS Computational Biology 18, no. 9 (September 20, 2022): e1010017. http://dx.doi.org/10.1371/journal.pcbi.1010017.

Full text
Abstract:
In-vivo studies of pulmonary vascular disease and pulmonary hypertension (PH) have provided key insight into the progression of right ventricular (RV) dysfunction. Additional in-silico experiments using multiscale computational models have provided further details into biventricular mechanics and hemodynamic function in the presence of PH, yet few have assessed whether model parameters are practically identifiable prior to data collection. Moreover, none have used modeling to devise synergistic experimental designs. To address this knowledge gap, we conduct a practical identifiability analysis of a multiscale cardiovascular model across four simulated experimental designs. We determine a set of parameters using a combination of Morris screening and local sensitivity analysis, and test for practical identifiability using profile likelihood-based confidence intervals. We employ Markov chain Monte Carlo (MCMC) techniques to quantify parameter and model forecast uncertainty in the presence of noise corrupted data. Our results show that model calibration to only RV pressure suffers from practical identifiability issues and suffers from large forecast uncertainty in output space. In contrast, parameter and model forecast uncertainty is substantially reduced once additional left ventricular (LV) pressure and volume data is included. A comparison between single point systolic and diastolic LV data and continuous, time-dependent LV pressure-volume data reveals that any information from the LV substantially reduces parameter and forecast uncertainty, encouraging at least some quantitative data from both ventricles for future experimental studies.
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Chaochen, Hiroshi Yatsuya, Yingsong Lin, Tae Sasakabe, Sayo Kawai, Shogo Kikuchi, Hiroyasu Iso, and Akiko Tamakoshi. "Milk Intake and Stroke Mortality in the Japan Collaborative Cohort Study—A Bayesian Survival Analysis." Nutrients 12, no. 9 (September 9, 2020): 2743. http://dx.doi.org/10.3390/nu12092743.

Full text
Abstract:
The aim of this study was to further examine the relationship between milk intake and stroke mortality in the Japanese population. We used data from the Japan Collaborative Cohort (JACC) Study (total number of participants = 110,585, age range: 40-79) to estimate the posterior acceleration factors (AF) as well as the hazard ratios (HR) comparing individuals with different milk intake frequencies against those who never consumed milk at the study baseline. These estimates were computed through a series of Bayesian survival models that employed a Markov chain Monte Carlo simulation process. In total, 100,000 posterior samples were generated separately through four independent chains after model convergence was confirmed. The posterior probabilities that daily milk consumers had a lower hazard of, or delayed, mortality from stroke compared to non-consumers were 99.0% for men and 78.0% for women. Accordingly, the estimated posterior means of AF and HR for daily milk consumers were 0.88 (95% credible interval, CrI: 0.81, 0.96) and 0.80 (95% CrI: 0.69, 0.93) for men, and 0.97 (95% CrI: 0.88, 1.10) and 0.95 (95% CrI: 0.80, 1.17) for women. In conclusion, data from the JACC study provided strong evidence that daily milk intake among Japanese men was associated with a delayed and lower risk of mortality from stroke, especially cerebral infarction.
APA, Harvard, Vancouver, ISO, and other styles
42

Miranda, Darién, Bert Arnrich, and Jesús Favela. "Detecting Anxiety States when Caring for People with Dementia." Methods of Information in Medicine 56, no. 01 (2017): 55–62. http://dx.doi.org/10.3414/me15-02-0012.

Full text
Abstract:
Summary. Background: Caregiving is a complex, stressful activity, which frequently leads to anxiety and the development of depressive disorders. Recent advances in wearable sensing allow relevant physiological data of the caregiver to be monitored in order to detect anxiety spans and to enact coping strategies that reduce anxiety when needed. Objectives: This work proposes a method to infer the anxiety states of caregivers caring for people with dementia, using physiological data. Methods: A model using Markov chains for detecting internal anxiety states is proposed. The model is tested on a physiological dataset gathered in a naturalistic enactment experiment with 10 participants. A visual analysis for observing anxiety states is employed. The Markov chain model is evaluated using inter-beat interval (IBI) data to detect four internal states: "Relaxed", "Arousing", "Anxiety", and "Relaxing". Results: From the visual inspection of inter-beat interval data, self-reports, and observation labels, a total of 823 state segments were identified, comprising 137 "relaxed", 91 "arousing", 410 "anxious", and 185 "relaxing" states. Using the average IBI value of 60-second segments as the classification feature, the model was evaluated with leave-one-out cross validation, achieving an average accuracy of 73.03%. Conclusions: We proposed a Markov chain model for detecting the internal anxiety states of caregivers who care for people with dementia. The model was evaluated in a naturalistic enactment experiment with 10 participants. The resulting accuracy is comparable to previous results on stress classification.
APA, Harvard, Vancouver, ISO, and other styles
43

Sedghizadeh, Mohammadamin, and Robert Shcherbakov. "The Analysis of the Aftershock Sequences of the Recent Mainshocks in Alaska." Applied Sciences 12, no. 4 (February 10, 2022): 1809. http://dx.doi.org/10.3390/app12041809.

Full text
Abstract:
The forecasting of the evolution of natural hazards is an important and critical problem in natural sciences and engineering. Earthquake forecasting is one such example and is a difficult task due to the complexity of the occurrence of earthquakes. Since earthquake forecasting is typically based on the seismic history of a given region, the analysis of the past seismicity plays a critical role in modern statistical seismology. In this respect, the recent three significant mainshocks that occurred in Alaska (the 2002, Mw 7.9 Denali; the 2018, Mw 7.9 Kodiak; and the 2018, Mw 7.1 Anchorage earthquakes) presented an opportunity to analyze these sequences in detail. This included the modelling of the frequency-magnitude statistics of the corresponding aftershock sequences. In addition, the aftershock occurrence rates were modelled using the Omori–Utsu (OU) law and the Epidemic Type Aftershock Sequence (ETAS) model. For each sequence, the calculation of the probability to have the largest expected aftershock during a given forecasting time interval was performed using both the extreme value theory and the Bayesian predictive framework. For the Bayesian approach, the Markov Chain Monte Carlo (MCMC) sampling of the posterior distribution was performed to generate the chains of the model parameters. These MCMC chains were used to simulate the models forward in time to compute the predictive distributions. The calculation of the probabilities to have the largest expected aftershock to be above a certain magnitude after a mainshock using the Bayesian predictive framework fully takes into account the uncertainties of the model parameters. Moreover, in order to investigate the credibility of the obtained forecasts, several statistical tests were conducted to compare the performance of the earthquake rate models based on the OU formula and the ETAS model. The results indicate that the Bayesian approach combined with the ETAS model produced more robust results than the standard approach based on the extreme value distribution and the OU law.
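The Bayesian machinery this abstract leans on can be miniaturized as follows. Instead of the Omori-Utsu or ETAS likelihoods, this sketch samples a single exponential rate for inter-event times under a Gamma(1, 1) prior, with all data and settings invented, to show the random-walk Metropolis step and the kind of posterior summary that would feed the forward predictive simulations:

```python
import math
import random

def metropolis(log_post, x0, n_steps=20000, step=0.3, seed=42):
    """Random-walk Metropolis sampler for a one-dimensional log posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        lq = log_post(y)
        # accept with probability min(1, exp(lq - lp))
        if rng.random() < math.exp(min(0.0, lq - lp)):
            x, lp = y, lq
        chain.append(x)
    return chain

# Invented inter-event times (days); with a Gamma(1, 1) prior on the rate lam,
# the posterior is Gamma(n + 1, sum(data) + 1) in closed form, so the sampler
# can be checked against the exact posterior mean (n + 1) / (sum(data) + 1).
data = [0.5, 1.2, 0.3, 0.8, 0.4, 0.9, 0.6]

def log_post(lam):
    if lam <= 0.0:
        return -math.inf
    return len(data) * math.log(lam) - lam * (sum(data) + 1.0)

chain = metropolis(log_post, 1.0)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])   # burn-in discarded
```

Forward predictive simulation, as in the paper, would then repeatedly draw a rate from the retained chain and simulate event sequences under it, so that parameter uncertainty flows into the forecast.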
APA, Harvard, Vancouver, ISO, and other styles
44

Iannazzo, Sergio. "The health-economic models: practical aspects and management of uncertainty." Farmeconomia. Health economics and therapeutic pathways 7, no. 4 (January 15, 2006): 239–45. http://dx.doi.org/10.7175/fe.v7i4.258.

Full text
Abstract:
Analytic models are a powerful instrument for developing pharmacoeconomic analyses, and their importance is growing as they are increasingly used to predict the consequences of a particular intervention. The most commonly used techniques can be grouped into three families: decision trees, Markov chains, and probabilistic simulation models. Only the last take a wide range of uncertainties into account and are capable of making probabilistic predictions. Discrete-state, discrete-time Markov models are the most widely used technique, but they have limits due to a structural rigidity that can make appropriate representation of clinical reality difficult. First-order simulation of Markov models produces deterministic results and can be conveniently implemented in a matrix-algebra framework. Deterministic results are not sufficient for making decisions based on model predictions, and the need to handle uncertainty in its various forms is widely recognized. This task can be accomplished with traditional (deterministic) and/or probabilistic sensitivity analysis. Both analyses provide complementary information on how parameter and assumption uncertainty propagates through the model, and both are recommended by the ISPOR (International Society for Pharmacoeconomics and Outcomes Research) modelling guidelines.
APA, Harvard, Vancouver, ISO, and other styles
45

Fang, C. Y., and Yao Ting Zhang. "Model Updating of Full Prestressed Concrete Beam Based on Bayesian Theory." Advanced Materials Research 304 (July 2011): 107–14. http://dx.doi.org/10.4028/www.scientific.net/amr.304.107.

Full text
Abstract:
Model updating based on Bayesian theory for predicting the natural frequencies of a fully prestressed concrete beam has been developed. The Morris screening method is employed to study the sensitivity of the model parameters, and the elastic modulus and density of the concrete are selected for updating. Cooperative operation between the finite element analysis program and the Markov chain generating program is realized, and multiple chains with different starting points are designed to obtain the posterior distribution of the model parameters. It is found that the standard deviations of the posterior estimates decrease compared with those of the prior distributions. In addition, different starting points are selected to discuss their influence on the posterior estimates of the model parameters. With respect to reducing the uncertainty of the posterior estimates, the difference of each natural frequency used as a witness resource is compared.
APA, Harvard, Vancouver, ISO, and other styles
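The multi-chain posterior sampling described above can be sketched with a random-walk Metropolis algorithm. The normal prior/likelihood setup and all numeric values below are illustrative stand-ins for the finite-element coupling; the posterior standard deviation ending up far below the prior's mirrors the abstract's qualitative finding.

```python
import math, random

# Sketch: random-walk Metropolis sampling of one model parameter, run as
# multiple chains from different starting points. Prior N(0, 10^2),
# likelihood N(theta, 1) around synthetic observations -- all assumed.

random.seed(1)
data = [2.1, 1.9, 2.3, 2.0, 2.2]          # synthetic observations
PRIOR_SD = 10.0

def log_post(theta):
    lp = -0.5 * (theta / PRIOR_SD) ** 2                 # N(0, 10^2) prior
    lp += sum(-0.5 * (y - theta) ** 2 for y in data)    # N(theta, 1) likelihood
    return lp

def chain(start, n=5000, step=0.5):
    theta, out = start, []
    for _ in range(n):
        prop = theta + random.gauss(0.0, step)
        if math.log(random.random()) < log_post(prop) - log_post(theta):
            theta = prop                                 # accept the proposal
        out.append(theta)
    return out[n // 2:]                                  # discard burn-in

samples = chain(-5.0) + chain(5.0)                       # two starting points
mean = sum(samples) / len(samples)
sd = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(mean, sd)   # posterior sd is far below the prior sd of 10
```

Comparing chains started from different points (as the abstract does) is also the basis of standard convergence diagnostics such as the Gelman-Rubin statistic.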
46

Haj Ahmad, Hanan, Ehab M. Almetwally, Ahmed Rabaiah, and Dina A. Ramadan. "Statistical Analysis of Alpha Power Inverse Weibull Distribution under Hybrid Censored Scheme with Applications to Ball Bearings Technology and Biomedical Data." Symmetry 15, no. 1 (January 5, 2023): 161. http://dx.doi.org/10.3390/sym15010161.

Full text
Abstract:
Applications of medical technology make a massive contribution to the treatment of patients. One attractive tool is the ball bearing: the balls support the load of the application and minimize friction between surfaces. If a heavy load is applied to a ball bearing, the balls may be damaged, causing the bearing to fail early. Hence, we aim to model the failure times of ball bearings. A hybrid Type-II censoring scheme is recommended to minimize experimental time and cost, where the components follow the alpha power inverse Weibull distribution. Ball bearing lifetimes are one example; the other is the resistance of guinea pigs exposed to doses of virulent tubercle bacilli. We use different estimation methods to obtain point and interval estimates of the unknown parameters of the distribution and, consequently, of statistical functions such as the hazard rate and survival functions. The maximum likelihood and maximum product of spacings methods are used, in addition to Bayesian estimation with both symmetric and asymmetric loss functions. Interval estimators for the unknown parameters are obtained using three different approaches: approximate, credible, and bootstrap confidence intervals. The performance of the parameter estimators is assessed via simulation analysis and numerical methods such as Newton-Raphson and Markov chain Monte Carlo. Finally, the results support the suitability of the alpha power inverse Weibull distribution under a hybrid Type-II censoring scheme for modeling real biomedical data.
APA, Harvard, Vancouver, ISO, and other styles
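The abstract's distribution and its Newton-Raphson step can be sketched as follows. The parameterization (baseline inverse Weibull CDF F(x) = exp(-lam * x^(-beta)) with the alpha power transform applied on top) and all numeric values are assumptions for illustration, not the paper's fitted values; for simplicity the Newton-Raphson step is shown for one parameter of the baseline model with the other fixed.

```python
import math

# Sketch: alpha power transform of an inverse Weibull CDF, plus a
# Newton-Raphson maximum likelihood step for the scale parameter of the
# baseline inverse Weibull with the shape fixed. Values are illustrative.

def cdf_apiw(x, alpha, lam, beta):
    f = math.exp(-lam * x ** (-beta))          # baseline inverse Weibull CDF
    if alpha == 1.0:
        return f
    return (alpha ** f - 1.0) / (alpha - 1.0)  # alpha power transform

def mle_lambda(xs, beta, lam0=1.0, iters=25):
    # Score of the baseline inverse Weibull log-likelihood in lam:
    #   d/d(lam) loglik = n/lam - sum(x**-beta),
    # so the closed-form MLE is n / sum(x**-beta); Newton-Raphson recovers it.
    s = sum(x ** (-beta) for x in xs)
    lam = lam0
    for _ in range(iters):
        score = len(xs) / lam - s
        hess = -len(xs) / lam ** 2
        lam -= score / hess                    # Newton-Raphson update
    return lam

xs = [0.8, 1.1, 1.4, 0.9, 2.0, 1.3]            # synthetic failure times
lam_hat = mle_lambda(xs, beta=2.0)
print(lam_hat, len(xs) / sum(x ** -2.0 for x in xs))   # NR matches closed form
```

The full problem in the paper is harder: hybrid Type-II censoring changes the likelihood, and all three parameters are estimated jointly.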
47

Zhao, Bo Ya, Song Yang, Zhe Zhang, and Ri Sheng Sun. "Control Improvement of the Reactor Protection System." Applied Mechanics and Materials 71-78 (July 2011): 4199–202. http://dx.doi.org/10.4028/www.scientific.net/amm.71-78.4199.

Full text
Abstract:
In this paper an optimal maintenance policy for the Reactor Protection System (RPS) of a nuclear plant is developed. The RPS consists of continuously operating sub-systems subject to random failures. A block diagram of the RPS is proposed that facilitates analyzing the individual sub-systems separately. The proposed maintenance policy is the age replacement model, which incorporates both corrective and preventive maintenance. A Markov model is used to optimize the preventive maintenance interval of those sub-systems whose failure and repair times are exponentially distributed. Finally, a sensitivity analysis is performed and recommendations for maintaining the required RPS availability are given.
APA, Harvard, Vancouver, ISO, and other styles
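The age replacement policy named above is usually optimized by minimizing the long-run cost rate c(T) = (Cp*R(T) + Cf*(1-R(T))) / (integral of R(t) over [0, T]), where T is the preventive replacement age, R the reliability function, and Cp, Cf the preventive and corrective replacement costs. The sketch below uses an assumed Weibull lifetime with shape > 1 (wear-out) and illustrative costs; note that with purely exponential lifetimes, as in the abstract's Markov sub-models, preventive replacement brings no benefit and availability is optimized differently.

```python
import math

# Sketch of the classic age replacement optimization. Lifetime distribution
# and cost values are illustrative assumptions, not the paper's data.

SHAPE, SCALE = 2.5, 100.0        # Weibull lifetime parameters (assumed)
CP, CF = 1.0, 10.0               # preventive vs corrective replacement cost

def reliability(t):
    return math.exp(-((t / SCALE) ** SHAPE))

def cost_rate(T, steps=2000):
    # Trapezoidal integral of R(t) over [0, T] = expected cycle length.
    h = T / steps
    area = sum(0.5 * h * (reliability(i * h) + reliability((i + 1) * h))
               for i in range(steps))
    return (CP * reliability(T) + CF * (1.0 - reliability(T))) / area

# Grid search for the preventive maintenance interval minimizing cost rate.
best_T = min(range(10, 301, 5), key=cost_rate)
print(best_T, cost_rate(best_T))
```

The same grid-search structure carries over to the availability-based objective the paper uses, with the cost rate replaced by unavailability.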
48

Kidando, Emmanuel, Ren Moses, Thobias Sando, and Eren E. Ozguven. "Evaluating Recurring Traffic Congestion using Change Point Regression and Random Variation Markov Structured Model." Transportation Research Record: Journal of the Transportation Research Board 2672, no. 20 (July 24, 2018): 63–74. http://dx.doi.org/10.1177/0361198118787987.

Full text
Abstract:
This study develops a probabilistic framework that evaluates the dynamic evolution of recurring traffic congestion (RTC) using random variation Markov structured regression (MSR), an approach that integrates the Markov chain assumption with probit regression. The analysis was performed using traffic data from a section of Interstate 295 in Jacksonville, Florida, aggregated on a 5-minute basis for one year (2015). To estimate the discrete traffic states required by the MSR model, the study established a definition of traffic congestion using Bayesian change point regression (BCR), in which the speed-occupancy relationship was explored. The MSR model, with flow rate as a covariate, was then used to estimate the probability of RTC occurrence. Findings from the BCR model suggest that the morning peak congested state occurs once speed falls below 58 miles per hour (mph), whereas the evening peak congested state occurs below 55 mph. Evaluating the dynamics of traffic states over time, the Bayesian information criterion confirmed the hypothesis that a first-order Markov chain assumption is sufficient to characterize RTC. Moreover, the flow rate in the MSR model was found to be statistically significant in influencing the transition probability between traffic regimes at the 95% posterior credible level. The knowledge of RTC transitions provided by these approaches will facilitate developing effective intervention strategies for mitigating RTC.
APA, Harvard, Vancouver, ISO, and other styles
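The first-order Markov chain layer of the framework reduces, in its simplest form, to estimating transition probabilities between the two traffic regimes by counting observed transitions. The state sequence below is synthetic, not the Interstate 295 data, and the sketch omits the covariate (flow rate) that the MSR model adds.

```python
from collections import Counter

# Sketch: maximum likelihood estimate of a first-order transition matrix
# between two traffic regimes (0 = free flow, 1 = congested) from a
# synthetic sequence of 5-minute states.

states = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0]

counts = Counter(zip(states, states[1:]))           # (from, to) pair counts
P = [[0.0, 0.0], [0.0, 0.0]]
for i in (0, 1):
    row_total = counts[(i, 0)] + counts[(i, 1)]
    for j in (0, 1):
        P[i][j] = counts[(i, j)] / row_total        # MLE: count / row total

for row in P:
    print(row)    # each row sums to 1
```

Testing the first-order assumption, as the abstract does with BIC, amounts to fitting a second-order chain as well and comparing the penalized log-likelihoods.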
49

Hu, Shuifang, Xiaoxue Wu, Mengchao Wei, Yunyan Ling, Meiyan Zhu, Yan Wang, Yong Chen, Meng Jin, and Zhenwei Peng. "Cost-Effectiveness Analysis of Follow-Up Schedule for Hepatocellular Carcinoma after Radiofrequency Ablation." Journal of Oncology 2022 (March 19, 2022): 1–9. http://dx.doi.org/10.1155/2022/3569644.

Full text
Abstract:
Background and Purpose. Follow-up intervals after radiofrequency ablation (RFA) vary across international guidelines. This study aimed to compare the cost-effectiveness of different follow-up intervals for hepatocellular carcinoma (HCC) following RFA. Methods. A Markov model was established to compare surveillance every 2 or 2-3 months (2- to 3-month group) versus every 3 or 3-4 months (3- to 4-month group) in the first two years after RFA. Transition probabilities and utility values were derived from the literature, and costs of follow-up were estimated from our institution. An incremental cost-effectiveness ratio (ICER) below $10,888 per quality-adjusted life-year (QALY) was considered cost-effective. Sensitivity analyses were performed to assess the uncertainty of the model. Results. The 2- to 3-month group gained 1.196 QALYs at a cost of $2,212.66, while the 3- to 4-month group gained 1.029 QALYs at a cost of $1,268.92. The ICER of the 2- to 3-month group versus the 3- to 4-month group was $5,651.14 per QALY gained, below the willingness-to-pay threshold of one times the gross domestic product per capita of China ($10,888/QALY). One-way sensitivity analysis showed that the model was most sensitive to the utility of progression-free survival. The probabilistic sensitivity analysis demonstrated that the 2- to 3-month group had a higher probability of being cost-effective than the 3- to 4-month group when willingness to pay exceeded $1,088.8. Conclusions. Follow-up every 2 or 2-3 months was more cost-effective than every 3 or 3-4 months. Thus, the intensive follow-up schedule in the first two years is recommended for Child-Pugh class A or B HCC patients within the Milan criteria following RFA.
APA, Harvard, Vancouver, ISO, and other styles
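The ICER decision rule in the abstract above is simple arithmetic and can be reproduced directly from the reported figures:

```python
# ICER = (cost_new - cost_old) / (QALY_new - QALY_old), compared against a
# willingness-to-pay (WTP) threshold. Figures are those quoted in the abstract.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

WTP = 10888.0                                    # 1x GDP per capita of China
ratio = icer(2212.66, 1.196, 1268.92, 1.029)     # 2-3 month vs 3-4 month group
print(round(ratio, 2), ratio < WTP)              # cost-effective if below WTP
```

This reproduces the abstract's $5,651.14 per QALY, confirming the reported figures are internally consistent.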
50

Xie, Feng, Nan Luo, Gord Blackhouse, Ron Goeree, and Hin-Peng Lee. "Cost-effectiveness analysis of Helicobacter pylori screening in prevention of gastric cancer in Chinese." International Journal of Technology Assessment in Health Care 24, no. 01 (January 2008): 87–95. http://dx.doi.org/10.1017/s0266462307080117.

Full text
Abstract:
Objectives: The aim of this study was to evaluate the costs and effectiveness associated with no screening, Helicobacter pylori serology screening, and the 13C-urea breath test (UBT) for gastric cancer in the Chinese population. Methods: A Markov model simulation was carried out in Singaporean Chinese from 40 years of age (n = 478,500) from the perspective of public healthcare providers. The main outcome measures were costs, number of gastric cancer cases prevented, life-years saved, quality-adjusted life-years (QALYs) gained from the screening age to death, and incremental cost-effectiveness ratios (ICERs), which were compared among the three strategies. The uncertainty surrounding the ICERs was addressed by scenario analyses and probabilistic sensitivity analysis using Monte Carlo simulation. Results: The ICER of serology screening versus no screening was $25,881 per QALY gained (95 percent confidence interval (95 percent CI), $5,700 to $120,000). The ICER of UBT versus no screening was $53,602 per QALY gained (95 percent CI, $16,000 to $230,000). The ICER of UBT versus serology screening was $470,000 per QALY gained, with almost all random samples of the ICER distributed above $50,000 per QALY. Conclusions: It cannot be confidently concluded that either H. pylori screening strategy is cost-effective compared with no screening in all Chinese at the age of 40 years. Nevertheless, serology screening has much greater potential to be cost-effective, especially in populations with a higher prevalence of gastric cancer.
APA, Harvard, Vancouver, ISO, and other styles
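The probabilistic sensitivity analysis mentioned above (and the statement that "almost all random samples" of an ICER fall above a threshold) rests on Monte Carlo resampling of the model inputs. The sketch below draws incremental costs and QALYs from assumed normal distributions, which are illustrative only, not the study's inputs, and reports the share of simulated ICERs under a willingness-to-pay threshold.

```python
import random

# Sketch of a probabilistic sensitivity analysis: resample incremental cost
# and incremental effectiveness, then count the fraction of draws for which
# the simulated ICER falls below the willingness-to-pay threshold.
# Both input distributions are illustrative assumptions.

random.seed(0)
WTP = 50000.0
N = 10000
below = 0
for _ in range(N):
    d_cost = random.gauss(26000.0, 4000.0)    # incremental cost (assumed)
    d_qaly = random.gauss(1.0, 0.25)          # incremental QALYs (assumed)
    if d_qaly > 0 and d_cost / d_qaly < WTP:
        below += 1
print(below / N)    # probability the strategy is cost-effective at this WTP
```

Repeating this count over a range of WTP values traces out a cost-effectiveness acceptability curve, the standard way such PSA results are reported.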