Journal articles on the topic 'Bayesian statistical decision theory – Applications'

Consult the top 50 journal articles for your research on the topic 'Bayesian statistical decision theory – Applications.'

1. Procaccia, H., R. Cordier, and S. Muller. "Application of Bayesian statistical decision theory for a maintenance optimization problem." Reliability Engineering & System Safety 55, no. 2 (February 1997): 143–49. http://dx.doi.org/10.1016/s0951-8320(96)00006-3.

2. Laedermann, Jean-Pascal, Jean-François Valley, and François O. Bochud. "Measurement of radioactive samples: application of the Bayesian statistical decision theory." Metrologia 42, no. 5 (September 13, 2005): 442–48. http://dx.doi.org/10.1088/0026-1394/42/5/015.

3. Luce, Bryan R., Ya-Chen Tina Shih, and Karl Claxton. "INTRODUCTION." International Journal of Technology Assessment in Health Care 17, no. 1 (January 2001): 1–5. http://dx.doi.org/10.1017/s0266462301104010.

Abstract:
Until the mid-1980s, most economic analyses of healthcare technologies were based on decision theory and used decision-analytic models. The goal was to synthesize all relevant clinical and economic evidence for the purpose of assisting decision makers to efficiently allocate society's scarce resources. This was true of virtually all the early cost-effectiveness evaluations sponsored and/or published by the U.S. Congressional Office of Technology Assessment (OTA) (15), Centers for Disease Control and Prevention (CDC), the National Cancer Institute, other elements of the U.S. Public Health Service, and of healthcare technology assessors in Europe and elsewhere around the world. Methodologists routinely espoused, or at minimum assumed, that these economic analyses were based on decision theory (8;24;25). Since decision theory is rooted in—in fact, an informal application of—Bayesian statistical theory, these analysts were conducting studies to assist healthcare decision making by appealing to a Bayesian rather than a classical, or frequentist, inference approach. But their efforts were not so labeled. Oddly, the statistical training of these decision analysts was invariably classical, not Bayesian. Many were not—and still are not—conversant with Bayesian statistical approaches.

4. Abraham, Christophe. "Asymptotics in Bayesian decision theory with applications to global robustness." Journal of Multivariate Analysis 95, no. 1 (July 2005): 50–65. http://dx.doi.org/10.1016/j.jmva.2004.07.001.

5. Garrett, K. A., L. V. Madden, G. Hughes, and W. F. Pfender. "New Applications of Statistical Tools in Plant Pathology." Phytopathology® 94, no. 9 (September 2004): 999–1003. http://dx.doi.org/10.1094/phyto.2004.94.9.999.

Abstract:
The series of papers introduced by this one address a range of statistical applications in plant pathology, including survival analysis, nonparametric analysis of disease associations, multivariate analyses, neural networks, meta-analysis, and Bayesian statistics. Here we present an overview of additional applications of statistics in plant pathology. An analysis of variance based on the assumption of normally distributed responses with equal variances has been a standard approach in biology for decades. Advances in statistical theory and computation now make it convenient to appropriately deal with discrete responses using generalized linear models, with adjustments for overdispersion as needed. New nonparametric approaches are available for analysis of ordinal data such as disease ratings. Many experiments require the use of models with fixed and random effects for data analysis. New or expanded computing packages, such as SAS PROC MIXED, coupled with extensive advances in statistical theory, allow for appropriate analyses of normally distributed data using linear mixed models, and discrete data with generalized linear mixed models. Decision theory offers a framework in plant pathology for contexts such as the decision about whether to apply or withhold a treatment. Model selection can be performed using Akaike's information criterion. Plant pathologists studying pathogens at the population level have traditionally been the main consumers of statistical approaches in plant pathology, but new technologies such as microarrays supply estimates of gene expression for thousands of genes simultaneously and present challenges for statistical analysis. Applications to the study of the landscape of the field and of the genome share the risk of pseudoreplication, the problem of determining the appropriate scale of the experimental unit and of obtaining sufficient replication at that scale.

6. Borysova, Valentyna I., and Bohdan P. Karnaukh. "Standard of proof in common law: Mathematical explication and probative value of statistical data." Journal of the National Academy of Legal Sciences of Ukraine 28, no. 2 (June 25, 2021): 171–80. http://dx.doi.org/10.37635/jnalsu.28(2).2021.171-180.

Abstract:
As a result of recent amendments to the procedural legislation of Ukraine, one may observe a tendency in judicial practice to differentiate the standards of proof depending on the type of litigation. Thus, in commercial litigation the so-called standard of “probability of evidence” applies, while in criminal proceedings – “beyond a reasonable doubt” standard applies. The purpose of this study was to find the rational justification for the differentiation of the standards of proof applied in civil (commercial) and criminal cases and to explain how the same fact is considered proven for the purposes of civil lawsuit and not proven for the purposes of criminal charge. The study is based on the methodology of Bayesian decision theory. The paper demonstrated how the principles of Bayesian decision theory can be applied to judicial fact-finding. According to Bayesian theory, the standard of proof applied depends on the ratio of the false positive error disutility to false negative error disutility. Since both types of error have the same disutility in a civil litigation, the threshold value of conviction is 50+ percent. In a criminal case, on the other hand, the disutility of false positive error considerably exceeds the disutility of the false negative one, and therefore the threshold value of conviction shall be much higher, amounting to 90 percent. Bayesian decision theory is premised on probabilistic assessments. And since the concept of probability has many meanings, the results of the application of Bayesian theory to judicial fact-finding can be interpreted in a variety of ways. When dealing with statistical evidence, it is crucial to distinguish between subjective and objective probability. Statistics indicate objective probability, while the standard of proof refers to subjective probability. Yet, in some cases, especially when statistical data is the only available evidence, the subjective probability may be roughly equivalent to the objective probability. In such cases, statistics cannot be ignored
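
The threshold argument in this abstract is a direct consequence of expected-loss minimisation. As a hedged illustration in our own notation (not the authors'): let p denote the fact-finder's degree of belief that the claim is true, and let d_FP and d_FN be the disutilities of a false positive and a false negative decision. Deciding in favour of the claim is optimal exactly when

```latex
(1-p)\, d_{\mathrm{FP}} \;<\; p\, d_{\mathrm{FN}}
\quad\Longleftrightarrow\quad
p \;>\; \frac{d_{\mathrm{FP}}}{d_{\mathrm{FP}} + d_{\mathrm{FN}}} .
```

With equal disutilities the threshold is 0.5; with a false positive weighted nine times as heavily it rises to 0.9, which is the civil/criminal contrast drawn above.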

7. Mukha, V. S., and N. F. Kako. "The integrals and integral transformations connected with the joint vector Gaussian distribution." Proceedings of the National Academy of Sciences of Belarus. Physics and Mathematics Series 57, no. 2 (July 16, 2021): 206–16. http://dx.doi.org/10.29235/1561-2430-2021-57-2-206-216.

Abstract:
In many applications it is desirable to consider not one random vector but a number of random vectors with a joint distribution. This paper is devoted to the integrals and integral transformations connected with the joint vector Gaussian probability density function. Such integrals and transformations arise in statistical decision theory, particularly in dual control theory based on statistical decision theory. One of the results presented in the paper is the integral of the joint Gaussian probability density function. The other results are the total probability formula and Bayes formula formulated in terms of the joint vector Gaussian probability density function. As an example, Bayesian estimations of the coefficients of the multiple regression function are obtained. The proposed integrals can be used as table integrals in various fields of research.

8. Girtler, Jerzy. "Limiting Distribution of the Three-State Semi-Markov Model of Technical State Transitions of Ship Power Plant Machines and its Applicability in Operational Decision-Making." Polish Maritime Research 27, no. 2 (June 1, 2020): 136–44. http://dx.doi.org/10.2478/pomr-2020-0035.

Abstract:
The article presents the three-state semi-Markov model of the process {W(t): t ≥ 0} of state transitions of a ship power plant machine, with the following interpretation of these states: s1 – state of full serviceability, s2 – state of partial serviceability, and s3 – state of unserviceability. These states are precisely defined for the ship main engine (ME). A hypothesis is proposed which explains the possibility of application of this model to examine models of real state transitions of ship power plant machines. Empirical data concerning ME were used for calculating limiting probabilities for the process {W(t): t ≥ 0}. The applicability of these probabilities in decision making with the assistance of the Bayesian statistical theory is demonstrated. The probabilities were calculated using a procedure included in the computational software MATHEMATICA, taking into consideration the fact that the random variables representing state transition times of the process {W(t): t ≥ 0} have gamma distributions. The usefulness of the Bayesian statistical theory in operational decision-making concerning ship power plants is shown using a decision dendrite which maps ME states and consequences of particular decisions, thus making it possible to choose between the following two decisions: d1 – first perform a relevant preventive service of the engine to restore its state and then perform the commissioned task within the time limit determined by the customer, and d2 – omit the preventive service and start performing the commissioned task.
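
For readers unfamiliar with the limiting probabilities referred to here, the standard result for an irreducible semi-Markov process is worth recalling (a generic formula in our notation, not taken from the paper): if π_j are the stationary probabilities of the embedded Markov chain and m_j the mean sojourn times in states s_j, the limiting probability of finding the machine in state s_j is

```latex
P_j \;=\; \frac{\pi_j\, m_j}{\sum_{i=1}^{3} \pi_i\, m_i}, \qquad j = 1, 2, 3,
```

so the gamma-distributed transition times enter this quantity only through their means m_j.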

9. Liu, Shun, Qin Xu, and Pengfei Zhang. "Identifying Doppler Velocity Contamination Caused by Migrating Birds. Part II: Bayes Identification and Probability Tests." Journal of Atmospheric and Oceanic Technology 22, no. 8 (August 1, 2005): 1114–21. http://dx.doi.org/10.1175/jtech1758.1.

Abstract:
Based on the Bayesian statistical decision theory, a probabilistic quality control (QC) technique is developed to identify and flag migrating-bird-contaminated sweeps of level II velocity scans at the lowest elevation angle using the QC parameters presented in Part I. The QC technique can use either each single QC parameter or all three in combination. The single-parameter QC technique is shown to be useful for evaluating the effectiveness of each QC parameter based on the smallness of the tested percentages of wrong decision by using the ground truth information (if available) or based on the smallness of the estimated probabilities of wrong decision (if there is no ground truth information). The multiparameter QC technique is demonstrated to be much better than any of the three single-parameter QC techniques, as indicated by the very small value of the tested percentages of wrong decision for no-flag decisions (not contaminated by migrating birds). Since the averages of the estimated probabilities of wrong decision are quite close to the tested percentages of wrong decision, they can provide useful information about the probability of wrong decision when the multiparameter QC technique is used for real applications (with no ground truth information).

10. Ma, Rui, Long Han, and Hujun Geng. "Implementation and Error Analysis of MNIST Handwritten Dataset Classification Based on Bayesian Decision Classifier." Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012049. http://dx.doi.org/10.1088/1742-6596/2171/1/012049.

Abstract:
In recent years, with the continuous development of computer technology, pattern recognition technology has gradually entered people's life and learning, and the demand for it is growing. To adapt to people's life and study, pattern recognition theory is applied more and more widely, for example in speech recognition, character recognition and face recognition. The main methods of pattern recognition are statistical methods, clustering, neural networks and artificial intelligence. The statistical method is one of the most classical, and Bayesian classification is widely used within it because of its convenience and good classification performance.
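
The abstract gives no implementation detail, so the following is a minimal, hypothetical sketch of a Bayes decision classifier of the kind described, using a Gaussian naive Bayes assumption and synthetic feature vectors in place of the actual MNIST images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for flattened image features: two classes, 20 features each.
X0 = rng.normal(0.0, 1.0, size=(200, 20))
X1 = rng.normal(0.7, 1.2, size=(200, 20))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

classes = np.unique(y)
priors = np.array([np.mean(y == c) for c in classes])                   # P(c)
means = np.array([X[y == c].mean(axis=0) for c in classes])             # per-class feature means
variances = np.array([X[y == c].var(axis=0) + 1e-6 for c in classes])   # per-class feature variances

def predict(x):
    # log P(c | x) is proportional to log P(c) + sum_d log N(x_d | mean_cd, var_cd)
    log_post = np.log(priors) - 0.5 * np.sum(
        np.log(2 * np.pi * variances) + (x - means) ** 2 / variances, axis=1)
    return classes[np.argmax(log_post)]   # Bayes decision: maximise the posterior

preds = np.array([predict(x) for x in X])
print("training accuracy:", np.mean(preds == y))
```

With real MNIST data the synthetic arrays would be replaced by flattened image vectors (and possibly a richer class-conditional model); the maximum-posterior decision rule itself is unchanged.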

11. Mukha, V. S., and N. F. Kako. "Integrals and integral transformations related to the vector Gaussian distribution." Proceedings of the National Academy of Sciences of Belarus. Physics and Mathematics Series 55, no. 4 (January 7, 2020): 457–66. http://dx.doi.org/10.29235/1561-2430-2019-55-4-457-466.

Abstract:
This paper is dedicated to the integrals and integral transformations related to the probability density function of the vector Gaussian distribution and arising in probability applications. Herein, we present three integrals that make it possible to calculate the moments of the multivariate Gaussian distribution. Moreover, the total probability formula and Bayes formula for the vector Gaussian distribution are given. The obtained results are proven. The deduction of the integrals is performed on the basis of the Gauss elimination method. The total probability formula and Bayes formula are obtained on the basis of the proven integrals. These integrals and integral transformations could be used, for example, in statistical decision theory, particularly in dual control theory, and as table integrals in various areas of research. On the basis of the obtained results, Bayesian estimations of the coefficients of the multiple regression function are calculated.
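
The specific integrals are not reproduced here, but a Bayes formula for jointly Gaussian vectors rests on the standard conditional-Gaussian identity (a textbook result, stated in our notation): if x and y are jointly Gaussian with means μ_x, μ_y and covariance blocks Σ_xx, Σ_xy, Σ_yy, then

```latex
x \mid y \;\sim\; \mathcal{N}\!\left( \mu_x + \Sigma_{xy} \Sigma_{yy}^{-1} (y - \mu_y),\;
\Sigma_{xx} - \Sigma_{xy} \Sigma_{yy}^{-1} \Sigma_{yx} \right),
```

which is the building block behind conjugate Gaussian posterior calculations such as the regression-coefficient estimates mentioned above.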

12. Chang, Joshua C., Julia Porcino, Elizabeth K. Rasch, and Larry Tang. "Regularized Bayesian calibration and scoring of the WD-FAB IRT model improves predictive performance over marginal maximum likelihood." PLOS ONE 17, no. 4 (April 8, 2022): e0266350. http://dx.doi.org/10.1371/journal.pone.0266350.

Abstract:
Item response theory (IRT) is the statistical paradigm underlying a dominant family of generative probabilistic models for test responses, used to quantify traits in individuals relative to target populations. The graded response model (GRM) is a particular IRT model that is used for ordered polytomous test responses. Both the development and the application of the GRM and other IRT models require statistical decisions. For formulating these models (calibration), one needs to decide on methodologies for item selection, inference, and regularization. For applying these models (test scoring), one needs to make similar decisions, often prioritizing computational tractability and/or interpretability. In many applications, such as in the Work Disability Functional Assessment Battery (WD-FAB), tractability implies approximating an individual’s score distribution using estimates of mean and variance, and obtaining that score conditional on only point estimates of the calibrated model. In this manuscript, we evaluate the calibration and scoring of models under this common use-case using Bayesian cross-validation. Applied to the WD-FAB responses collected for the National Institutes of Health, we assess the predictive power of implementations of the GRM based on their ability to yield, on validation sets of respondents, ability estimates that are most predictive of patterns of item responses. Our main finding indicates that regularized Bayesian calibration of the GRM outperforms the regularization-free empirical Bayesian procedure of marginal maximum likelihood. We also motivate the use of compactly supported priors in test scoring.

13. FELGAER, PABLO, PAOLA BRITOS, and RAMÓN GARCÍA-MARTÍNEZ. "PREDICTION IN HEALTH DOMAIN USING BAYESIAN NETWORKS OPTIMIZATION BASED ON INDUCTION LEARNING TECHNIQUES." International Journal of Modern Physics C 17, no. 03 (March 2006): 447–55. http://dx.doi.org/10.1142/s0129183106008558.

Abstract:
A Bayesian network is a directed acyclic graph in which each node represents a variable and each arc a probabilistic dependency; such networks provide a compact representation of knowledge and flexible methods of reasoning. Obtaining a Bayesian network from data is a learning process that is divided into two steps: structural learning and parametric learning. In this paper we define an automatic learning method that optimizes Bayesian networks applied to classification, using a hybrid method of learning that combines the advantages of decision-tree induction techniques (TDIDT-C4.5) with those of Bayesian networks. The resulting method is applied to prediction in the health domain.

14. STAUFFER, HOWARD B. "APPLICATION OF BAYESIAN STATISTICAL INFERENCE AND DECISION THEORY TO A FUNDAMENTAL PROBLEM IN NATURAL RESOURCE SCIENCE: THE ADAPTIVE MANAGEMENT OF AN ENDANGERED SPECIES." Natural Resource Modeling 21, no. 2 (April 29, 2008): 264–84. http://dx.doi.org/10.1111/j.1939-7445.2008.00007.x.

15. Prateepasen, Asa, Pakorn Kaewtrakulpong, and Chalermkiat Jirarungsatean. "Semi-Parametric Learning for Classification of Pitting Corrosion Detected by Acoustic Emission." Key Engineering Materials 321-323 (October 2006): 549–52. http://dx.doi.org/10.4028/www.scientific.net/kem.321-323.549.

Abstract:
This paper presents a Non-Destructive Testing (NDT) technique, Acoustic Emission (AE), to classify pitting corrosion severity in austenitic stainless steel 304 (SS304). The corrosion severity is graded roughly into five levels based on the depth of corrosion. A number of time-domain AE parameters were extracted and used as features in our classification methods. In this work, we present practical classification techniques based on Bayesian Statistical Decision Theory, namely Maximum A Posteriori (MAP) and Maximum Likelihood (ML) classifiers. A mixture of Gaussian distributions is used as the class-conditional probability density function for the classifiers. The mixture model has several appealing attributes such as the ability to model any probability density function (pdf) with any precision and the efficiency of its parameter-estimation algorithm. However, the model still suffers from model-order-selection and initialization problems which greatly limit its applications. In this work, we introduced a semi-parametric scheme for learning the mixture model which can solve the mentioned difficulties. The method was compared with conventional Feed-Forward Neural Network (FFNN) and Probabilistic Neural Network (PNN) to evaluate its performance. We found that our proposed methods gave a much lower classification error rate and also a far smaller variance across classifiers.
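
As a hedged illustration of the MAP/ML distinction described here (not the authors' code; the feature vectors, class sizes, and mixture orders below are placeholders rather than AE data), one Gaussian mixture can be fitted per severity class as the class-conditional density and the two decision rules compared:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Placeholder feature vectors for 3 (of 5) severity classes, 4 features each, unequal class sizes.
data = {c: rng.normal(loc=c, scale=1.0, size=(150 - 40 * c, 4)) for c in range(3)}

# One mixture model per class as the class-conditional density p(x | c).
models = {c: GaussianMixture(n_components=2, random_state=0).fit(X) for c, X in data.items()}
n_total = sum(len(X) for X in data.values())
log_priors = {c: np.log(len(X) / n_total) for c, X in data.items()}

def classify(x, use_prior=True):
    # ML: argmax_c log p(x | c); MAP: argmax_c [log p(x | c) + log P(c)]
    scores = {c: m.score_samples(x.reshape(1, -1))[0]
                 + (log_priors[c] if use_prior else 0.0)
              for c, m in models.items()}
    return max(scores, key=scores.get)

x_new = rng.normal(loc=1.0, scale=1.0, size=4)
print("ML decision: ", classify(x_new, use_prior=False))
print("MAP decision:", classify(x_new, use_prior=True))
```

The only difference between the two rules is whether the (unequal) class priors enter the score, which is exactly the MAP versus ML contrast drawn in the abstract.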

16. Germano, J. D. "Ecology, statistics, and the art of misdiagnosis: The need for a paradigm shift." Environmental Reviews 7, no. 4 (December 1, 1999): 167–90. http://dx.doi.org/10.1139/a99-014.

Abstract:
This paper approaches ecological data analysis from a different vantage point and has implications for ecological risk assessment. Despite all the advances in theoretical ecology over the past four decades and the huge amounts of data that have been collected in various marine monitoring programs, we still do not know enough about how marine ecosystems function to be able to make valid predictions of impacts before they occur, accurately assess ecosystem "health," or perform valid risk assessments. Comparisons are made among the fields of psychology, social science, and ecology in terms of the applications of decision theory or approach to problem diagnosis. In all of these disciplines, researchers are dealing with phenomena whose mechanisms are poorly understood. One of the biggest impediments to the interpretation of ecological data and the advancement of our understanding about ecosystem function is the desire of marine scientists and policy regulators to cling to the ritual of null hypothesis significance testing (NHST) with mechanical dichotomous decisions around a sacred 0.05 criterion. The paper is divided into three main sections: first, a brief overview of common misunderstandings about NHST; second, why diagnosis of ecosystem health is and will be such a difficult task; and finally, some suggestions about alternative approaches for ecologists to improve our "diagnostic accuracy" by taking heed of lessons learned in the fields of clinical psychology and medical epidemiology. Key words: statistical significance, Bayesian statistics, risk assessment

17. Manski, Charles F., and Aleksey Tetenov. "Sufficient trial size to inform clinical practice." Proceedings of the National Academy of Sciences 113, no. 38 (September 6, 2016): 10518–23. http://dx.doi.org/10.1073/pnas.1612174113.

Abstract:
Medical research has evolved conventions for choosing sample size in randomized clinical trials that rest on the theory of hypothesis testing. Bayesian statisticians have argued that trials should be designed to maximize subjective expected utility in settings of clinical interest. This perspective is compelling given a credible prior distribution on treatment response, but there is rarely consensus on what the subjective prior beliefs should be. We use Wald’s frequentist statistical decision theory to study design of trials under ambiguity. We show that ε-optimal rules exist when trials have large enough sample size. An ε-optimal rule has expected welfare within ε of the welfare of the best treatment in every state of nature. Equivalently, it has maximum regret no larger than ε. We consider trials that draw predetermined numbers of subjects at random within groups stratified by covariates and treatments. We report exact results for the special case of two treatments and binary outcomes. We give simple sufficient conditions on sample sizes that ensure existence of ε-optimal treatment rules when there are multiple treatments and outcomes are bounded. These conditions are obtained by application of Hoeffding large deviations inequalities to evaluate the performance of empirical success rules.
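
To make the role of the Hoeffding inequality concrete, here is a back-of-the-envelope version of the two-treatment case with outcomes in [0, 1] and n subjects per arm (a sketch in our notation, not the paper's exact constants): if the better treatment wins by a margin Δ in expectation, Hoeffding's inequality bounds the probability that the empirical success rule picks the wrong treatment by exp(−nΔ²), so

```latex
\text{regret}(\Delta) \;\le\; \Delta\, e^{-n\Delta^{2}},
\qquad
\sup_{\Delta > 0} \Delta\, e^{-n\Delta^{2}} \;=\; \frac{1}{\sqrt{2 e n}} ,
```

and a per-arm sample size of n ≥ 1/(2eε²) already guarantees an ε-optimal rule in this stylised setting; the paper's exact results refine the constants and cover multiple treatments.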

18. Mousavi, Seyed Pezhman, Saeid Atashrouz, Menad Nait Amar, Abdolhossein Hemmati-Sarapardeh, Ahmad Mohaddespour, and Amir Mosavi. "Viscosity of Ionic Liquids: Application of the Eyring’s Theory and a Committee Machine Intelligent System." Molecules 26, no. 1 (December 31, 2020): 156. http://dx.doi.org/10.3390/molecules26010156.

Abstract:
Accurate determination of the physicochemical characteristics of ionic liquids (ILs), especially viscosity, over a wide range of operating conditions is of vital importance for various fields. In this study, the viscosity of pure ILs is modeled using three approaches: (I) a simple group contribution method based on temperature, pressure, boiling temperature, acentric factor, molecular weight, critical temperature, critical pressure, and critical volume; (II) a model based on thermodynamic properties, pressure, and temperature; and (III) a model based on chemical structure, pressure, and temperature. Furthermore, Eyring’s absolute rate theory is used to predict viscosity based on boiling temperature and temperature. To develop Model (I), a simple correlation was applied, while for Models (II) and (III), smart approaches such as multilayer perceptron networks optimized by a Levenberg–Marquardt algorithm (MLP-LMA) and Bayesian Regularization (MLP-BR), decision tree (DT), and least square support vector machine optimized by bat algorithm (BAT-LSSVM) were utilized to establish robust and accurate predictive paradigms. These approaches were implemented using a large database consisting of 2813 experimental viscosity points from 45 different ILs under an extensive range of pressure and temperature. Afterward, the four most accurate models were selected to construct a committee machine intelligent system (CMIS). The results of Eyring’s theory for predicting viscosity demonstrated that although the theory is not precise, its simplicity is still beneficial. The proposed CMIS model provides the most precise responses with an absolute average relative deviation (AARD) of less than 4% for predicting the viscosity of ILs based on Models (II) and (III). Lastly, the applicability domain of the CMIS model and the quality of experimental data were assessed through the Leverage statistical method. It is concluded that intelligent-based predictive models are powerful alternatives to time-consuming and expensive experimental measurement of IL viscosity.

19. Schwarz, Johanna, and Dominik Heider. "GUESS: projecting machine learning scores to well-calibrated probability estimates for clinical decision-making." Bioinformatics 35, no. 14 (November 29, 2018): 2458–65. http://dx.doi.org/10.1093/bioinformatics/bty984.

Abstract:
Motivation: Clinical decision support systems have been applied in numerous fields, ranging from cancer survival to drug resistance prediction. Nevertheless, clinical decision support systems typically have a caveat: many of them are perceived as black-boxes by non-experts and, unfortunately, the obtained scores cannot usually be interpreted as class probability estimates. In probability-focused medical applications, it is not sufficient to perform well with regard to discrimination and, consequently, various calibration methods have been developed to enable probabilistic interpretation. The aims of this study were (i) to develop a tool for fast and comparative analysis of different calibration methods, (ii) to demonstrate their limitations for use on clinical data and (iii) to introduce our novel method GUESS. Results: We compared the performances of two different state-of-the-art calibration methods, namely histogram binning and Bayesian Binning in Quantiles, as well as our novel method GUESS on both simulated and real-world datasets. GUESS demonstrated calibration performance comparable to the state-of-the-art methods and always retained accurate class discrimination. GUESS showed superior calibration performance in small datasets and therefore may be an optimal calibration method for typical clinical datasets. Moreover, we provide a framework (CalibratR) for R, which can be used to identify the most suitable calibration method for novel datasets in a timely and efficient manner. Using calibrated probability estimates instead of original classifier scores will contribute to the acceptance and dissemination of machine learning based classification models in cost-sensitive applications, such as clinical research. Availability and implementation: GUESS, as part of CalibratR, can be downloaded from CRAN.

20. Solodov, A. A. "Mathematical Formalization and Algorithmization of the Main Modules of Organizational and Technical Systems." Statistics and Economics 17, no. 4 (September 6, 2020): 96–104. http://dx.doi.org/10.21686/2500-3925-2020-4-96-104.

Abstract:
The purpose of the research is to develop a generalized structural scheme of organizational and technical systems based on the general theory of management, which contains the necessary and sufficient number of modules and formalize on this basis the main management tasks that act as goals of the behavior of the management object. The main modules that directly implement the management process are the status assessment module of organizational and technical systems and the management module. It is shown that in traditional organizational and technical systems, including the decision-maker, the key module is the state assessment module of organizational and technical systems. In this regard, the key aspect of the work is to study the optimal algorithms for evaluating the state of processes occurring in the organizational and technical systems and develop on this basis the principles of mathematical formalization and algorithmization of the status assessment module. The research method is the application of the principles of the theory of statistical estimates of random processes occurring in the organizational and technical systems against the background of interference and the synthesis of algorithms for the functioning of the status assessment module on this basis. It is shown that a characteristic feature of random processes occurring in organizational and technical systems is their essentially discrete nature and Poisson statistics. A mathematical description of the statistical characteristics of point random processes is formulated, which is suitable for solving the main problems of process evaluation and management in organizational and technical systems. The main results were the definition of state space of the organizational and technical systems, the development of a generalized structural scheme of the organizational and technical systems in state space that includes the modules forming the state variable of the module assessment and module management. This mathematical interpretation of the organizational and technical systems structure allowed us to formalize the main problems solved by typical organizational and technical systems and consider optimal algorithms for solving such problems. The assumption when considering the problems of synthesis of optimal algorithms is to optimize the status assessment module of organizational and technical systems and the control module separately, while the main attention is paid to the consideration of optimal estimation algorithms. The formalization and algorithmization of the organizational and technical systems behavior is undertaken mainly in terms of the Bayesian criterion of optimal statistical estimates. Various methods of overcoming a priori uncertainty typical for the development of real organizational and technical systems are indicated. Methods of adaptation are discussed, including Bayesian adaptation of the decision-making procedure under conditions of a priori uncertainty. Using a special case of the Central limit theorem, an asymptotic statistical relationship between the mentioned point processes and traditional Gaussian processes is established. As an example, a nontrivial problem of optimal detection of Poisson signal against a background of Poisson noise is considered; graphs of the potential noise immunity of this algorithm are calculated and presented. The corresponding references are given to the previously obtained results of estimates of Poisson processes. 
For automatic organizational and technical systems, the generally accepted criteria for the quality of management of such systems are specified. The result of the review is a classification of methods for formalization and algorithmization of problems describing the behavior of organizational and technical systems.

21. Kiranagi, Manasi, Devika Dhoble, Madeeha Tahoor, and Rekha Patil. "Finding Optimal Path and Privacy Preserving for Wireless Network." International Journal for Research in Applied Science and Engineering Technology 10, no. 10 (October 31, 2022): 360–65. http://dx.doi.org/10.22214/ijraset.2022.46949.

Abstract:
Privacy-preserving routing protocols in wireless networks frequently utilize additional artificial traffic to hide the source-destination identities of the communicating pair. Usually, the addition of artificial traffic is done heuristically with no guarantees that the transmission cost, latency, etc., are optimized in every network topology. We explicitly examine the privacy-utility trade-off problem for wireless networks and develop a novel privacy-preserving routing algorithm called Optimal Privacy Enhancing Routing Algorithm (OPERA). OPERA uses a statistical decision-making framework to optimize the privacy of the routing protocol given a utility (or cost) constraint. We consider global adversaries with both lossless and lossy observations that use the Bayesian maximum-a-posteriori (MAP) estimation strategy. We formulate the privacy-utility trade-off problem as a linear program which can be efficiently solved.

22. Shchetinina, Olena, Olesia Smyrnova, and Valerii Kotliar. "FINANCIAL MODELING: PROBABILITY THEORETIC APPROACHES." Herald of Kyiv National University of Trade and Economics 139, no. 5 (October 25, 2021): 127–38. http://dx.doi.org/10.31617/visnik.knute.2021(139)09.

Abstract:
Background. A large number of significant socio-economic events occur under the influence of unique factors. Formal application of probabilistic and statistical methods in such cases leads to analytical conclusions without sufficient scientific justification. Financial modeling reflects modern approaches to the probability interpretation, provides introduction and systematization of risk indicators, and the necessity of improving theoretical and probabilistic disciplines of economic orientation. Analysis of recent research and publications has shown that despite significant investigations, financial modeling is not theoretically complete scientific direction in terms of economic risk indicators and derivative characteristics, important scientific and practical problems remain unresolved in the analysis of socio-economic phenomena in unce­rtainty and implementation of modern achievements of scientists to the process. The aim of the article is to study theoretical and probabilistic concepts of socio-economic processes in conditions of uncertainty and uniqueness based on the financial modeling methods. Materials and methods. Analytical and statistical methods, methods of mathematical statistics and probability theory are used in the research process. Information database is data from trading sessions of world stock markets. Results. Theoretical and probabilistic concepts, including interpretations of probability and risk are considered through formalization of the analysis process by the subject of the socio-economic phenomenon in conditions of uncertainty. Models of typical stationary, dynamic, parity and dominant lotteries with introduced risk indicators are built. Risk is interpreted as the ratio of negative and favorable factors of the phenomenon information background. Relevant indicators are illustrated and calculated using various socio-economic and financial cases. Subjective-probabilistic modeling (SPM) in relation to decision-making in the financial market is studied as the development of Bayesian subjectivism. It has been shown that group consensus SPM-assessments of risk generate specific derivative financial instruments such as binary options, index derivatives, crypto-assets, etc. Conclusion. The results of the study showed the application effectiveness of financial modeling methods of risks assessment in financial markets, the prospects of relevant development in the field of financial engineering. Teaching economic disciplines, which are based on theoretical and probabilistic postulates, statistical and analytical-statistical procedures for calculating probabilistic indicators (probability, risk, prevention regulations, etc.), requires significant addition using the introduction of new methods of information analysis of social background, financial sphere to determine the optimal direction of development and investment activities. Keywords: risk ratio, probability interpretation, binary options, financial modeling, high-risk financial markets, subjective-probabilistic modeling.

23. Motomura, Yoichi. "Bayesian Network: Probabilistic Reasoning, Statistical Learning, and Applications." Journal of Advanced Computational Intelligence and Intelligent Informatics 8, no. 2 (March 20, 2004): 93–99. http://dx.doi.org/10.20965/jaciii.2004.p0093.

Abstract:
Bayesian networks are probabilistic models that can be used for prediction and decision-making in the presence of uncertainty. For intelligent information processing, probabilistic reasoning based on Bayesian networks can be used to cope with uncertainty in real-world domains. In order to apply this, we need appropriate models and statistical learning methods to obtain models. We start by reviewing Bayesian network models, probabilistic reasoning, statistical learning, and related researches. Then, we introduce applications for intelligent information processing using Bayesian networks.

24. de la Horra, Julián. "Bayesian robustness of the quantile loss in statistical decision theory." Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales. Serie A. Matematicas 107, no. 2 (May 16, 2012): 451–58. http://dx.doi.org/10.1007/s13398-012-0070-x.

25. Ziegel, Eric R., S. Panchapakesan, and N. Balakrishnan. "Advances in Statistical Decision Theory and Applications." Technometrics 40, no. 2 (May 1998): 165. http://dx.doi.org/10.2307/1270670.

26. MTW, S. Panchapakesan, and N. Balakrishnan. "Advances in Statistical Decision Theory and Applications." Journal of the American Statistical Association 95, no. 450 (June 2000): 689. http://dx.doi.org/10.2307/2669433.

27. Chen, Pinyuen, S. Panchapakesan, and N. Balakrishnan. "Advances in Statistical Decision Theory and Applications." Journal of the American Statistical Association 93, no. 443 (September 1998): 1239. http://dx.doi.org/10.2307/2669874.

28. Panchapakesan, S., and N. Balakrishnan. "Advances in Statistical Decision Theory and Applications." Technometrics 40, no. 2 (May 1998): 165. http://dx.doi.org/10.1080/00401706.1998.10485218.

29. Vos, Hans J. "Applications of Bayesian Decision Theory to Sequential Mastery Testing." Journal of Educational and Behavioral Statistics 24, no. 3 (September 1999): 271–92. http://dx.doi.org/10.3102/10769986024003271.

Abstract:
The purpose of this paper is to formulate optimal sequential rules for mastery tests. The framework for the approach is derived from Bayesian sequential decision theory. Both a threshold and linear loss structure are considered. The binomial probability distribution is adopted as the psychometric model involved. Conditions sufficient for sequentially setting optimal cutting scores are presented. Optimal sequential rules will be derived for the case of a subjective beta distribution representing the prior true level of functioning. An empirical example of sequential mastery testing for concept-learning in medicine concludes the paper.
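
A minimal, hypothetical sketch of the kind of rule described (beta prior on the true level of functioning, binomial responses, threshold loss; the cutting score, losses, and per-item cost below are illustrative placeholders, and the continuation rule is a crude stand-in for the paper's backward-induction conditions):

```python
from scipy.stats import beta

def sequential_decision(x, n, a=1.0, b=1.0, theta_c=0.7,
                        loss_pass_nonmaster=2.0, loss_fail_master=1.0,
                        cost_per_item=0.02):
    """Decide 'pass', 'fail', or 'continue' after x correct responses out of n items."""
    # Posterior over the true level of functioning theta: Beta(a + x, b + n - x).
    p_nonmaster = beta.cdf(theta_c, a + x, b + n - x)   # P(theta < theta_c | data)
    p_master = 1.0 - p_nonmaster
    risk_pass = loss_pass_nonmaster * p_nonmaster       # expected loss of declaring mastery
    risk_fail = loss_fail_master * p_master             # expected loss of declaring non-mastery
    # Continue testing only if the best terminal decision is still costlier
    # than administering one more item.
    if min(risk_pass, risk_fail) > cost_per_item:
        return "continue"
    return "pass" if risk_pass <= risk_fail else "fail"

for x, n in [(3, 4), (5, 10), (14, 16)]:
    print(x, "of", n, "->", sequential_decision(x, n))
```

Each additional response simply updates the Beta posterior, so the rule can be re-evaluated after every item.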

30. Vos, Hans J. "Applications of Bayesian Decision Theory to Sequential Mastery Testing." Journal of Educational and Behavioral Statistics 24, no. 3 (1999): 271. http://dx.doi.org/10.2307/1165325.

31. Vos, Hans J. "Applications of Bayesian decision theory to intelligent tutoring systems." Computers in Human Behavior 11, no. 1 (March 1995): 149–62. http://dx.doi.org/10.1016/0747-5632(94)00029-h.

32. Kenett, Ron S. "Bayesian networks: Theory, applications and sensitivity issues." Encyclopedia with Semantic Computing and Robotic Intelligence 01, no. 01 (March 2017): 1630014. http://dx.doi.org/10.1142/s2425038416300147.

Abstract:
This chapter is about an important tool in the data science workbench, Bayesian networks (BNs). Data science is about generating information from a given data set using applications of statistical methods. The quality of the information derived from data analysis is dependent on various dimensions, including the communication of results, the ability to translate results into actionable tasks and the capability to integrate various data sources [R. S. Kenett and G. Shmueli, On information quality, J. R. Stat. Soc. A 177(1), 3 (2014).] This paper demonstrates, with three examples, how the application of BNs provides a high level of information quality. It expands the treatment of BNs as a statistical tool and provides a wider scope of statistical analysis that matches current trends in data science. For more examples on deriving high information quality with BNs see [R. S. Kenett and G. Shmueli, Information Quality: The Potential of Data and Analytics to Generate Knowledge (John Wiley and Sons, 2016), www.wiley.com/go/information_quality.] The three examples used in the chapter are complementary in scope. The first example is based on expert opinion assessments of risks in the operation of health care monitoring systems in a hospital environment. The second example is from the monitoring of an open source community and is a data rich application that combines expert opinion, social network analysis and continuous operational variables. The third example is totally data driven and is based on an extensive customer satisfaction survey of airline customers. The first section is an introduction to BNs, Sec. 2 provides a theoretical background on BN. Examples are provided in Sec. 3. Section 4 discusses sensitivity analysis of BNs, Sec. 5 lists a range of software applications implementing BNs. Section 6 concludes the chapter.

33. Reinhardt, Howard E. "Statistical Decision Theory and Bayesian Analysis. Second Edition (James O. Berger)." SIAM Review 29, no. 3 (September 1987): 487–89. http://dx.doi.org/10.1137/1029095.

34. Guan, J. W., Z. Guan, and D. A. Bell. "Bayesian probability on boolean algebras and applications to decision theory." Information Sciences 97, no. 3-4 (April 1997): 267–92. http://dx.doi.org/10.1016/s0020-0255(96)00192-2.

35. Ito, Yoshifusa, and Cidambi Srinivasan. "Bayesian decision theory on three-layer neural networks." Neurocomputing 63 (January 2005): 209–28. http://dx.doi.org/10.1016/j.neucom.2004.05.005.

36. Karr, Alan F. "Statistical Matching: A Frequentist Theory, Practical Applications and Alternative Bayesian Approaches." Journal of the American Statistical Association 102, no. 477 (March 2007): 386. http://dx.doi.org/10.1198/jasa.2007.s175.

37. Binmore, Ken. "A minimal extension of Bayesian decision theory." Theory and Decision 80, no. 3 (May 22, 2015): 341–62. http://dx.doi.org/10.1007/s11238-015-9505-0.

38. Lehmann, Harold P., and Edward H. Shortliffe. "THOMAS: building Bayesian statistical expert systems to aid in clinical decision making." Computer Methods and Programs in Biomedicine 35, no. 4 (August 1991): 251–60. http://dx.doi.org/10.1016/0169-2607(91)90003-c.

39. Geisler, Wilson S., and Randy L. Diehl. "Bayesian natural selection and the evolution of perceptual systems." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 357, no. 1420 (April 29, 2002): 419–48. http://dx.doi.org/10.1098/rstb.2001.1055.

Abstract:
In recent years, there has been much interest in characterizing statistical properties of natural stimuli in order to better understand the design of perceptual systems. A fruitful approach has been to compare the processing of natural stimuli in real perceptual systems with that of ideal observers derived within the framework of Bayesian statistical decision theory. While this form of optimization theory has provided a deeper understanding of the information contained in natural stimuli as well as of the computational principles employed in perceptual systems, it does not directly consider the process of natural selection, which is ultimately responsible for design. Here we propose a formal framework for analysing how the statistics of natural stimuli and the process of natural selection interact to determine the design of perceptual systems. The framework consists of two complementary components. The first is a maximum fitness ideal observer, a standard Bayesian ideal observer with a utility function appropriate for natural selection. The second component is a formal version of natural selection based upon Bayesian statistical decision theory. Maximum fitness ideal observers and Bayesian natural selection are demonstrated in several examples. We suggest that the Bayesian approach is appropriate not only for the study of perceptual systems but also for the study of many other systems in biology.
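
The decision rule at the heart of these ideal observers can be stated compactly (a standard Bayesian formulation in our notation, with expected fitness playing the role of expected utility in the maximum fitness case): given stimulus data x and possible states of the world s, the observer picks

```latex
r^{*}(x) \;=\; \arg\max_{r} \sum_{s} U(r, s)\, p(s \mid x),
\qquad
p(s \mid x) \;\propto\; p(x \mid s)\, p(s),
```

so the Bayesian ingredients are the prior p(s), the likelihood p(x | s) determined by natural-stimulus statistics, and a utility U tied to reproductive fitness.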

40. De Waal, D. J. "Summary on Bayes estimation and hypothesis testing." Suid-Afrikaanse Tydskrif vir Natuurwetenskap en Tegnologie 7, no. 1 (March 17, 1988): 28–32. http://dx.doi.org/10.4102/satnt.v7i1.896.

Abstract:
Although Bayes’ theorem was published in 1764, it is only recently that Bayesian procedures were used in practice in statistical analyses. Many developments have taken place and are still taking place in the areas of decision theory and group decision making. Two aspects, namely that of estimation and tests of hypotheses, will be looked into. This is the area of statistical inference mainly concerned with Mathematical Statistics.

41. Galvani, Marta, Chiara Bardelli, Silvia Figini, and Pietro Muliere. "A Bayesian Nonparametric Learning Approach to Ensemble Models Using the Proper Bayesian Bootstrap." Algorithms 14, no. 1 (January 3, 2021): 11. http://dx.doi.org/10.3390/a14010011.

Abstract:
Bootstrap resampling techniques, introduced by Efron and Rubin, can be presented in a general Bayesian framework, approximating the statistical distribution of a statistical functional ϕ(F), where F is a random distribution function. Efron’s and Rubin’s bootstrap procedures can be extended, introducing an informative prior through the Proper Bayesian bootstrap. In this paper different bootstrap techniques are used and compared in predictive classification and regression models based on ensemble approaches, i.e., bagging models involving decision trees. Proper Bayesian bootstrap, proposed by Muliere and Secchi, is used to sample the posterior distribution over trees, introducing prior distributions on the covariates and the target variable. The results obtained are compared with respect to other competitive procedures employing different bootstrap techniques. The empirical analysis reports the results obtained on simulated and real data.

42. Wijayanti, Rina. "PENAKSIRAN PARAMETER ANALISIS REGRESI COX DAN ANALISIS SURVIVAL BAYESIAN" [Parameter estimation for Cox regression analysis and Bayesian survival analysis]. PRISMATIKA: Jurnal Pendidikan dan Riset Matematika 1, no. 2 (June 1, 2019): 16–26. http://dx.doi.org/10.33503/prismatika.v1i2.427.

Abstract:
In estimation theory there are two approaches: the classical statistical approach and the Bayesian approach. Classical statistics bases its decisions only on the data sample taken from the population, while Bayesian statistics bases its decisions on new information from the observed data (sample) combined with prior knowledge. In this paper, Cox regression analysis is taken as an example of parameter estimation under the classical statistical approach, and Bayesian survival analysis as an example of the Bayesian approach. Bayesian survival parameter estimation uses MCMC algorithms for models that are complex and difficult to solve, while the Cox regression model uses the partial likelihood method. The resulting parameter estimates have no closed form, so the Newton-Raphson iteration method is required.
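
The Newton-Raphson iteration mentioned at the end is the standard one for maximising a log (partial) likelihood ℓ(β) whose estimating equations have no closed-form solution (a generic reminder, not a formula taken from this paper):

```latex
\beta^{(k+1)} \;=\; \beta^{(k)} + \left[ -\ell''\!\big(\beta^{(k)}\big) \right]^{-1} \ell'\!\big(\beta^{(k)}\big),
```

iterated until the score ℓ'(β) is sufficiently close to zero.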

43. Smith, Kevin M. "Decision Making in Complex Environments." International Journal of Aviation Systems, Operations and Training 4, no. 2 (July 2017): 1–14. http://dx.doi.org/10.4018/ijasot.2017070101.

Abstract:
Bayesian probability theory, signal detection theory, and operational decision theory are combined to understand how one can operate effectively in complex environments, which requires uncommon skill sets for performance optimization. The analytics of uncertainty in the form of Bayesian theorem applied to a moving object is presented, followed by how operational decision making is applicable to all complex environments. Large-scale dynamic systems have erratic behavior, so there is a need to effectively manage risk. Risk management needs to be addressed from the standpoint of convergent technology applications and performance modeling. The example of an airplane during takeoff shows how a risk continuum needs to be developed. An unambiguous demarcation line for low, moderate, and high risk is made and the decision analytical structure for all operational decisions is developed. Three mission-critical decisions are discussed to optimize performance: to continue or abandon the mission, the approach go-around maneuver, and the takeoff go/no-go decision.

44. Hsu, Che-Chang, Kuo-Shong Wang, and Shih-Hsing Chang. "Bayesian decision theory for support vector machines: Imbalance measurement and feature optimization." Expert Systems with Applications 38, no. 5 (May 2011): 4698–704. http://dx.doi.org/10.1016/j.eswa.2010.08.150.

45. Rodgers, Mark, and Rosa Oppenheim. "Ishikawa diagrams and Bayesian belief networks for continuous improvement applications." TQM Journal 31, no. 3 (May 8, 2019): 294–318. http://dx.doi.org/10.1108/tqm-11-2018-0184.

Abstract:
Purpose: In continuous improvement (CI) projects, cause-and-effect diagrams are used to qualitatively express the relationship between a given problem and its root causes. However, when data collection activities are limited, and advanced statistical analyses are not possible, practitioners need to understand causal relationships. The paper aims to discuss these issues. Design/methodology/approach: In this research, the authors present a framework that combines cause-and-effect diagrams with Bayesian belief networks (BBNs) to estimate causal relationships in instances where formal data collection/analysis activities are too costly or impractical. Specifically, the authors use cause-and-effect diagrams to create causal networks, and leverage elicitation methods to estimate the likelihood of risk scenarios by means of computer-based simulation. Findings: This framework enables CI practitioners to leverage qualitative data and expertise to conduct in-depth statistical analysis in the event that data collection activities cannot be fully executed. Furthermore, this allows CI practitioners to identify critical root causes of a given problem under investigation before generating solutions. Originality/value: This is the first framework that translates qualitative insights from a cause-and-effect diagram into a closed-form relationship between inputs and outputs by means of BBN models, simulation and regression.

46. North, D. W. "Analysis of Uncertainty and Reaching Broad Conclusions." Journal of the American College of Toxicology 7, no. 5 (September 1988): 583–90. http://dx.doi.org/10.3109/10915818809019535.

Abstract:
Probability theory can provide a general way of reasoning about uncertainty, even when data are sparse or absent. The idea that probabilities can represent judgment is a basic principle for decision analysis and for the Bayesian school of statistics. The use of judgmental probabilities and Bayesian statistical methods for the analysis of toxicological data appears to be promising in reaching broad conclusions for policy and for research planning. Illustrative examples are given using quantal dose-response data from carcinogenicity bioassays for two chemicals, perchloroethylene and alachlor.

47. Nassar, Mazen, Refah Alotaibi, Hassan Okasha, and Liang Wang. "Bayesian Estimation Using Expected LINEX Loss Function: A Novel Approach with Applications." Mathematics 10, no. 3 (January 29, 2022): 436. http://dx.doi.org/10.3390/math10030436.

Abstract:
The loss function plays an important role in Bayesian analysis and decision theory. In this paper, a new Bayesian approach is introduced for parameter estimation under the asymmetric linear-exponential (LINEX) loss function. In order to provide a robust estimation and avoid making subjective choices, the proposed method assumes that the parameter of the LINEX loss function has a probability distribution. The Bayesian estimator is then obtained by taking the expectation of the common LINEX-based Bayesian estimator over the probability distribution. This alternative proposed method is applied to estimate the exponential parameter by considering three different distributions of the LINEX parameter, and the associated Bayes risks are also obtained in consequence. Extensive simulation studies are conducted in order to compare the performance of the proposed new estimators. In addition, three real data sets are analyzed to investigate the applicability of the proposed results. The results of the simulation and real data analysis show that the proposed estimation works satisfactorily and performs better than the conventional standard Bayesian approach in terms of minimum mean square error and Bayes risk.
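
For context, the conventional LINEX-based Bayes estimator that the proposed approach averages over is the well-known one (stated in our notation): with loss L(Δ) = b[exp(aΔ) − aΔ − 1] for Δ = θ̂ − θ, the posterior expected loss is minimised by

```latex
\hat{\theta}_{\mathrm{LINEX}} \;=\; -\frac{1}{a} \ln \mathbb{E}\!\left[ e^{-a\theta} \mid \text{data} \right],
```

and the paper's estimator is then the expectation of this quantity over an assumed probability distribution for the shape parameter a, as described in the abstract.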

48. Vadde, S., J. K. Allen, and F. Mistree. "The Bayesian Compromise Decision Support Problem for Multilevel Design Involving Uncertainty." Journal of Mechanical Design 116, no. 2 (June 1, 1994): 388–95. http://dx.doi.org/10.1115/1.2919391.

Abstract:
In this paper we present an extension to the traditional compromise Decision Support Problem (DSP) formulation. In this formulation we use Bayesian statistics to model uncertainties associated with the information being used. In an earlier paper we have introduced a compromise DSP that accounts for uncertainty using fuzzy set theory. In this paper we describe the Bayesian Decision Support Problem. We use this formulation to design a portal frame structure. We discuss the results and compare them with those obtained using the fuzzy DSP. Finally, we discuss the efficacy of incorporating Bayesian statistics into the traditional compromise DSP formulation and describe some of the pending research issues.

49. Ayanbayev, Birzhan, Ilja Klebanov, Han Cheng Lie, and T. J. Sullivan. "Γ-convergence of Onsager–Machlup functionals: I. With applications to maximum a posteriori estimation in Bayesian inverse problems." Inverse Problems 38, no. 2 (December 28, 2021): 025005. http://dx.doi.org/10.1088/1361-6420/ac3f81.

Abstract:
The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e. a maximum a posteriori (MAP) estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as minimisation of the Onsager–Machlup (OM) functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the Γ-convergence of OM functionals, and apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors. Part II of this paper considers more general prior distributions.

50. Paulauskaite-Taraseviciene, Agne, Vaidas Jukavicius, Nerijus Morkevicius, Raimundas Jasinevicius, Vytautas Petrauskas, and Vygintas Kazanavicius. "Statistical Evaluation of Four Technologies used for Intelectualization of Smart Home Environment." Information Technology And Control 44, no. 3 (September 24, 2015): 334–44. http://dx.doi.org/10.5755/j01.itc.44.3.11965.

Abstract:
This paper addresses the issues of decision-making methods and their usage capabilities for intelligent control based on residents' habits. Learning from the behaviour of the resident is essential for the system to adapt and provide intelligent control based on behaviour patterns. Different homes have different conditions and habits which have to be taken into account for the intelligent system to be useful. However, even deeply ingrained habits are subject to change over time. Therefore, an intelligent system has to respond to a changing and diverse environment. Various decision-making methods have the potential to bring a number of benefits in providing intelligent control for smart home systems. In this paper, concurrent decision-making methods, including Artificial Neural Networks, Fuzzy Logic, Linear Programming and Bayesian methods, are employed with particular algorithms in order to provide control based on residents' habits. These approaches are tested and compared within experimental scenarios for intelligent lighting control.