Journal articles on the topic 'Utility theory – Mathematical models'

Consult the top 50 journal articles for your research on the topic 'Utility theory – Mathematical models.'

1

Servedio, Maria R., Yaniv Brandvain, Sumit Dhole, Courtney L. Fitzpatrick, Emma E. Goldberg, Caitlin A. Stern, Jeremy Van Cleve, and D. Justin Yeh. "Not Just a Theory—The Utility of Mathematical Models in Evolutionary Biology." PLoS Biology 12, no. 12 (December 9, 2014): e1002017. http://dx.doi.org/10.1371/journal.pbio.1002017.

2

Gasparian, Mikhail Samuilovich, Irina Anatolievna Kiseleva, Valery Alexandrovich Titov, and Natalia Alekseevna Sadovnikova. "St. Petersburg paradox: adoption of decisions on the basis of data mining and development of software in the sphere of business analytics." Nexo Revista Científica 34, no. 04 (October 28, 2021): 1370–80. http://dx.doi.org/10.5377/nexo.v34i04.12676.

Abstract:
This article is devoted to the analysis of models of the St. Petersburg paradox, as well as the development of software for business analytics. The work is based on mathematical models drawing on probability theory and game theory, together with an expert survey method. It is demonstrated that the St. Petersburg paradox is a mathematical problem of probability theory with artificial conditions. The influence of this problem on economic theory is exemplified by such provisions as the principle of diminishing marginal utility, the use of expected utility as a decision criterion under uncertainty, the foundations of the microeconomics of insurance and risk management, game theory, and some approaches to financial modeling. Decision-making on the basis of the St. Petersburg paradox is analyzed. A review of the main solutions of the St. Petersburg paradox and their influence on economic theory confirms that the paradox, as a mathematical problem, can be used as a mathematical model in financial simulation. A comparative analysis of available BI solutions confirms that most of them offer all the major functions, with significant differences appearing only in the depth of advanced functionality.
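As a minimal illustration of the mechanics (my sketch, not code from the article): the game pays 2^k with probability 2^-k, so the expected payoff diverges, while the expected logarithmic utility, the diminishing-marginal-utility resolution attributed to Daniel Bernoulli, converges to 2 ln 2 ≈ 1.386.

```python
import math

# St. Petersburg game: payoff 2**k with probability 2**(-k), k = 1, 2, ...

def expected_value(n_terms: int) -> float:
    """Partial sum of the expected payoff; every term equals 1, so it diverges."""
    return sum(2**-k * 2**k for k in range(1, n_terms + 1))

def expected_log_utility(n_terms: int) -> float:
    """Partial sum of E[ln(payoff)]; converges to 2*ln(2)."""
    return sum(2**-k * math.log(2**k) for k in range(1, n_terms + 1))

print(expected_value(50))        # 50.0 -- keeps growing with more terms
print(expected_log_utility(50))  # ~1.386 -- a finite utility for the gamble
```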
3

Kim, S. H., and N. P. Suh. "Mathematical Foundations for Manufacturing." Journal of Engineering for Industry 109, no. 3 (August 1, 1987): 213–18. http://dx.doi.org/10.1115/1.3187121.

Abstract:
For the field of manufacturing to become a science, it is necessary to develop general mathematical descriptions for the analysis and synthesis of manufacturing systems. Standard analytic models, as used extensively in the past, are ineffective for describing the general manufacturing situation due to their inability to deal with discontinuous and nonlinear phenomena. These limitations are transcended by algebraic models based on set structures. Set-theoretic and algebraic structures may be used to (1) express with precision a variety of important qualitative concepts such as hierarchies, (2) provide a uniform framework for more specialized theories such as automata theory and control theory, and (3) provide the groundwork for quantitative theories. By building on the results of other fields such as automata theory and computability theory, algebraic structures may be used as a general mathematical tool for studying the nature and limits of manufacturing systems. This paper shows how manufacturing systems may be modeled as automata, and demonstrates the utility of this approach by discussing a number of theorems concerning the nature of manufacturing systems. In addition, symbolic logic is used to formalize the Design Axioms, a set of generalized decision rules for design. The application of symbolic logic allows for the precise formulation of the Axioms and facilitates their interpretation in a logical programming language such as Prolog. Consequently, it is now possible to develop a consultative expert system for axiomatic design.
4

Assaf, Matheus, and Pedro Garcia Duarte. "Utility Matters." History of Political Economy 52, no. 5 (October 1, 2020): 863–94. http://dx.doi.org/10.1215/00182702-8671855.

Abstract:
The present-day standard textbook narrative on the history of growth theory usually takes Robert Solow's 1956 contribution as a key starting point, later extended by David Cass and Tjalling Koopmans in 1965 through the introduction of an intertemporal maximization problem that determines the saving ratio in the economy. However, the road connecting Solow to the Ramsey-Cass-Koopmans model is not so straightforward. We argue that in order to understand Koopmans's contribution, we have to go to the activity analysis literature that started before Solow 1956 and never had him as a central reference. We stress the role played by Edmond Malinvaud, with whom Koopmans interacted closely, and take his journey from the French milieu of mathematical economics to the Cowles Commission in 1950-51 and back to France as a guiding thread. The rise of turnpike theory at the end of the 1950s generated a debate on the choice criteria of growth programs, opposing the productive efficiency typical of these models to the utilitarian approach supported by Malinvaud and Koopmans. The Vatican Conference of 1963, where Koopmans presented a first version of his 1965 model, was embedded in this debate. We argue that Malinvaud's (and Koopmans's) contributions were crucial to steer the activity analysis literature toward a utilitarian analysis of growth paths.
5

Jenkins, Porter, Ahmad Farag, J. Stockton Jenkins, Huaxiu Yao, Suhang Wang, and Zhenhui Li. "Neural Utility Functions." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7917–25. http://dx.doi.org/10.1609/aaai.v35i9.16966.

Abstract:
Current neural network architectures have no mechanism for explicitly reasoning about item trade-offs. Such trade-offs are important for popular tasks such as recommendation. The main idea of this work is to give neural networks inductive biases that are inspired by economic theories. To this end, we propose Neural Utility Functions, which directly optimize the gradients of a neural network so that they are more consistent with utility theory, a mathematical framework for modeling choice among items. We demonstrate that Neural Utility Functions recover theoretical item relationships better than vanilla neural networks, show analytically that existing neural networks are not quasi-concave and do not inherently reason about trade-offs, and show that augmenting existing models with a utility loss function improves recommendation results. The Neural Utility Functions we propose are theoretically motivated and yield strong empirical results.
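To make "optimizing the gradients" concrete, here is a hypothetical PyTorch-style sketch of one utility-theoretic inductive bias, monotonicity (non-negative marginal utility). It illustrates the general approach only; the penalty form, weighting, and names are my assumptions, not the authors' implementation.

```python
import torch

def monotonicity_penalty(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Penalize negative marginal utilities du/dx, a basic utility-theory property."""
    x = x.clone().requires_grad_(True)
    utility = model(x).sum()
    # Gradient of predicted utility w.r.t. inputs; create_graph=True keeps the
    # penalty differentiable so it can be minimized alongside the task loss.
    (grads,) = torch.autograd.grad(utility, x, create_graph=True)
    return torch.relu(-grads).mean()

# Hypothetical combined objective:
# loss = recommendation_loss + 0.1 * monotonicity_penalty(model, batch)
```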
6

Ahmed, Asad, Osman Hasan, Falah Awwad, and Nabil Bastaki. "Formalization of Cost and Utility in Microeconomics." Energies 13, no. 3 (February 6, 2020): 712. http://dx.doi.org/10.3390/en13030712.

Abstract:
Cost and utility modeling of economic agents based on differential theory is fundamental to the analysis of microeconomic models. In particular, the first- and second-order derivative tests are used to specify the desired properties of the cost and utility models. Traditionally, paper-and-pencil proof methods and computer-based tools are used to investigate the mathematical properties of these models. However, these techniques do not provide an accurate analysis due to their inability to exhaustively specify and verify the mathematical properties of the cost and utility models. Additionally, these techniques cannot accurately model and analyze the purely continuous behaviors of economic agents due to their reliance on computer arithmetic. On the other hand, an accurate analysis is sorely needed in many safety- and cost-critical microeconomics applications, such as agriculture and smart grids. To overcome the issues with the above-mentioned techniques, in this paper we propose a theorem-proving-based methodology to formally analyze and specify the mathematical properties of functions used in microeconomics modeling. The proposed methodology is primarily based on a formalization of the derivative tests and root analysis of polynomial functions within the sound core of the HOL Light theorem prover. We also provide a formalization of the first-order condition, which is used to analyze the maximum of the profit function, in a higher-order-logic theorem prover. We then present the formal analysis of the utility, cost and first-order condition based on polynomial functions. To illustrate its usefulness, the proposed formalization is used to formally analyze and verify the quadratic cost and utility functions that have been used in an optimal power flow problem and a demand response (DR) program, respectively.
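For reference, the derivative tests being formalized are the textbook optimality conditions: for a profit function π(q), an interior maximizer q* must satisfy

```latex
\pi'(q^*) = 0 \quad \text{(first-order condition)}, \qquad
\pi''(q^*) < 0 \quad \text{(second-order condition)}.
```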
7

Fumagalli, Roberto. "THE FUTILE SEARCH FOR TRUE UTILITY." Economics and Philosophy 29, no. 3 (October 15, 2013): 325–47. http://dx.doi.org/10.1017/s0266267113000291.

Abstract:
In traditional decision theory, utility is regarded as a mathematical representation of preferences to be inferred from agents’ choices. In the recent literature at the interface between economics, psychology and neuroscience, several authors argue that economists could develop more predictive and explanatory models by incorporating insights concerning individuals’ hedonic experiences. Some go as far as to contend that utility is literally computed by specific neural areas and urge economists to complement or substitute their notion of utility with some neuro-psychological construct. In this paper, I distinguish three notions of utility that are frequently mentioned in debates about decision theory and examine some critical issues regarding their definition and measurability. Moreover, I provide various empirical and conceptual reasons to doubt that economists should base decision theoretic analyses on some neuro-psychological notion of utility.
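The "traditional" notion referred to here is the standard ordinal representation: a utility function u represents a preference relation ⪰ exactly when

```latex
x \succsim y \;\Longleftrightarrow\; u(x) \ge u(y) \qquad \text{for all alternatives } x, y,
```

so utility carries no hedonic content beyond the choices it summarizes.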
8

Izsák, Andrew, Erik Jacobson, Zandra de Araujo, and Chandra Hawley Orrill. "Measuring Mathematical Knowledge for Teaching Fractions With Drawn Quantities." Journal for Research in Mathematics Education 43, no. 4 (July 2012): 391–427. http://dx.doi.org/10.5951/jresematheduc.43.4.0391.

Abstract:
Researchers have recently used traditional item response theory (IRT) models to measure mathematical knowledge for teaching (MKT). Some studies (e.g., Hill, 2007; Izsák, Orrill, Cohen, & Brown, 2010), however, have reported subgroups when measuring middle-grades teachers' MKT, and such groups violate a key assumption of IRT models. This study investigated the utility of an alternative called the mixture Rasch model that allows for subgroups. The model was applied to middle-grades teachers' performance on pretests and posttests bracketing a 42-hour professional development course focused on drawn models for fraction arithmetic. Results from psychometric modeling and evidence from video-recorded interviews and professional development sessions suggested that there were 2 subgroups of middle-grades teachers, 1 better able to reason with 3-level unit structures and 1 constrained to 2-level unit structures. Some teachers, however, were easier to classify than others.
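For context, the mixture Rasch model allows item parameters to differ across latent classes (notation mine): the probability that person i with proficiency θ_i in latent class g answers item j correctly is

```latex
P(X_{ij} = 1 \mid \theta_i, g) \;=\; \frac{\exp(\theta_i - b_{jg})}{1 + \exp(\theta_i - b_{jg})},
```

with class proportions π_g estimated alongside the item difficulties b_jg; the ordinary Rasch model is the one-class special case.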
9

Peng, Yichen, Jing Zhou, Qiang Xu, and Xiaoling Wu. "Cost Allocation in PPP Projects: An Analysis Based on the Theory of “Contracts as Reference Points”." Discrete Dynamics in Nature and Society 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/158765.

Abstract:
In recent years, the demand for infrastructure has been largely driven by the economic development of many countries. The public-private partnership (PPP) has proved to be an efficient way to draw private capital into public utility construction, where ownership allocation becomes one of the most important clauses for both the government and the private investor. In this paper, we establish mathematical models to analyze the equity allocation problem of PPP projects through a comparison of models with and without the effects of the theory of "contracts as reference points." We then derive some important conclusions from the optimal solution of the investment ratio.
10

Ilyina, Elena A., and Leonid A. Saraev. "On the theory of optimization of transaction costs of multifactor manufacturing enterprises." Vestnik of Samara University. Economics and Management 12, no. 4 (December 30, 2021): 182–94. http://dx.doi.org/10.18287/2542-0461-2021-12-4-182-194.

Abstract:
This article proposes mathematical models for calculating the optimal profit of multifactor manufacturing enterprises that incur both production (transformational) and certain non-production (transactional) costs, whose sources may be the forced costs of searching for economic information, measuring the parameters of various goods, negotiating and concluding contracts, developing specifications and protecting property rights, and the opportunistic behavior of employees and managers of the enterprise. A numerical analysis of the presented models for calculating the optimal profit of multifactor enterprises that bear transaction costs shows that the maximum possible profit values are unattainable, since in practice the enterprise management maximizes not profit itself but its utility, expressed in the form of the corresponding transaction function.
11

Kaplan, Ryan, Joseph Klobušický, Shivendra Pandey, David H. Gracias, and Govind Menon. "Building Polyhedra by Self-Assembly: Theory and Experiment." Artificial Life 20, no. 4 (October 2014): 409–39. http://dx.doi.org/10.1162/artl_a_00144.

Abstract:
We investigate the utility of a mathematical framework based on discrete geometry to model biological and synthetic self-assembly. Our primary biological example is the self-assembly of icosahedral viruses; our synthetic example is surface-tension-driven self-folding polyhedra. In both instances, the process of self-assembly is modeled by decomposing the polyhedron into a set of partially formed intermediate states. The set of all intermediates is called the configuration space, pathways of assembly are modeled as paths in the configuration space, and the kinetics and yield of assembly are modeled by rate equations, Markov chains, or cost functions on the configuration space. We review an interesting interplay between biological function and mathematical structure in viruses in light of this framework. We discuss in particular: (i) tiling theory as a coarse-grained description of all-atom models; (ii) the building game—a growth model for the formation of polyhedra; and (iii) the application of these models to the self-assembly of the bacteriophage MS2. We then use a similar framework to model self-folding polyhedra. We use a discrete folding algorithm to compute a configuration space that idealizes surface-tension-driven self-folding and analyze pathways of assembly and dominant intermediates. These computations are then compared with experimental observations of a self-folding dodecahedron with side 300 μm. In both models, despite a combinatorial explosion in the size of the configuration space, a few pathways and intermediates dominate self-assembly. For self-folding polyhedra, the dominant intermediates have fewer degrees of freedom than comparable intermediates, and are thus more rigid. The concentration of assembly pathways on a few intermediates with distinguished geometric properties is biologically and physically important, and suggests deeper mathematical structure.
12

Zhang, Cui-Hua, Peng Xing, and Jin Li. "Optimal Strategy of Social Responsibility and Quality Effort in Service Supply Chain with Quality Preference." Asia-Pacific Journal of Operational Research 35, no. 03 (May 31, 2018): 1850018. http://dx.doi.org/10.1142/s0217595918500185.

Abstract:
We investigate the optimal strategy of a service supply chain (SSC) comprising one integrator and two suppliers under a two-layer game structure. The service integrator decides on social responsibility and service price, while the two service suppliers with quality preference determine their respective quality efforts. By analyzing the two-layer game structure and eight different scenarios of decision models (i.e., CD, DD, ICD, IDD, ISD, SCD, SDD, and SSD), we establish the members' utility functions under the different decision models. Meanwhile, based on game theory, the optimal strategies of the SSC are obtained. Mathematical reasoning and numerical simulations show that, first, quality preference affects the optimal strategy and members' utilities under different constraints; second, the utility of the supply chain with the integrator as leader is greater than in the case with the suppliers as leaders.
13

Lapshina, M. L., O. O. Lukina, and D. D. Lapshin. "Using mathematical models in a disequilibrium economy with offsetting demand." Proceedings of the Voronezh State University of Engineering Technologies 82, no. 1 (May 15, 2020): 369–79. http://dx.doi.org/10.20914/2310-1202-2020-1-369-379.

Abstract:
When modeling a nonequilibrium economy, the behavior of participants is described by the same optimization problems, including the criterion and internal technological and budgetary constraints, as in the theory of Walrasian equilibrium. They are only supplemented by external restrictions on the purchase (or sale) of scarce (slow-moving) products. Various principles are known for establishing these boundaries. They can be fixed (a rigid rationing scheme) and not depend directly on the decisions of the participant, or be determined by the demand the participant expresses (a flexible scheme). The demand submitted for rationed products, as a rule, does not coincide with the Walrasian one. We will call it an order. In well-known models, the order is considered equal to active demand. The concept of active demand has been used successfully in price control models. However, it is not an object of choice for participants aiming to optimize their criteria. Meanwhile, it seems natural that manufacturers and consumers, seeking to maximize utility, are free to choose order sizes at their own discretion. Modeling the situation arising from this approach is the goal of the present work and is based on a modification of the rationing scheme proposed by J.-P. Bénassy. The work also considers equilibrium models at fixed prices, in which participants, when forming demand, take into account the scarcity of products and the level of satisfaction of orders. The models are used to assess the impact of taxes, government spending, and other macro-regulators on employment and national income. The paper provides an overview of the literature in the subject area, as well as an economic interpretation of the results.
14

Baxter, Gareth J., Richard A. Blythe, William Croft, and Alan J. McKane. "Modeling language change: An evaluation of Trudgill's theory of the emergence of New Zealand English." Language Variation and Change 21, no. 2 (July 2009): 257–96. http://dx.doi.org/10.1017/s095439450999010x.

Abstract:
Trudgill (2004) proposed that the emergence of New Zealand English, and of isolated new dialects generally, is purely deterministic. It can be explained solely in terms of the frequency of occurrence of particular variants and the frequency of interactions between different speakers in the society. Trudgill's theory is closely related to usage-based models of language, in which frequency plays a role in the representation of linguistic knowledge and in language change. Trudgill's theory also corresponds to a neutral evolution model of language change. We use a mathematical model based on Croft's usage-based evolutionary framework for language change (Baxter, Blythe, Croft, & McKane, 2006), and investigate whether Trudgill's theory is a plausible model of the emergence of new dialects. The results of our modeling indicate that determinism cannot be a sufficient mechanism for the emergence of a new dialect. Our approach illustrates the utility of mathematical modeling of theories and of empirical data for the study of language change.
15

Ran, Zhi-Yong, and Bao-Gang Hu. "Parameter Identifiability in Statistical Machine Learning: A Review." Neural Computation 29, no. 5 (May 2017): 1151–203. http://dx.doi.org/10.1162/neco_a_00947.

Abstract:
This review examines the relevance of parameter identifiability for statistical models used in machine learning. In addition to defining main concepts, we address several issues of identifiability closely related to machine learning, showing the advantages and disadvantages of state-of-the-art research and demonstrating recent progress. First, we review criteria for determining the parameter structure of models from the literature. This has three related issues: parameter identifiability, parameter redundancy, and reparameterization. Second, we review the deep influence of identifiability on various aspects of machine learning from theoretical and application viewpoints. In addition to illustrating the utility and influence of identifiability, we emphasize the interplay among identifiability theory, machine learning, mathematical statistics, information theory, optimization theory, information geometry, Riemannian geometry, symbolic computation, Bayesian inference, algebraic geometry, and others. Finally, we present a new perspective together with the associated challenges.
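The core definition the review builds on: a parametric family of distributions {p(x; θ)} is (globally) identifiable when the map from parameters to distributions is injective, that is,

```latex
p(x; \theta_1) = p(x; \theta_2) \;\;\text{for all } x \quad \Longrightarrow \quad \theta_1 = \theta_2 .
```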
16

Beskorovainyi, Vladimir V., Lubomyr B. Petryshyn, and Vasyl О. Honcharenko. "Mathematical models of a multi-criteria problem of reengineering topological structures of ecological monitoring networks." Applied Aspects of Information Technology 5, no. 1 (April 17, 2022): 11–24. http://dx.doi.org/10.15276/aait.05.2022.1.

Abstract:
The article deals with the theoretical aspects of the problem of reengineering the topological structures of terrestrial ecological monitoring networks. The analysis of the current state of the problem revealed the need to change the network of monitoring points, to raise the requirements for the efficiency and accuracy of observations, and to adopt more advanced technologies for collecting, processing, storing and transmitting information. All of this can be achieved through the reengineering of existing monitoring networks. This requires improving network system optimization technologies and their software to account for the features of the reengineering problem, as well as the mathematical models and methods used for it. To solve the problem of reengineering terrestrial monitoring networks, an aggregative-decomposition approach is proposed. The problem is divided into a set of tasks, considering their interconnections in terms of input and output data. This made it possible to define the set of tasks that form the basis of the reengineering procedures. To increase the efficiency of technologies for the computer-aided design and reengineering of networks, a set of mathematical models is proposed that covers the main stages of their life cycles. The article discusses: a systemological model of the iterative technology for obtaining design solutions; analytical models for evaluating the properties of network reengineering options in terms of efficiency, reliability, survivability and cost; models for identifying effective options for network reengineering based on the Karlin and Germeier theorems; a model for evaluating the local properties of options in the form of a utility function of local criteria; and a model of scalar multicriteria estimation of network reengineering options based on utility theory. The utility function makes it possible to implement both linear and non-linear (including Z- and S-shaped) dependencies on the criteria values. For the practical implementation of models of multicriteria problems of reengineering the topological structures of networks, it is proposed to identify effective design solutions in parallel with their generation and to use a method of comparator-based parametric synthesis of the scalar multicriteria estimation function. The performance and efficiency of the proposed mathematical models and methods are demonstrated by examples of selecting a subset of Pareto-optimal options for building networks and of parametric synthesis of the scalar multicriteria estimation function. The practical application of the proposed set of models and methods will increase the degree of automation of network reengineering processes, reduce the time for solving the multi-criteria choice problem by reducing the time complexity of the analysis procedures, and increase the stability of the decisions made by restricting the choice to a subset of effective options.
17

Petrova, Irina, Olga Shikulskaya, and Mikhail Shikulskiy. "Conceptual Modeling Methodology of Multifunction Sensors on the Basis of a Fractal Approach." Advanced Materials Research 875-877 (February 2014): 951–56. http://dx.doi.org/10.4028/www.scientific.net/amr.875-877.951.

Abstract:
The development of multifunction sensors makes the application of traditional design methods inefficient. The purpose of this work is to create an effective approach to the analysis and synthesis of the physical functional principle of sensors and, on its basis, to create mathematical, algorithmic and informational software for automating the conceptual design of multifunction quantity sensors. The authors propose a solution method based on applying a fractal approach to the theory of energy-information circuit models. For this purpose, a fractal concept for modeling the physical functional principle of sensors and mechanisms for its realization are developed. The results of this work are a new computer technique and software for the automatic synthesis of the physical functional principle of multifunction sensors, together with two new multifunction sensor utility models generated automatically by the developed software.
18

Sohrabi, Arya, Mir Saman Pishvaee, Ashkan Hafezalkotob, and Shahrooz Bamdad. "A multi-attribute model to optimize the price and composition of prepaid mobile Internet plans." Journal of Enterprise Information Management 33, no. 5 (July 16, 2020): 1257–91. http://dx.doi.org/10.1108/jeim-09-2019-0279.

Abstract:
Purpose: Prepaid mobile Internet is one of the most profitable services and is composed of multiple attributes. The overall utility of an Internet service can be broken down into the sum of the utilities of individual attribute levels. According to multi-attribute utility theory, rational consumers choose the service that yields the highest utility from a number of possible alternatives. Determining the optimal attribute levels that satisfy consumers' preferences and maximize the total revenue of the firm is a challenging multi-attribute decision problem for any mobile operator. When designing mobile Internet services, adopting a composition of services that is robust against different realizations of competitors' strategies can bring advantages for network operators. The purpose of this study is to determine the optimal attribute levels of prepaid mobile Internet packages with the aim of maximizing the total revenue of the firm, considering the paradigms of multi-attribute utility theory about consumer choices and the uncertainty in counterpart services offered by competitors.
Design/methodology/approach: This paper formulates the problem of multi-attribute pricing and design of mobile Internet plans in a competitive environment by developing deterministic and robust scenario-based mathematical models grounded in multi-attribute utility theory. The proposed robust scenario-based models follow three different paradigms: maximizing expected revenue, minimizing the negative deviation from expected revenue, and minimizing the maximum regret. A comprehensive numerical analysis is conducted to evaluate and compare the efficiency of the proposed models.
Findings: The evaluations reveal that deploying a recourse policy can result in higher revenue for the firm when facing uncertainty. Through sensitivity analysis, this paper shows that consumer preferences for the brand attribute and consumers' purchase frequency can influence the revenue of network operators.
Originality/value: This paper develops a novel deterministic multi-attribute product line design (PLD) model to address the problem of determining the price and composition of prepaid mobile Internet plans. Furthermore, the uncertainty in counterpart services offered by competitors is studied for the first time in the PLD literature.
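As a toy illustration of the additive multi-attribute scoring described above (part-worth utilities invented for the example, not estimated in the paper): each plan is scored as the sum of its attribute-level utilities, and a rational consumer picks the highest-scoring plan.

```python
# Assumed part-worth utilities for each attribute level (illustrative only).
part_worth = {
    "data_gb":       {1: 0.1, 5: 0.5, 10: 0.9},
    "price_usd":     {5: 0.9, 10: 0.5, 15: 0.2},
    "validity_days": {7: 0.2, 30: 0.6},
}

def utility(plan: dict) -> float:
    """Additive multi-attribute utility: sum of attribute-level part-worths."""
    return sum(part_worth[attr][level] for attr, level in plan.items())

plans = [
    {"data_gb": 5,  "price_usd": 10, "validity_days": 30},
    {"data_gb": 10, "price_usd": 15, "validity_days": 30},
]
best = max(plans, key=utility)
print(best, utility(best))  # the plan a utility-maximizing consumer chooses
```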
19

Kaur, Manbir, and Inderdeep Singh. "Comprehensive review of numerical schemes based on Hermite wavelets." World Journal of Advanced Research and Reviews 15, no. 3 (September 30, 2022): 240–47. http://dx.doi.org/10.30574/wjarr.2022.15.3.0908.

Abstract:
Differential and integral equations are encountered in many applications of science and engineering, and many mathematical models have been formulated in terms of these equations. Owing to the shortcomings of existing numerical methods, researchers are working to find more efficient alternatives for solving the many practical and physical problems that give rise to differential or integral equations. As a result, wavelet methods have found their way into the numerical solution of these kinds of equations. This review paper therefore surveys the utility, accuracy and applicability of Hermite wavelets for problems in applied mathematics, physics, biology, optimal control systems, communication theory, queuing theory, medicine and many other scientific and engineering areas.
20

Beskorovainyi, Vladimir, and Oksana Draz. "Mathematical Models of Decision Support in the Problems of Logistics Networks Optimization." Innovative Technologies and Scientific Solutions for Industries, no. 4 (18) (December 10, 2021): 5–14. http://dx.doi.org/10.30837/itssi.2021.18.005.

Abstract:
The subject of research in the article is the process of decision support in logistics network optimization problems. The goal of the work is to develop a set of mathematical models for logistics network optimization problems that increases the efficiency of decision support systems by coordinating the interaction between the automatic and interactive procedures of computer-aided design systems. The following tasks are solved: a review and analysis of the current state of the problem of decision support in logistics network optimization; decomposition of the decision support problem for logistics network optimization; development of a mathematical model of the general network optimization problem in terms of economy, efficiency, reliability and survivability; and development of a set of technological mathematical models for correctly reducing the set of effective options for building logistics networks for the final choice, taking into account difficult-to-formalize factors and the knowledge and experience of the decision maker (DM). The following methods are used: systems theory, utility theory, optimization and operations research. Results. Analysis of the current state of the logistics network optimization problem established the existence of the problem of correctly reducing a subset of effective construction options for ranking, taking into account difficult-to-formalize factors and the knowledge and experience of the DM. The problem is decomposed into the following tasks: defining the principles of network construction; selecting the network structure; determining the topology of network elements; choosing the network operation technology; determining the parameters of elements and communications (means of cargo delivery); and multi-criteria evaluation and selection of the best option for building the network. A mathematical model of the general network optimization problem in terms of economy, efficiency, reliability and survivability is proposed. To coordinate the interaction between automatic and interactive network optimization procedures, a combined method of ranking options is proposed, which makes it possible to identify and correctly reduce the subset of effective options ranked by the DM. To implement the method, mathematical models for the option-ranking procedure in project decision support technologies have been developed, which combine the advantages of the ordinal and cardinal approaches. Conclusions. The developed set of mathematical models expands the methodological basis for automating support of multi-criteria decisions on logistics network optimization and allows the correct reduction of the set of effective construction options for the final choice, taking into account the factors, knowledge and experience of the DM. The practical use of the proposed models and procedures will reduce the time and space complexity of decision support technologies and, through the proposed selection procedures, improve their quality across a variety of functional and cost indicators.
21

Antipina, Natalya. "Intertemporal Optimization Model of Entrepreneurs Behavior." Bulletin of Baikal State University 31, no. 2 (July 9, 2021): 216–20. http://dx.doi.org/10.17150/2500-2759.2021.31(2).216-220.

Abstract:
The intertemporal problem of consumer behavior is the basis of modern models. Interest in this kind of problem stems from the attempt to widen the range of directions for further mathematical research in consumption theory. The article considers the problem of maximizing the discounted utility an entrepreneur derives from consumption through the optimal allocation of the funds received as profit from his production company and as interest on assets. This problem differs from the basic dynamic problem of consumer behavior in that the entrepreneur, as an individual, acts in two roles: as a consumer and as a producer. Furthermore, the problem has two peculiarities: a distinctive budget constraint, which includes the production function and leads to an irregular differential relation, and the presence of mixed boundary conditions on the values of capital and assets. A formalization of the problem as a dynamic optimization model is given. It is studied using mathematical analysis and the tools of optimal control theory. Depending on the parameter relationships of the model, two strategies are identified that can be recommended to an entrepreneur as optimal. The model developed in the course of the research can serve as a decision-making tool, since it suggests optimal strategies for allocating an enterprise's financial means that maximize consumption utility.
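A generic statement of this class of problems, in notation assumed here for illustration (the paper's budget relation additionally embeds the production function):

```latex
\max_{c(\cdot)} \int_0^T e^{-\delta t}\, u(c(t))\, dt
\quad \text{subject to} \quad \dot a(t) = r\, a(t) + \pi(k(t)) - c(t),
```

where c is consumption, a denotes assets earning interest at rate r, and π(k) is the profit generated by the firm's capital k, with boundary conditions imposed on a and k.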
22

Riesgo, Laura, and José A. Gómez-Limón. "Análisis de escenarios de políticas para la gestión pública de la agricultura de regadío." Economía Agraria y Recursos Naturales 5, no. 9 (October 19, 2011): 81. http://dx.doi.org/10.7201/earn.2005.09.04.

Abstract:
In this paper we present a methodological approach to analyze the combination of different agricultural policy and irrigation water pricing alternatives. For this purpose we take into account that farmers consider a broad set of criteria simultaneously when making decisions. Thus, policy scenario simulations are carried out through multi-criteria mathematical programming models capable of simulating farmers' future behaviour. We have opted for models developed within Multi-Attribute Utility Theory (MAUT). It is also worth noting that the results obtained from the simulation models are not related only to farmers' decision variables (crop mixes). A set of relevant economic, social and environmental attributes related to public objectives is also obtained as a way of measuring the efficiency of the proposed policy scenarios. The results show the usefulness of this methodological approach for evaluating the combined pressures and impacts of both policies.
23

Welch, Stephen M., Zhanshan Dong, Judith L. Roe, and Sanjoy Das. "Flowering time control: gene network modelling and the link to quantitative genetics." Australian Journal of Agricultural Research 56, no. 9 (2005): 919. http://dx.doi.org/10.1071/ar05155.

Abstract:
Flowering is a key stage in plant development that initiates grain production and is vulnerable to stress. The genes controlling flowering time in the model plant Arabidopsis thaliana are reviewed. Interactions between these genes have been described previously by qualitative network diagrams. We mathematically relate environmentally dependent transcription, RNA processing, translation, and protein–protein interaction rates to resultant phenotypes. We have developed models (reported elsewhere) based on these concepts that simulate flowering times for novel A. thaliana genotype–environment combinations. Here we draw 12 contrasts between genetic network (GN) models of this type and quantitative genetics (QG), showing that both have equal contributions to make to an ideal theory. Physiological dominance and additivity are examined as emergent properties in the context of feed-forward networks, an instance of which is the signal-integration portion of the A. thaliana flowering time network. Additivity is seen to be a complex, multi-gene property with contributions from mass balance in transcript production, the feed-forward structure itself, and downstream promoter reaction thermodynamics. Higher level emergent properties are exemplified by critical short daylength (CSDL), which we relate to gene expression dynamics in rice (Oryza sativa). Next to be discussed are synergies between QG and GN relating to the quantitative trait locus (QTL) mapping of model coefficients. This suggests a new verification test useful in GN model development and in identifying needed updates to existing crop models. Finally, the utility of simple models is evinced by 80 years of QG theory and mathematical ecology.
24

Kujawski, Edouard. "How adjusting elicited health utilities after the fact can adversely affect shared decision making." F1000Research 11 (December 19, 2022): 1533. http://dx.doi.org/10.12688/f1000research.128862.1.

Abstract:
Background: The elicitation of inconsistent health-state utility values (HSUVs) is a prevalent problem. There are two approaches to address this problem: (1) intervention during the elicitation process to ensure that patients estimate consistent HSUVs; (2) no intervention during the elicitation process and inconsistent HSUVs are adjusted after the fact. This paper studies three models recently proposed for adjusting inconsistent HSUVs and consistent HSUVs that some may consider unrealistic. Analysis: The three models are analyzed using a sound theoretical framework: the mathematical equivalence of HSUVs elicited using the standard gamble and probabilities, the Fréchet bounds, and preference theory. It is proven that none of these models accounts for the Fréchet lower bound and health conditions that are preference substitutes. Results: A clinical vignette proves these models may recommend treatments that result in premature death over treatments that cause acceptable adverse effects. Conclusions: The three models are incorrect and may mislead patients and physicians to poor medical decisions. In the spirit of shared decision making, patients should be given the opportunity to reassess inconsistent HSUVs and confirm that the revised HSUVs reflect their preferences.
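For reference, the Fréchet bounds invoked in the analysis, stated for the joint probability of two events A and B, and applicable to standard-gamble HSUVs precisely because such utilities are mathematically equivalent to probabilities:

```latex
\max\left(0,\; p_A + p_B - 1\right) \;\le\; p_{A \cap B} \;\le\; \min\left(p_A,\; p_B\right).
```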
25

Bezkorovainyi, Volodymyr, Leonid Nefedov, and Vladimir Russkin. "Mathematical model of structural and topological optimization of logistics networks." Bulletin of Kharkov National Automobile and Highway University, no. 95 (December 16, 2021): 178. http://dx.doi.org/10.30977/bul.2219-5548.2021.95.0.178.

Abstract:
The subject of research in the article is the topological structures of closed-loop logistics networks. The goal of the article is to increase the efficiency of centralized logistics networks by developing a mathematical model for a two-criteria problem of optimizing topological structures in the process of their reengineering. The article solves the following tasks: analysis of the current state of the problem of structural and topological optimization of logistics networks; formalization of the problem of optimizing logistics networks as geographically distributed objects; synthesis of the objective functions of the mathematical model of the two-criterion optimization problem for centralized three-level topological structures of closed logistics networks at the reengineering stage; and development of the constraint system of that mathematical model. A function for evaluating the overall utility of options based on the Kolmogorov-Gabor polynomial is proposed. The following methods are used: methods of systems theory, methods of utility theory, optimization and operations research. The following results were obtained: an analysis of the current state of the problem of system optimization of logistics networks and of the mathematical models and methods for its solution; a formalization of the problem of structural and topological optimization of logistics networks as geographically distributed objects; and a mathematical model of the two-criterion problem of reengineering three-level topological structures of logistics networks in terms of costs and efficiency with integrated points of production and processing (originality). Conclusions. The analysis of the problem of optimizing the topological structures of logistics systems established that the problems of forward and reverse logistics are still treated as conditionally independent, which prevents effective global solutions. In the context of an expanding network of consumers, changing delivery volumes and the introduction of environmental restrictions, reengineering of the networks, involving their radical redesign, is proposed. The formulated statement and the developed mathematical model of the two-criterion (cost and efficiency) optimization problem for three-level topological structures with combined production and processing points will increase the efficiency of logistics networks with reverse flows by reducing the cost of reengineering (practical value).
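The Kolmogorov-Gabor polynomial named here as the basis of the overall-utility function has the general form (indices and coefficients in my notation):

```latex
U(x) \;=\; a_0 + \sum_{i} a_i x_i + \sum_{i} \sum_{j \ge i} a_{ij} x_i x_j + \sum_{i} \sum_{j \ge i} \sum_{k \ge j} a_{ijk} x_i x_j x_k + \dots,
```

where the x_i would be the local criterion estimates and the coefficients are fitted so that the scalar score reproduces the desired preference ordering.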
26

Kavrakov, I., D. Legatiuk, K. Gürlebeck, and G. Morgenthal. "A categorical perspective towards aerodynamic models for aeroelastic analyses of bridge decks." Royal Society Open Science 6, no. 3 (March 2019): 181848. http://dx.doi.org/10.1098/rsos.181848.

Abstract:
Reliable modelling in structural engineering is crucial for the serviceability and safety of structures. A huge variety of aerodynamic models for aeroelastic analyses of bridges poses natural questions on their complexity and thus, quality. Moreover, a direct comparison of aerodynamic models is typically either not possible or senseless, as the models can be based on very different physical assumptions. Therefore, to address the question of principal comparability and complexity of models, a more abstract approach, accounting for the effect of basic physical assumptions, is necessary. This paper presents an application of a recently introduced category theory-based modelling approach to a diverse set of models from bridge aerodynamics. Initially, the categorical approach is extended to allow an adequate description of aerodynamic models. Complexity of the selected aerodynamic models is evaluated, based on which model comparability is established. Finally, the utility of the approach for model comparison and characterization is demonstrated on an illustrative example from bridge aeroelasticity. The outcome of this study is intended to serve as an alternative framework for model comparison and impact future model assessment studies of mathematical models for engineering applications.
27

Ginevičius, Romualdas, and Algirdas Krivka. "APPLICATION OF GAME THEORY FOR DUOPOLY MARKET ANALYSIS." Journal of Business Economics and Management 9, no. 3 (September 30, 2008): 207–17. http://dx.doi.org/10.3846/1611-1699.2008.9.207-217.

Abstract:
The paper analyzes the application of game theory models to identify duopoly market equilibrium (quantities sold and market prices) and to evaluate and compare the results of enterprises in a market. The purpose of the analysis is to determine to what extent the theoretical models correspond to real life, that is, how reliable they are in supporting and estimating the decisions of duopoly companies, determining market prices and quantities sold, and evaluating a company's competitive position and possibilities for decision coordination. To describe equilibrium in discrete strategies, the "Prisoner's Dilemma" model is applied to a hypothetical market entrance game with possible side payments. Further analysis of the market entrance game incorporates the mixed-strategy "Matching Pennies" model for the case where no discrete-strategy equilibrium exists. Continuous strategies are described by analyzing a hypothetical duopoly with the Cournot, Stackelberg and Bertrand models. First- and second-mover advantage issues are raised by comparing the outcomes of dynamic Stackelberg and Bertrand games for a leader and a follower. The stability and utility of a cartel agreement for its participants are mathematically supported with the help of a multi-step repeated Cournot game. Having described, compared and applied the main game theory models to artificial duopoly market situations, the authors proceed to a comparative analysis of the models' weaknesses and the problems related to their practical application.
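A worked example of the three continuous-strategy models under the usual textbook assumptions (linear inverse demand P = a − b(q1 + q2) and a common constant marginal cost c; the numbers are invented for illustration):

```python
a, b, c = 100.0, 1.0, 10.0  # demand intercept, slope, marginal cost

# Cournot (simultaneous quantity choice): best responses intersect at
# q_i* = (a - c) / (3b) for each firm.
q_cournot = (a - c) / (3 * b)
p_cournot = a - b * (2 * q_cournot)

# Stackelberg (leader commits first, follower best-responds):
q_leader = (a - c) / (2 * b)
q_follower = (a - c) / (4 * b)

# Bertrand (price competition, homogeneous good): price is driven to cost.
p_bertrand = c

print(q_cournot, p_cournot)   # 30.0 units each, market price 40.0
print(q_leader, q_follower)   # 45.0 vs 22.5 -- the first-mover advantage
print(p_bertrand)             # 10.0 -- the zero-margin Bertrand outcome
```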
28

Petrov, Konstantin, Igor Kobzev, Oleksandr Orlov, Victor Kosenko, Alisa Kosenko, and Yana Vanina. "Devising a method for identifying the model of multi-criteria expert estimation of alternatives." Eastern-European Journal of Enterprise Technologies 4, no. 3(112) (August 31, 2021): 56–65. http://dx.doi.org/10.15587/1729-4061.2021.238020.

Abstract:
An approach to constructing mathematical models of individual multicriteria estimation is proposed, based on information about the ordering relations established by an expert over a set of alternatives. Structural identification of the estimation model using an additive utility function of alternatives is performed within the axiomatics of multi-attribute utility theory (MAUT). A method of parametric identification of the model, based on the ideas of the theory of comparative identification, has been developed. To determine the model parameters, the midpoint method is used, which makes it possible to obtain a uniform, stable solution of the problem. It is shown that in this case the parametric identification of the estimation model reduces to a standard linear programming problem. The scalar multicriteria estimates of alternatives obtained from the synthesized mathematical model make it possible to compare alternatives by their degree of efficiency and thus to choose "the best" or to rank them. A significant advantage of the proposed approach is the ability to use only non-numerical information about decisions already made by experts to identify the model parameters. This partially reduces the expert's subjective influence on the outcome of decision-making and lowers the cost of the expert estimation process. A verification method for the estimation model based on the principles of cross-validation has been developed. The results of computer modeling are presented; they confirm the effectiveness of the proposed method of parametric model identification for automating intelligent decision-making.
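To illustrate how ordering relations alone can pin down additive-utility weights through linear programming, here is a generic feasibility sketch (formulation and numbers are mine; the authors' midpoint method additionally governs which point of the feasible set is chosen):

```python
import numpy as np
from scipy.optimize import linprog

# Rows are alternatives, columns are per-criterion utilities u_i(x) in [0, 1].
scores = np.array([
    [0.9, 0.2, 0.5],
    [0.4, 0.8, 0.6],
    [0.3, 0.3, 0.9],
])
rankings = [(0, 1), (1, 2)]  # expert's orderings: alt0 > alt1, alt1 > alt2
eps = 1e-3                   # margin each stated ordering must satisfy

# w @ (scores[a] - scores[b]) >= eps  rewritten as  w @ (s[b] - s[a]) <= -eps
A_ub = np.array([scores[b] - scores[a] for a, b in rankings])
b_ub = np.full(len(rankings), -eps)
A_eq, b_eq = np.ones((1, 3)), np.array([1.0])  # weights sum to one

res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3)
print(res.x)  # a weight vector consistent with every stated ordering
```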
29

Bland, James R. "Measuring and Comparing Two Kinds of Rationalizable Opportunity Cost in Mixture Models." Games 11, no. 1 (December 19, 2019): 1. http://dx.doi.org/10.3390/g11010001.

Abstract:
In experiments of decision-making under risk, structural mixture models allow us to take a menu of theories about decision-making to the data, estimating the fraction of people who behave according to each model. While studies using mixture models typically focus only on how prevalent each of these theories is in people's decisions, they can also be used to assess how much better this menu of theories organizes people's utility than does just one theory on its own. I develop a framework for calculating and comparing two kinds of rationalizable opportunity cost from these mixture models. The first is associated with model mis-classification: How much worse off is a decision-maker if they are forced to behave according to model A, when they are in fact a model B type? The second relates to the mixture model's probabilistic choice rule: How much worse off are subjects because they make probabilistic, rather than deterministic, choices? If the first quantity dominates, then one can conclude that model A constitutes an economically significant departure from model B in the utility domain. On the other hand, if the second cost dominates, then models A and B have similar utility implications. I demonstrate this framework on data from an existing experiment on decision-making under risk.
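A toy version of the first kind of opportunity cost (utility function and numbers invented for illustration, not taken from the paper): the certainty-equivalent loss suffered by an expected-utility type with square-root utility (model B) who is forced to make the risk-neutral choice (model A).

```python
import math

def u(x: float) -> float:           # model B: CRRA utility with exponent 0.5
    return math.sqrt(x)

def certainty_equivalent(eu: float) -> float:
    return eu ** 2                  # inverse of u

safe = 45.0                         # sure payoff; model B's preferred choice
risky = [(0.5, 100.0), (0.5, 0.0)]  # EV = 50, so risk-neutral model A picks it

eu_risky = sum(p * u(x) for p, x in risky)  # 5.0
ce_risky = certainty_equivalent(eu_risky)   # 25.0

# Forcing the model B type into model A's choice costs 20 in CE terms.
print(safe - ce_risky)  # 20.0
```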
30

Kulkarni, Ankur Arun. "Game theory for best results in academics." Journal of Engineering Education Transformations 35, no. 4 (April 1, 2022): 157–62. http://dx.doi.org/10.16920/jeet/2022/v35i4/22115.

Abstract:
Game Theory (GT) is a technique for analyzing problems, studying and evolving strategies, and reasoning about the rational decision processes of individuals and their interactions within a group. Game theory focuses on the different ways in which strategic interactions among economic agents produce outcomes with respect to the preferences (or utilities) of those agents, whether or not the results were intended. GT is thus mostly used to study mathematical models of variation, uncertainty, conflict and interaction among intelligent, rationally deciding subjects, i.e., human beings. The teaching-learning process involves large numbers of students and faculty, which results in a large number of variables: many students with unpredictable behaviors and faculty with different teaching methods make it difficult to achieve the best academic results. A study is therefore undertaken in which game theory is adapted to evolve an academic system that makes the academic process engaging for its participants, the students and faculty. The basic reason for choosing GT is that it allows users to consider different variations in the chosen model. The objective of this study is to evolve a composite optimum strategy for the faculty member in order to achieve the best possible result for any fragmented group of students. The analysis is carried out for subjects in which mathematics is used heavily, such as economics and financial analysis and management. Key words: game theory, strategy, risk, career, decision, higher education, fusion, check and balance
31

Ferrara, Philippe, and Edgar Jacoby. "Evaluation of the utility of homology models in high throughput docking." Journal of Molecular Modeling 13, no. 8 (May 9, 2007): 897–905. http://dx.doi.org/10.1007/s00894-007-0207-6.

32

Körtesi, Péter, Zsolt Simonka, Zsuzsanna Katalin Szabo, Jan Guncaga, and Ramona Neag. "Challenging Examples of the Wise Use of Computer Tools for the Sustainability of Knowledge and Developing Active and Innovative Methods in STEAM and Mathematics Education." Sustainability 14, no. 20 (October 11, 2022): 12991. http://dx.doi.org/10.3390/su142012991.

Abstract:
The rapid changes in information and communication technology (ICT), the increasing availability of processing power, and the complexity of mathematical software demand a radical re-thinking of science, technology, engineering, arts, and mathematics (STEAM), as well as mathematics education. In the transition to technology-based classrooms, the constant use of educational software is a requirement for sustainable STEAM and mathematics education. This software supports a collaborative and actionable learning environment, develops 21st-century skills, and promotes the adoption of active and innovative methodologies. This paper focuses on learning and teaching mathematics and analyzes the role and utility of ICT tools in education as computer algebra systems (CAS) and dynamic geometry systems (DGS) in implementing active and innovative teaching methodologies related to sustainable STEAM education. Likewise, it highlights the necessity for learners to have extensive knowledge of mathematical theory, an essential asset to ensure the reliable and effective use of mathematical software. Through a practical experiment, this study aims to highlight that a mixed teaching method can significantly improve the sustainability of math knowledge. It provides various solid examples of CAS and DGS applications to emphasize its usage rooted in a mathematical background to enable learners to identify when the computer solution is unreliable. The study highlights that the proper use of CAS and DGS is an efficient method of deepening our understanding of mathematical notions and solving tasks in STEAM subjects and real-life applications. This paper’s goal is to direct our attention to the proper and intelligent use of computer tools, especially symbolic calculators, such as CAS and DGS, without providing an in-depth analysis of the challenges of these technologies. The outcomes of the paper should offer educators and learners new elements of active strategies and innovative learning models that can be immediately applied in education.
33

Fujimoto, Ken A. "The Bayesian Multilevel Trifactor Item Response Theory Model." Educational and Psychological Measurement 79, no. 3 (November 17, 2018): 462–94. http://dx.doi.org/10.1177/0013164418806694.

Full text
Abstract:
Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that controls for when the method effects stem from two method sources in which one source functions differently across the aspects of another source (i.e., a nested method–source interaction). For this study, then, a Bayesian IRT model is proposed, one that accounts for such interaction among method sources while controlling for the clustering of individuals within the sample. The proposed model accomplishes these tasks by specifying a multilevel trifactor structure for the latent trait space. Details of simulations are also reported. These simulations demonstrate that this model can identify when item response data represent a multilevel trifactor structure, and it does so in data from samples as small as 250 cases nested within 50 clusters. Additionally, the simulations show that misleading estimates for the item discriminations could arise when the trifactor structure reflected in the data is not correctly accounted for. The utility of the model is also illustrated through the analysis of empirical data.
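For orientation, a minimal sketch of the two-parameter logistic item response function, the IRT building block such models extend; in a (tri)factor structure the exponent becomes a weighted sum of general, cluster-specific, and method-specific factors (parameter values below are illustrative):

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT: probability of endorsing an item.

    theta: latent trait value(s); a: item discrimination; b: item difficulty.
    Multilevel trifactor models replace a*theta with a weighted combination
    of a general factor plus cluster- and method-specific factors.
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# A moderately discriminating item of average difficulty.
print(irt_2pl(np.array([-1.0, 0.0, 1.0]), a=1.2, b=0.0))
```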
APA, Harvard, Vancouver, ISO, and other styles
34

Kariya, Yoshiaki, Masashi Honma, Keita Tokuda, Akihiko Konagaya, and Hiroshi Suzuki. "Utility of constraints reflecting system stability on analyses for biological models." PLOS Computational Biology 18, no. 9 (September 9, 2022): e1010441. http://dx.doi.org/10.1371/journal.pcbi.1010441.

Full text
Abstract:
Simulating complex biological models consisting of multiple ordinary differential equations can aid in the prediction of pharmacological/biological responses; however, such simulations are often hampered by the scarcity of reliable kinetic parameters. In the present study, we aimed to discover the properties of behaviors without determining an optimal combination of kinetic parameter values (parameter set). The key idea was to collect as many parameter sets as possible. Given that many systems are biologically stable and resilient (BSR), we focused on the dynamics around the steady state and formulated objective functions for BSR by partial linear approximation of the focused region. Using the objective functions and a modified global cluster Newton method, we developed an algorithm for a thorough exploration of the allowable parameter space for biological systems (TEAPS). We first applied TEAPS to the NF-κB signaling model. This system shows a damped oscillation after stimulation and seems to fit the BSR constraint. By applying TEAPS, we found several directions in parameter space which stringently determine the BSR property. In such directions, the experimentally fitted parameter values were included in the range of the obtained parameter sets. The arachidonic acid metabolic pathway model was used as a model related to pharmacological responses. The pharmacological effects of nonsteroidal anti-inflammatory drugs were simulated using the parameter sets obtained by TEAPS. The structural properties of the system were partly extracted by analyzing the distribution of the obtained parameter sets. In addition, the simulations showed inter-drug differences in the prostacyclin to thromboxane A2 ratio, such that aspirin treatment tends to increase the ratio, while rofecoxib treatment tends to decrease it. These trends are comparable to the clinical observations. These results on real biological models suggest that parameter sets satisfying the BSR condition can help in finding biologically plausible parameter sets and understanding the properties of biological systems.
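A hedged toy sketch of the core idea, collecting every sampled parameter set whose steady state is locally stable rather than fitting one optimum; the two-variable model and sampling ranges below are invented and are far simpler than the paper's TEAPS algorithm:

```python
import numpy as np

# Toy positive-feedback loop: dx/dt = k1 + k2*y - k3*x, dy/dt = k4*x - k5*y.
# The Jacobian is constant, so local stability reduces to its eigenvalues.
rng = np.random.default_rng(0)
accepted = []
for _ in range(10000):
    k = rng.uniform(0.1, 10.0, size=5)     # k[0]..k[4] correspond to k1..k5
    jac = np.array([[-k[2],  k[1]],
                    [ k[3], -k[4]]])
    if np.all(np.linalg.eigvals(jac).real < 0):   # stable, resilient regime
        accepted.append(k)
print(f"{len(accepted)} of 10000 sampled parameter sets pass the stability filter")
```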
APA, Harvard, Vancouver, ISO, and other styles
35

Cook, Eli. "THE NEOCLASSICAL CLUB: IRVING FISHER AND THE PROGRESSIVE ORIGINS OF NEOLIBERALISM." Journal of the Gilded Age and Progressive Era 15, no. 3 (July 2016): 246–62. http://dx.doi.org/10.1017/s1537781416000104.

Full text
Abstract:
In examining the mathematical models, theories of value, and price statistics wielded by leading economist and social reformer Irving Fisher, this article explores the overlooked impact that Neoclassical Economics had on Progressive Era reform and thought. By offering a neoclassical theory of marginal utility that claimed that market prices reflected subjective value, Fisher formalized, legitimized, and popularized the use of price statistics in progressive political discourse, teaching the American people that if they wanted to argue over the nature of progress or the worthiness of a certain reform, they would have to price it first. The article argues that such a “pricing of progressivism” served as an important foundational precursor to the rise of neoliberal thought in the 1980s. In light of such a significant intellectual legacy, it seems imperative that intellectual historians of the Progressive Era turn their attention away from the usual suspects of this period, such as Pragmatists William James and John Dewey, and shift their analytical focus away from the “Metaphysical Club” and toward a neoclassical one.
APA, Harvard, Vancouver, ISO, and other styles
36

Kazibudzki, Pawel Tadeusz. "AN EXPONENTIAL AND LOGARITHMIC UTILITY APPROACH TO EIGENVECTOR METHOD IN THE ANALYTIC HIERARCHY PROCESS: THE IDEA VALIDATION WITH EXAMPLE LOGISTIC APPLICATION." Problems of Management in the 21st Century 2, no. 1 (December 5, 2011): 85–94. http://dx.doi.org/10.33225/pmc/11.02.85.

Full text
Abstract:
Thorough inventory control, planning and coordination, and a wise, thoughtful selection of warehousing and means of transport are necessary for efficient decision making in logistics. In order to make optimal decisions, many options must be deliberated from the perspective of different criteria. It has been empirically verified that such problems are difficult to solve without methodological support, as they exceed human perception capabilities. However, some techniques, when successfully applied, can facilitate the decision-making process. One of the most commonly used techniques for structured problem solving is the Analytic Hierarchy Process. It is popular because its prescribed procedure rests on a sophisticated mathematical theory, the Eigenvalue Method, which is accurate and unique. This does not mean there are no other methods that can support the Analytic Hierarchy Process, although their application has pros and cons. Two relatively novel methods were proposed recently, and their validation studies form the essential part of this research. In comparison with others, their simplicity and compliance with the Eigenvalue Method are their most crucial, but not exclusive, advantages. At the end of the research, an example logistics problem was solved with their application, and the results of all methods were compared. Keywords: Analytic Hierarchy Process, eigenvalue method, optimization models, decision support techniques, selection of transportation modes.
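A minimal sketch of the Eigenvalue Method at the heart of the AHP: priorities are read off the principal eigenvector of a reciprocal pairwise comparison matrix (the comparison values below are illustrative):

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three transport modes
# (reciprocal, as AHP requires; the judgments are illustrative only).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Eigenvalue Method: priorities are the principal right eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = eigvals.real.argmax()
w = eigvecs[:, k].real
w = w / w.sum()                       # normalize to a priority vector

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1).
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
print("priorities:", w.round(3), " CI:", round(ci, 4))
```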
APA, Harvard, Vancouver, ISO, and other styles
37

Reza-Gharehbagh, Raziyeh, Ashkan Hafezalkotob, Ahmad Makui, and Mohammad Kazem Sayadi. "Government intervention policies in competition of financial chains: a game theory approach." Kybernetes 49, no. 3 (August 1, 2019): 960–81. http://dx.doi.org/10.1108/k-10-2018-0539.

Full text
Abstract:
Purpose: This study aims to analyze the competition of two financial chains (FCs) when the government intervenes in the financial market to prohibit excessively high interest rates by minimizing the arbitrage caused by speculative transactions. Each FC comprises an investor and one intermediary and attempts to finance capital-constrained firms. Design/methodology/approach: Using a Stackelberg game-theoretic framework and formulating two- and three-level optimization problems for six possible scenarios, the authors establish an integrative framework to evaluate the scenarios through the lens of the two main decision-making structures of the FCs (i.e. centralized and decentralized) and three policies of the government (i.e. speculation minimizing, revenue gaining and utility maximizing). Findings: Solving the problem results in optimal values for tariffs, which guarantee a stable competitive market. Consequently, policymaking by the government influences the decision variables, which is shown in a numerical study. The authors find that the government can orchestrate the FCs in the competitive market by imposing tariffs and prohibiting high interest rates via regulating the speculation impacts, which guarantees a stable market and facilitates the financing of capital-constrained firms. Research limitations/implications: This paper helps financial markets and governments control the interest rate by minimizing the speculation level. Originality/value: This paper investigates the impact of government intervention policies, as a leading player, on the competition of FCs, as followers, in providing financial services and making profits. The government imposes tariffs on the interest rate to stabilize the market by limiting speculative transactions. The paper presents the mathematical models of the optimization problems through the game-theoretic framework and a comparison of the scenarios through a numerical experiment.
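A hedged sketch of the backward-induction logic of such Stackelberg models, with invented functional forms standing in for the paper's two- and three-level optimization problems:

```python
import numpy as np

# The government (leader) sets a tariff t; a financial chain (follower) then
# chooses the interest rate r maximizing its after-tariff profit, and the
# leader optimizes while anticipating that best response.
rates = np.linspace(0.001, 0.3, 600)

def chain_profit(r, t):
    return r * (1.0 - 3.0 * r) - t * r**2     # loan revenue minus tariff cost

def government_payoff(r, t):
    return t * r**2 - 4.0 * r**2              # tariff revenue minus speculation harm

best_t, best_val = None, -np.inf
for t in np.linspace(0.0, 2.0, 201):
    r_star = rates[np.argmax(chain_profit(rates, t))]   # follower's best response
    val = government_payoff(r_star, t)
    if val > best_val:
        best_t, best_val = t, val
print(f"leader's optimal tariff t* = {best_t:.2f}")
```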
APA, Harvard, Vancouver, ISO, and other styles
38

Dehdashti, Shahram, Lauren Fell, Abdul Karim Obeid, Catarina Moreira, and Peter Bruza. "Bistable probabilities: a unified framework for studying rationality and irrationality in classical and quantum games." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 476, no. 2237 (May 2020): 20190839. http://dx.doi.org/10.1098/rspa.2019.0839.

Full text
Abstract:
This article presents a unified probabilistic framework that allows both rational and irrational decision-making to be theoretically investigated and simulated in classical and quantum games. Rational choice theory is a basic component of game-theoretic models, which assumes that a decision-maker chooses the best action according to their preferences. In this article, we define irrationality as a deviation from a rational choice. Bistable probabilities are proposed as a principled and straightforward means for modelling (ir)rational decision-making in games. Bistable variants of classical and quantum Prisoner’s Dilemma, Stag Hunt and Chicken are analysed in order to assess the effect of (ir)rationality on agent utility and Nash equilibria. It was found that up to three Nash equilibria exist for all three classical bistable games and maximal utility was attained when agents were rational. Up to three Nash equilibria exist for all three quantum bistable games; however, utility was shown to increase according to higher levels of agent irrationality.
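As a small concrete companion, the sketch below enumerates the pure-strategy Nash equilibria of the classical (non-bistable) Prisoner's Dilemma, using standard illustrative payoffs:

```python
import itertools
import numpy as np

# Classical Prisoner's Dilemma payoffs (standard illustrative values);
# strategy 0 = cooperate, 1 = defect.
R = np.array([[3, 0],
              [5, 1]])            # row player's payoff
C = R.T                           # symmetric game: column payoff at (i, j) is R[j, i]

def pure_nash(R, C):
    eqs = []
    for i, j in itertools.product(range(2), repeat=2):
        row_best = R[i, j] >= R[:, j].max()   # no profitable row deviation
        col_best = C[i, j] >= C[i, :].max()   # no profitable column deviation
        if row_best and col_best:
            eqs.append((i, j))
    return eqs

print(pure_nash(R, C))   # [(1, 1)]: mutual defection, despite lower joint utility
```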
APA, Harvard, Vancouver, ISO, and other styles
39

Gupta, Abhinav, and Pierre F. J. Lermusiaux. "Neural closure models for dynamical systems." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 477, no. 2252 (August 2021): 20201004. http://dx.doi.org/10.1098/rspa.2020.1004.

Full text
Abstract:
Complex dynamical systems are used for predictions in many domains. Because of computational costs, models are truncated, coarsened or aggregated. As the neglected and unresolved terms become important, the utility of model predictions diminishes. We develop a novel, versatile and rigorous methodology to learn non-Markovian closure parametrizations for known-physics/low-fidelity models using data from high-fidelity simulations. The new neural closure models augment low-fidelity models with neural delay differential equations (nDDEs), motivated by the Mori–Zwanzig formulation and the inherent delays in complex dynamical systems. We demonstrate that neural closures efficiently account for truncated modes in reduced-order-models, capture the effects of subgrid-scale processes in coarse models and augment the simplification of complex biological and physical–biogeochemical models. We find that using non-Markovian over Markovian closures improves long-term prediction accuracy and requires smaller networks. We derive adjoint equations and network architectures needed to efficiently implement the new discrete and distributed nDDEs, for any time-integration schemes and allowing non-uniformly spaced temporal training data. The performance of discrete over distributed delays in closure models is explained using information theory, and we find an optimal amount of past information for a specified architecture. Finally, we analyse computational complexity and explain the limited additional cost due to neural closure models.
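A heavily simplified sketch of the discrete-delay closure architecture: a low-fidelity model augmented with a term that sees both the current and a delayed state (the network weights here are untrained random placeholders; in the paper they are learned from high-fidelity data via adjoint equations):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)) * 0.1, np.zeros(8)   # placeholder weights
W2, b2 = rng.normal(size=(1, 8)) * 0.1, np.zeros(1)

def closure(x_now, x_delayed):
    """Tiny MLP closure term taking current and delayed state."""
    h = np.tanh(W1 @ np.array([x_now, x_delayed]) + b1)
    return (W2 @ h + b2)[0]

def low_fidelity(x):
    return -0.5 * x            # truncated/coarse dynamics

dt, tau = 0.01, 0.1
delay_steps = int(tau / dt)
history = [1.0] * (delay_steps + 1)     # constant initial history
for _ in range(500):
    x = history[-1]
    dxdt = low_fidelity(x) + closure(x, history[-1 - delay_steps])
    history.append(x + dt * dxdt)       # forward Euler step of the nDDE
print(f"state after 5 time units: {history[-1]:.4f}")
```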
APA, Harvard, Vancouver, ISO, and other styles
40

Tibes, Raoul, Natalie Meurice, Steven P. Anthony, YiHua Qiu, Steven M. Kornblau, Spyro Mousses, and Joachim Petit. "Mathematical Models To Identify Combined Protein and Clinical Biomarkers of Response to Induction Chemotherapy in Acute Myeloid Leukemia (AML)." Blood 110, no. 11 (November 16, 2007): 2399. http://dx.doi.org/10.1182/blood.v110.11.2399.2399.

Full text
Abstract:
Treatment of AML is guided by cytogenetics, age, patients’ medical condition and, increasingly, new molecular markers. However, no definitive algorithms and biomarkers predicting response to induction chemotherapy exist. We recently showed that distinct protein profiles of AML correlate with cytogenetics, FAB classes and treatment outcome for a class/group of patients as a whole (Tibes, ASH 2005; Kornblau, ASH 2006 and manuscript in submission). To further individualize therapy, we applied a set-based decision modeling methodology based on Rough Set Theory (RST) to our initial proteomic dataset of 34 (phospho)-proteins for 96 samples from 73 patients (Tibes, ASH 2005). The first model was built by partitioning discretized protein expression levels for the 96 samples into 2 outcome categories: complete remission (CR) vs. refractory to treatment; we generated reducts for all proteins and computed pseudo-cores. Several proteins (incl. phospho-NPM1, PTEN, phospho-AKT, p53, cyclin D1, etc.) were able to distinguish outcomes in terms of CR and refractoriness. A second model based on discretized clinical condition attributes (e.g., blast %, FAB, cytogenetics, age, sex, blood counts, etc.) yielded the most important clinical reducts. In a third model, protein expression and clinical attributes were combined to generate decision rules to predict outcome to induction therapy for each patient. For proof-of-concept purposes, the data set was divided into a training set and a test set. Prediction accuracy for CR vs. refractory state was in the 75% range for individual patients based on their pre-treatment clinical and proteomic attributes. In conclusion, predictive models obtained with RST yield reducts corresponding to a small but significant number of proteins (6–8) and clinical parameters, which can further be used to derive association rules capable of accurately predicting a patient’s response to induction therapy. The availability of several dozen distinct decision rules within each group (CR vs. refractory), based on protein expression and clinical characteristics, allows an individualized approach for each patient. This preliminary study is now being expanded on an independent proteomic dataset of 52 proteins for 258 AML samples (Kornblau, ASH 2006) to be presented at the Annual Meeting. The clinical utility lies in the fact that measuring expression levels of our protein biomarker profiles can be adopted for routine clinical testing (e.g., immunohistochemistry on diagnosis marrows) and can potentially help direct patients toward standard chemotherapy, upfront treatment on a clinical trial, or evaluation for early transplant, depending on the chance of response and relapse. Lastly, this model can be applied to predict other clinical outcomes (relapse vs. continuous CR), as well as to derive predictive signatures from gene expression datasets.
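A toy sketch of the reduct idea from Rough Set Theory used here: find minimal attribute subsets that still classify outcomes consistently (the decision table below is invented and vastly smaller than the study's proteomic data):

```python
from itertools import combinations

# Toy discretized decision table: each row = (attribute values, outcome).
# Attributes might be discretized protein levels; outcome CR vs refractory.
rows = [
    ((0, 1, 0), "CR"),
    ((0, 1, 1), "CR"),
    ((1, 0, 0), "refractory"),
    ((1, 1, 1), "refractory"),
    ((0, 0, 0), "CR"),
]

def consistent(attr_idx):
    """True if no two rows agree on the selected attributes yet disagree
    on the outcome (the subset preserves the classification)."""
    seen = {}
    for values, outcome in rows:
        key = tuple(values[i] for i in attr_idx)
        if seen.setdefault(key, outcome) != outcome:
            return False
    return True

n = len(rows[0][0])
reducts = [c for k in range(1, n + 1)
           for c in combinations(range(n), k)
           if consistent(c)
           and not any(consistent(s) for s in combinations(c, len(c) - 1))]
print(reducts)   # minimal attribute subsets that preserve the decision
```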
APA, Harvard, Vancouver, ISO, and other styles
41

Hasan, Zubair. "Maximization Postulates and Their Efficacy for Islamic Economics." American Journal of Islam and Society 19, no. 1 (January 1, 2002): 95–118. http://dx.doi.org/10.35632/ajis.v19i1.1977.

Full text
Abstract:
This paper examines the nature and role of maximization postulates concerning profit and utility in mainstream price theory formation, from a methodological perspective. Mainstream economics retains these postulates, despite much criticism, mainly for two reasons. Firstly, they help establish cause-effect linkages among economic variables and markets. In that, they greatly facilitate predictions and their empirical verification over a wide field of inquiry. Secondly, no other behavioral rule has so far been established that gives equally valid, if not superior, results over such a range. It is argued that the postulates are required in Islamic economics as well, for the same reasons. Maximization, per se, is not un-Islamic: what is maximized, how, and for what purpose are the real issues to investigate before passing judgment. Contrary to the current position in the literature, we find it preferable to include the moral values and social considerations of Islam in the assumptions of economic theorems, rather than attempting to include them in the objective elements of the models, until Islamic economics evolves as an independent subject. For maximization is a mathematical concept, and cannot fruitfully accommodate what cannot somehow be measured.
APA, Harvard, Vancouver, ISO, and other styles
42

Muse, Abdisalam Hassan, Samuel Mwalili, Oscar Ngesa, Christophe Chesneau, Huda M. Alshanbari, and Abdal-Aziz H. El-Bagoury. "Amoud Class for Hazard-Based and Odds-Based Regression Models: Application to Oncology Studies." Axioms 11, no. 11 (November 1, 2022): 606. http://dx.doi.org/10.3390/axioms11110606.

Full text
Abstract:
The purpose of this study is to propose a novel, general, tractable, fully parametric class for hazard-based and odds-based models of survival regression for the analysis of censored lifetime data, named the “Amoud class (AM)” of models. This generality was attained using a structure resembling the general class of hazard-based regression models, with the addition that the baseline odds function is multiplied by a link function. The class is broad enough to cover a number of widely used models, including the proportional hazard model, the general hazard model, the proportional odds model, the general odds model, the accelerated hazards model, the accelerated odds model, and the accelerated failure time model, as well as combinations of these. The proposed class incorporates the analysis of crossing survival curves. Based on a versatile parametric distribution (generalized log-logistic) for the baseline hazard, we introduced a technique for applying these various hazard-based and odds-based regression models. This distribution allows us to cover the most common hazard rate shapes in practice (decreasing, constant, increasing, unimodal, and reversible unimodal), and various common survival distributions (Weibull, Burr-XII, log-logistic, exponential) are its special cases. The proposed model has good inferential features, and it performs well when different information criteria and likelihood ratio tests are used to select hazard-based and odds-based regression models. The proposed model’s utility is demonstrated by an application to a right-censored lifetime dataset with crossing survival curves.
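A small sketch of the baseline hazard shapes the abstract mentions, using the ordinary log-logistic hazard (a special case of the generalized log-logistic baseline); parameter values are illustrative:

```python
import numpy as np

def loglogistic_hazard(t, shape, scale):
    """Log-logistic hazard:
    h(t) = (shape/scale) * (t/scale)**(shape - 1) / (1 + (t/scale)**shape).
    shape <= 1 gives a decreasing hazard; shape > 1 gives a unimodal one.
    """
    z = (t / scale) ** shape
    return (shape / scale) * (t / scale) ** (shape - 1) / (1.0 + z)

t = np.linspace(0.1, 5.0, 5)
print(loglogistic_hazard(t, shape=0.8, scale=1.0).round(3))  # decreasing
print(loglogistic_hazard(t, shape=2.5, scale=1.0).round(3))  # unimodal
```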
APA, Harvard, Vancouver, ISO, and other styles
43

Rass, Stefan, Sandra König, and Stefan Schauer. "Games over Probability Distributions Revisited: New Equilibrium Models and Refinements." Games 13, no. 6 (December 1, 2022): 80. http://dx.doi.org/10.3390/g13060080.

Full text
Abstract:
This article is an overview of recent progress on a theory of games, whose payoffs are probability distributions rather than real numbers, and which have their equilibria defined and computed over a (suitably restricted yet dense) set of distributions. While the classical method of defining game models with real-valued utility functions has proven strikingly successful in many domains, some use cases from the security area revealed shortcomings of the classical real-valued game models. These issues motivated the use of probability distributions as a more complex object to express revenues. The resulting class of games displays a variety of phenomena not encountered in classical games, such as games that have continuous payoff functions but still no equilibrium, or games that are zero-sum but for which fictitious play does not converge. We discuss suitable restrictions of how such games should be defined to allow the definition of equilibria, and show the notion of a lexicographic Nash equilibrium, as a proposed solution concept in this generalized class of games.
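As a hedged illustration of comparing distribution-valued payoffs, the sketch below prefers the loss distribution with less probability mass on the worst outcomes, comparing tail-first; this captures the spirit, though not the exact definition, of the ordering used in this line of work:

```python
def prefer_lex(p, q):
    """Return -1 if distribution p is preferred, 1 if q, 0 if tied.

    p, q are probability vectors over loss categories ordered from best to
    worst; comparison proceeds from the worst category downward.
    """
    for pi, qi in zip(reversed(p), reversed(q)):
        if pi != qi:
            return -1 if pi < qi else 1
    return 0

loss_p = [0.5, 0.3, 0.2]   # P(loss = low, medium, high)
loss_q = [0.4, 0.5, 0.1]
print(prefer_lex(loss_p, loss_q))   # 1: q has the lighter tail, so q is preferred
```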
APA, Harvard, Vancouver, ISO, and other styles
44

Tamposis, Ioannis A., Konstantinos D. Tsirigos, Margarita C. Theodoropoulou, Panagiota I. Kontou, Georgios N. Tsaousis, Dimitra Sarantopoulou, Zoi I. Litou, and Pantelis G. Bagos. "JUCHMME: a Java Utility for Class Hidden Markov Models and Extensions for biological sequence analysis." Bioinformatics 35, no. 24 (June 28, 2019): 5309–12. http://dx.doi.org/10.1093/bioinformatics/btz533.

Full text
Abstract:
Summary: JUCHMME is an open-source software package designed to fit arbitrary custom Hidden Markov Models (HMMs) with a discrete alphabet of symbols. We incorporate a large collection of standard algorithms for HMMs as well as a number of extensions and evaluate the software on various biological problems. Importantly, the JUCHMME toolkit includes several additional features that allow for easy building and evaluation of custom HMMs, which could be a useful resource for the research community. Availability and implementation: http://www.compgen.org/tools/juchmme, https://github.com/pbagos/juchmme. Supplementary information: Supplementary data are available at Bioinformatics online.
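A minimal sketch of the forward algorithm at the core of any discrete-symbol HMM toolkit (a two-state toy model; this is not JUCHMME's API):

```python
import numpy as np

start = np.array([0.6, 0.4])                  # initial state probabilities
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])                # state transition matrix
emit = np.array([[0.9, 0.1],
                 [0.2, 0.8]])                 # P(symbol | state)

def forward_likelihood(obs):
    """Probability of an observation sequence under the HMM (forward pass)."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()

print(forward_likelihood([0, 0, 1, 0]))
```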
APA, Harvard, Vancouver, ISO, and other styles
45

Finkel, Justin, Dorian S. Abbot, and Jonathan Weare. "Path Properties of Atmospheric Transitions: Illustration with a Low-Order Sudden Stratospheric Warming Model." Journal of the Atmospheric Sciences 77, no. 7 (July 1, 2020): 2327–47. http://dx.doi.org/10.1175/jas-d-19-0278.1.

Full text
Abstract:
Many rare weather events, including hurricanes, droughts, and floods, dramatically impact human life. To accurately forecast these events and characterize their climatology requires specialized mathematical techniques to fully leverage the limited data that are available. Here we describe transition path theory (TPT), a framework originally developed for molecular simulation, and argue that it is a useful paradigm for developing mechanistic understanding of rare climate events. TPT provides a method to calculate statistical properties of the paths into the event. As an initial demonstration of the utility of TPT, we analyze a low-order model of sudden stratospheric warming (SSW), a dramatic disturbance to the polar vortex that can induce extreme cold spells at the surface in the midlatitudes. SSW events pose a major challenge for seasonal weather prediction because of their rapid, complex onset and development. Climate models struggle to capture the long-term statistics of SSW, owing to their diversity and intermittent nature. We use a stochastically forced Holton–Mass-type model with two stable states, corresponding to radiative equilibrium and a vacillating SSW-like regime. In this stochastic bistable setting, from certain probabilistic forecasts TPT facilitates estimation of dominant transition pathways and return times of transitions. These “dynamical statistics” are obtained by solving partial differential equations in the model’s phase space. With future application to more complex models, TPT and its constituent quantities promise to improve the predictability of extreme weather events through both generation and principled evaluation of forecasts.
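A discrete-state sketch of the committor computation central to TPT, on a four-state toy chain standing in for the model's phase space (transition probabilities are invented):

```python
import numpy as np

# Forward committor q(i): probability of reaching set B (e.g., the SSW regime)
# before set A (radiative equilibrium) starting from state i. Interior states
# satisfy q = P q, with q = 0 on A and q = 1 on B.
P = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.20, 0.60, 0.20, 0.00],
    [0.00, 0.20, 0.60, 0.20],
    [0.00, 0.00, 0.10, 0.90],
])
A, B = [0], [3]
interior = [1, 2]

# Solve (I - P) q = P[:, B] restricted to the interior states.
M = (np.eye(len(P)) - P)[np.ix_(interior, interior)]
rhs = P[np.ix_(interior, B)].sum(axis=1)
q_int = np.linalg.solve(M, rhs)

q = np.zeros(len(P))
q[B] = 1.0
q[interior] = q_int
print(q.round(3))   # committor rises monotonically toward the SSW state
```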
APA, Harvard, Vancouver, ISO, and other styles
46

Jung, Eui Guk, and Joon Hong Boo. "A Novel Analytical Modeling of a Loop Heat Pipe Employing Thin-Film Theory: Part II—Experimental Validation." Energies 12, no. 12 (June 22, 2019): 2403. http://dx.doi.org/10.3390/en12122403.

Full text
Abstract:
Part I of this study introduced a mathematical model capable of predicting the steady-state performance of a loop heat pipe (LHP) with enhanced rationality and accuracy. Additionally, investigation of the effect of design parameters on the LHP thermal performance was also reported in Part I. The objective of Part II is to experimentally verify the utility of the steady-state analytical model proposed in Part I. To this end, an experimental device comprising a flat-evaporator LHP (FLHP) was designed and fabricated. Methanol was used as the working fluid, and stainless steel as the wall and tubing-system material. The capillary structure in the evaporator was made of polypropylene wick of porosity 47%. To provide vapor removal passages, axial grooves with inverted trapezoidal cross-section were machined at the inner wall of the flat evaporator. Both the evaporator and condenser components measure 40 × 50 mm (W × L). The inner diameters of the tubes constituting the liquid- and vapor-transport lines measure 2 mm and 4 mm, respectively, and the lengths of these lines are 0.5 m. The maximum input thermal load was 90 W in the horizontal alignment with a coolant temperature of 10 °C. Validity of the said steady-state analysis model was verified for both the flat and cylindrical evaporator LHP (CLHP) models in the light of experimental results. The observed difference in temperature values between the proposed model and experiment was less than 4% based on the absolute temperature. Correspondingly, a maximum error of 6% was observed with regard to thermal resistance. The proposed model is considered capable of providing more accurate performance prediction of an LHP.
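For reference, the thermal resistance figure quoted above follows the standard definition R = (T_evaporator − T_condenser)/Q_input; a one-line sketch with hypothetical readings (not the paper's measurements):

```python
# Thermal resistance of an LHP from measured temperatures and input power.
T_evap, T_cond, Q_in = 72.4, 18.9, 90.0     # degC, degC, W (hypothetical values)
R = (T_evap - T_cond) / Q_in
print(f"thermal resistance: {R:.3f} K/W")
```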
APA, Harvard, Vancouver, ISO, and other styles
47

Yao, Shixuan, Xiaochen Liu, Yinghui Zhang, and Ze Cui. "An approach to solving optimal control problems of nonlinear systems by introducing detail-reward mechanism in deep reinforcement learning." Mathematical Biosciences and Engineering 19, no. 9 (2022): 9258–90. http://dx.doi.org/10.3934/mbe.2022430.

Full text
Abstract:
In recent years, dynamic programming and reinforcement learning theory have been widely used to solve nonlinear control systems (NCS). Many achievements have been made in the construction of network models and in system stability analysis, but there is little research on establishing a control strategy based on the detailed requirements of the control process. Spurred by this trend, this paper proposes a detail-reward mechanism (DRM) that constructs a reward function composed of individual detail evaluation functions in order to replace the utility function in the Hamilton-Jacobi-Bellman (HJB) equation. This method is then introduced into a wider range of deep reinforcement learning algorithms to solve optimization problems in NCS. After a mathematical description of the relevant characteristics of NCS, the stability of the iterative control law is proved by a Lyapunov function. With the inverted pendulum system as the experimental object, the dynamic environment is designed and the reward function is established using the DRM. Finally, three deep reinforcement learning algorithm models, based on Deep Q-Networks, policy gradient and actor-critic, are designed in the dynamic environment, and the effects of different reward functions on the experimental accuracy are compared. The experimental results show that in NCS, using the DRM to replace the utility function in the HJB equation better matches the designer's detailed requirements for the whole control process. By observing the characteristics of the system, designing the reward function and selecting the appropriate deep reinforcement learning algorithm model, the optimization problem of NCS can be solved.
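A hedged sketch of what a detail-reward mechanism might look like for the inverted pendulum: the reward is composed of individual detail evaluation terms rather than a single utility (weights and scales below are invented, not the paper's):

```python
import numpy as np

def detail_reward(theta, theta_dot, x, action):
    """Composite reward from individual detail evaluation terms (DRM sketch)."""
    r_angle = np.exp(-(theta / 0.2) ** 2)        # keep the pole upright
    r_speed = np.exp(-(theta_dot / 1.0) ** 2)    # damp oscillation
    r_center = np.exp(-(x / 0.5) ** 2)           # stay near the track center
    r_effort = -0.01 * action**2                 # penalize control effort
    # The weights encode the designer's detailed process requirements.
    return 0.5 * r_angle + 0.3 * r_speed + 0.2 * r_center + r_effort

print(detail_reward(theta=0.05, theta_dot=0.3, x=0.1, action=2.0))
```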
APA, Harvard, Vancouver, ISO, and other styles
48

Joslyn, Cliff A., Lauren Charles, Chris DePerno, Nicholas Gould, Kathleen Nowak, Brenda Praggastis, Emilie Purvine, Michael Robinson, Jennifer Strules, and Paul Whitney. "A Sheaf Theoretical Approach to Uncertainty Quantification of Heterogeneous Geolocation Information." Sensors 20, no. 12 (June 17, 2020): 3418. http://dx.doi.org/10.3390/s20123418.

Full text
Abstract:
Integration of multiple, heterogeneous sensors is a challenging problem across a range of applications. Prominent among these are multi-target tracking, where one must combine observations from different sensor types in a meaningful and efficient way to track multiple targets. Because different sensors have differing error models, we seek a theoretically justified quantification of the agreement among ensembles of sensors, both overall for a sensor collection, and also at a fine-grained level specifying pairwise and multi-way interactions among sensors. We demonstrate that the theory of mathematical sheaves provides a unified answer to this need, supporting both quantitative and qualitative data. Furthermore, the theory provides algorithms to globalize data across the network of deployed sensors, and to diagnose issues when the data do not globalize cleanly. We demonstrate and illustrate the utility of sheaf-based tracking models based on experimental data of a wild population of black bears in Asheville, North Carolina. A measurement model involving four sensors, distributed among the bears and the team of scientists charged with tracking their location, is deployed. This provides a sheaf-based integration model which is small enough to fully interpret, but of sufficient complexity to demonstrate the sheaf’s ability to recover a holistic picture of the locations and behaviors of both individual bears and the bear-human tracking system. A statistical approach was developed in parallel for comparison, a dynamic linear model which was estimated using a Kalman filter. This approach also recovered bear and human locations and sensor accuracies. When the observations are normalized into a common coordinate system, the structure of the dynamic linear observation model recapitulates the structure of the sheaf model, demonstrating the canonicity of the sheaf-based approach. However, when the observations are not so normalized, the sheaf model still remains valid.
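A toy analogue of the dynamic linear model used for comparison: a one-dimensional Kalman filter fusing two position sensors with different error models (all numbers are illustrative; this is not the bear dataset):

```python
import numpy as np

Q = 0.05                          # random-walk process noise
H = np.array([[1.0], [1.0]])      # both sensors observe position directly
R = np.diag([0.5, 2.0])           # sensor 1 accurate, sensor 2 noisy

x, P = 0.0, 1.0                   # state estimate and its variance
rng = np.random.default_rng(1)
truth = 0.0
for _ in range(20):
    truth += rng.normal(0.0, np.sqrt(Q))
    z = truth + rng.multivariate_normal([0.0, 0.0], R)   # stacked measurement
    P = P + Q                                            # predict (identity dynamics)
    S = H @ np.array([[P]]) @ H.T + R                    # innovation covariance
    K = np.array([[P]]) @ H.T @ np.linalg.inv(S)         # Kalman gain, shape (1, 2)
    x = x + (K @ (z - x)).item()                         # fused update
    P = ((1.0 - K @ H) * P).item()
print(f"final fused estimate {x:.3f} vs truth {truth:.3f}")
```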
APA, Harvard, Vancouver, ISO, and other styles
49

Madbouly, Magda M., Yasser F. Mokhtar, and Saad M. Darwish. "Quantum Game Application to Recovery Problem in Mobile Database." Symmetry 13, no. 11 (October 20, 2021): 1984. http://dx.doi.org/10.3390/sym13111984.

Full text
Abstract:
Mobile Computing (MC) is a relatively new concept in the world of distributed computing that is rapidly gaining traction. Due to the dynamic nature of mobility and the limited bandwidth available on wireless networks, this new computing environment for mobile devices presents significant challenges in terms of fault-tolerant system development. As a consequence, traditional fault-tolerance techniques are inherently inapplicable to these systems. External circumstances often expose mobile systems to failures in communication or data storage. In this article, a quantum game theory-based recovery model is proposed for the case of a mobile host’s failure. Several state-of-the-art recovery protocols are selected and analyzed in order to identify the most important variables influencing the recovery mechanism, such as the number of processes, the time needed to send messages, and the number of messages logged over time. Quantum game theory is then adapted to select the optimal recovery method for the given environment variables using the proposed utility matrix of three players. Game theory is the study of mathematical models of situations in which intelligent rational decision-makers face conflicting interests (alternative recovery procedures). The purpose of this study is to present an adaptive algorithm based on quantum game theory for selecting the most efficient context-aware computing recovery procedure. The transition from a classical to a quantum domain is accomplished in the proposed model by treating strategies as a Hilbert space rather than a discrete set and then allowing for the existence of linear superpositions between classical strategies; this naturally increases the number of possible strategic choices available to each player from a numerable to a continuous set. Numerical data are provided to demonstrate feasibility.
APA, Harvard, Vancouver, ISO, and other styles
50

Falk, Carl F., and Scott Monroe. "On Lagrange Multiplier Tests in Multidimensional Item Response Theory: Information Matrices and Model Misspecification." Educational and Psychological Measurement 78, no. 4 (July 6, 2017): 653–78. http://dx.doi.org/10.1177/0013164417714506.

Full text
Abstract:
Lagrange multiplier (LM) or score tests have seen renewed interest for the purpose of diagnosing misspecification in item response theory (IRT) models. LM tests can also be used to test whether parameters differ from a fixed value. We argue that the utility of LM tests depends on both the method used to compute the test and the degree of misspecification in the initially fitted model. We demonstrate both of these points in the context of a multidimensional IRT framework. Through an extensive Monte Carlo simulation study, we examine the performance of LM tests under varying degrees of model misspecification, model size, and different information matrix approximations. A generalized LM test designed specifically for use under misspecification, which has apparently not been previously studied in an IRT framework, performed the best in our simulations. Finally, we reemphasize caution in using LM tests for model specification searches.
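A generic sketch of the LM statistic itself, LM = gᵀ I⁻¹ g, whose null distribution is chi-squared; which information-matrix estimate plays the role of I is precisely the choice the paper studies (the numbers below are illustrative):

```python
import numpy as np
from scipy import stats

# Score vector g for the restricted parameters, evaluated at the fitted model,
# and an information-matrix estimate I (expected, observed, or sandwich in
# practice; toy values here).
g = np.array([0.8, -0.3])
I = np.array([[2.0, 0.4],
              [0.4, 1.5]])

lm = g @ np.linalg.solve(I, g)          # LM = g' I^{-1} g
pval = stats.chi2.sf(lm, df=len(g))     # reference chi-square distribution
print(f"LM = {lm:.3f}, p = {pval:.3f}")
```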
APA, Harvard, Vancouver, ISO, and other styles