Dissertations / Theses on the topic 'Decision making – Mathematical models'


Consult the top 50 dissertations / theses for your research on the topic 'Decision making – Mathematical models.'


1

Tecle, Aregai 1948. "Choice of multicriterion decision making techniques for watershed management." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/191145.

Abstract:
The problem of selecting a multicriterion decision making (MCDM) technique for watershed resources management is investigated. Of explicit concern in this research is the matching of a watershed resources management problem with an appropriate MCDM technique. More than seventy techniques are recognized in a review of the MCDM literature. A new classification scheme is developed to categorize these techniques into four groups on the basis of each algorithm's structural formulation and the possible results obtained by using the algorithm. Other standard classification schemes are also discussed to better understand the differences and similarities among the techniques and thereby demonstrate the importance of matching a particular multicriterion decision problem with an appropriate MCDM technique. The desire to select the most appropriate MCDM technique for watershed resources management led to the development of 49 technique choice criteria and an algorithm for selecting a technique. The algorithm divides the technique choice criteria into four groups: (1) DM/analyst-related criteria, (2) technique-related criteria, (3) problem-related criteria, and (4) solution-related criteria. To analyze the applicability of MCDM techniques to a particular problem, the levels of performance of the techniques in solving the problem are first evaluated with respect to the choice criteria in each criterion group, resulting in four sets of preference rankings. These four sets are then linearly combined using a set of trade-off parameters to determine the overall preference ranking of the techniques. The MCDM technique selection process is itself modeled as a multiobjective problem. In this research, for example, a set of 15 techniques with which the author is familiar is analyzed for appropriateness in solving a watershed resources management problem.
The performance levels of the 15 MCDM techniques in solving such a problem are evaluated with respect to a selected set of technique choice criteria in each criterion group, leading to a set of four evaluation matrices of choice criteria versus alternative techniques. This technique choice problem is then analyzed using a two-stage evaluation procedure known as composite programming. The final product of the process is a preference ranking of the alternative MCDM techniques.
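The linear-combination step described above can be sketched in a few lines; the technique names, scores, and trade-off weights below are invented for illustration and are not the dissertation's data:

```python
# Illustrative sketch: combine four criterion-group scores for candidate
# MCDM techniques into an overall preference ranking via trade-off weights.
# All names and numbers are hypothetical.

def composite_ranking(group_scores, tradeoffs):
    """group_scores: group -> {technique: score in [0, 1]};
    tradeoffs: group -> weight, with weights summing to 1."""
    techniques = next(iter(group_scores.values())).keys()
    overall = {
        t: sum(tradeoffs[g] * scores[t] for g, scores in group_scores.items())
        for t in techniques
    }
    # Rank techniques from highest to lowest combined score.
    return sorted(overall, key=overall.get, reverse=True)

scores = {
    "dm_analyst": {"ELECTRE": 0.7, "AHP": 0.9, "MAUT": 0.6},
    "technique":  {"ELECTRE": 0.8, "AHP": 0.6, "MAUT": 0.7},
    "problem":    {"ELECTRE": 0.5, "AHP": 0.8, "MAUT": 0.85},
    "solution":   {"ELECTRE": 0.6, "AHP": 0.7, "MAUT": 0.8},
}
weights = {"dm_analyst": 0.25, "technique": 0.25, "problem": 0.25, "solution": 0.25}
print(composite_ranking(scores, weights))
```

With equal trade-off weights this reduces to a simple average of the four group scores; unequal weights let the decision maker emphasize, say, the problem-related criteria.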
2

Tabaeh, Izadi Masoumeh. "On knowledge representation and decision making under uncertainty." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103012.

Abstract:
Designing systems with the ability to make optimal decisions under uncertainty is one of the goals of artificial intelligence. However, in many applications the design of optimal planners is complicated due to imprecise inputs and uncertain outputs resulting from stochastic dynamics. Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical framework to model these kinds of problems. However, the high computational demand of solution methods for POMDPs is a drawback for applying them in practice.
In this thesis, we present a two-fold approach for improving the tractability of POMDP planning. First, we focus on designing good heuristics for POMDP approximation algorithms. We aim to scale up the efficiency of a class of POMDP approximations called point-based planning methods by designing a good planning space. We study the effect of three properties of reachable belief state points that may influence the performance of point-based approximation methods. Second, we investigate approaches to designing good controllers using an alternative representation of systems with partial observability called Predictive State Representation (PSR). This part of the thesis advocates the usefulness and practicality of PSRs in planning under uncertainty. We also attempt to move some useful characteristics of the PSR model, which has a predictive view of the world, to the POMDP model, which has a probabilistic view of the hidden states of the world. We propose a planning algorithm motivated by the connections between the two models.
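As background to the POMDP framework discussed above, the belief-state update (the Bayes filter whose reachable points point-based planners operate on) can be sketched minimally; the two-state model below is a toy example, not one from the thesis:

```python
# Minimal POMDP belief update: b'(s') ∝ O[a][s'][o] * sum_s T[a][s][s'] * b(s).
# The two-state "listen" model is invented for illustration.

def belief_update(belief, action, obs, T, O):
    n = len(belief)
    new_b = [O[action][s2][obs] * sum(T[action][s][s2] * belief[s] for s in range(n))
             for s2 in range(n)]
    z = sum(new_b)                     # normalizing constant
    return [p / z for p in new_b]

# Two hidden states; the "listen" action leaves the state unchanged and
# yields the correct observation with probability 0.85.
T = {"listen": [[1.0, 0.0], [0.0, 1.0]]}
O = {"listen": [[0.85, 0.15], [0.15, 0.85]]}
b = belief_update([0.5, 0.5], "listen", 0, T, O)
print(b)  # belief shifts toward state 0
```

Point-based methods approximate the value function only at a finite set of such belief points, which is why the choice of planning space studied in the thesis matters.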
3

Nagashima, Kazunobu. "Inference system for selection of an appropriate multiple attribute decision making method." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9942.

4

Callies, Jan-Peter. "Conservative decision-making and inference in uncertain dynamical systems." Thesis, University of Oxford, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.711722.

5

Boyce, John S. "Linking PPBES and the POM with capabilities." Thesis, Monterey, Calif. : Naval Postgraduate School, 2006. http://bosun.nps.edu/uhtbin/hyperion.exe/06Dec%5FBoyce.pdf.

6

Duclos, Gosselin Louis. "How Managers Can Use Predictive Analysis and Mathematical Models as Decision Making Tools." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/26771/26771.pdf.

7

Heller, Collin M. "A computational model of engineering decision making." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50272.

Abstract:
The research objective of this thesis is to formulate and demonstrate a computational framework for modeling the design decisions of engineers. This framework is intended to be descriptive in nature as opposed to prescriptive or normative; the output of the model represents a plausible result of a designer's decision making process. The framework decomposes the decision into three elements: the problem statement, the designer's beliefs about the alternatives, and the designer's preferences. Multi-attribute utility theory is used to capture designer preferences for multiple objectives under uncertainty. Machine-learning techniques are used to store the designer's knowledge and to make Bayesian inferences regarding the attributes of alternatives. These models are integrated into the framework of a Markov decision process to simulate multiple sequential decisions. The overall framework enables the designer's decision problem to be transformed into an optimization problem statement; the simulated designer selects the alternative with the maximum expected utility. Although utility theory is typically viewed as a normative decision framework, the perspective in this research is that the approach can be used in a descriptive context for modeling rational and non-time critical decisions by engineering designers. This approach is intended to enable the formalisms of utility theory to be used to design human subjects experiments involving engineers in design organizations based on pairwise lotteries and other methods for preference elicitation. The results of these experiments would substantiate the selection of parameters in the model to enable it to be used to diagnose potential problems in engineering design projects. The purpose of the decision-making framework is to enable the development of a design process simulation of an organization involved in the development of a large-scale complex engineered system such as an aircraft or spacecraft. 
The decision model will allow researchers to determine the broader effects of individual engineering decisions on the aggregate dynamics of the design process and the resulting performance of the designed artifact itself. To illustrate the model's applicability in this context, the framework is demonstrated on three example problems: a one-dimensional decision problem, a multidimensional turbojet design problem, and a variable fidelity analysis problem. Individual utility functions are developed for designers in a requirements-driven design problem and then combined into a multi-attribute utility function. Gaussian process models are used to represent the designer's beliefs about the alternatives, and a custom covariance function is formulated to more accurately represent a designer's uncertainty in beliefs about the design attributes.
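The decision rule at the heart of the framework, selecting the alternative with maximum expected multi-attribute utility under the designer's beliefs, can be sketched as follows; the additive utility form, attribute names, and belief samples are illustrative assumptions (the thesis itself uses Gaussian process models for beliefs):

```python
# Sketch of max-expected-utility alternative selection. Each alternative's
# beliefs are represented by sampled attribute-utility tuples standing in
# for posterior draws; all data here are invented.
import statistics

def multi_attribute_utility(attrs, weights):
    # Additive form: U(x) = sum_i w_i * u_i(x_i), with each u_i in [0, 1].
    return sum(w * u for w, u in zip(weights, attrs))

def choose(alternatives, weights):
    """alternatives: name -> list of sampled (u_1, ..., u_k) tuples."""
    expected = {
        name: statistics.mean(multi_attribute_utility(s, weights) for s in samples)
        for name, samples in alternatives.items()
    }
    return max(expected, key=expected.get)

beliefs = {
    "design_A": [(0.9, 0.4), (0.7, 0.5)],   # (thrust utility, weight utility)
    "design_B": [(0.6, 0.9), (0.5, 0.8)],
}
print(choose(beliefs, weights=(0.5, 0.5)))
```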
8

Burth, Angela J. "Virtual military markets." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Sep%5FBurth.pdf.

9

Fatah, Khwazbeen Saida. "Preference modelling approaches based on cumulative functions using simulation with applications." Thesis, University of South Wales, 2009. https://pure.southwales.ac.uk/en/studentthesis/preference-modelling-approaches-based-on-cumulative-functions-using-simulation-with-applications(60653456-e002-4a64-976a-8ae4e4fd4a7e).html.

Abstract:
In decision making problems under uncertainty, the Mean Variance Model (MVM) consistent with Expected Utility Theory (EUT) plays an important role in ranking preferences over alternative options. Despite its wide use, this model is appropriate only when the random variables representing the alternative options are normally distributed and the utility function to be maximized is quadratic; both are restrictive requirements that actual applications rarely satisfy. In this research, a novel methodology is adopted to develop generalized models that reduce the deficiencies of the existing models in solving large-scale decision problems, along with applications to real-world problems. More specifically, two approaches are developed for eliciting preferences between pairs of alternative options: the first is based on the Mean Variance Model (MVM), consistent with Expected Utility Theory (EUT), and the second is based on the Analytic Hierarchy Process (AHP). The main innovation in the first approach is the reformulation of MVM in terms of cumulative functions using simulation. Two models under this approach are introduced: the first deals with ranking preferences between pairs of lotteries/options with non-negative outcomes only, while the second, for risk modelling, is a risk-preference model concerning normalized lotteries that represent risk factors, each obtained from a multiplicative decomposition of a lottery into its mean multiplied by a risk factor. Both approximation models, which rank preferences using the determined values of expected utility, have the potential to accommodate various distribution functions with different utility functions and are capable of handling decision problems, especially those encountered in financial economics. The study then reformulates the second approach, AHP: a new simulation-based algorithm introduces an approximation method that restricts the level of inherent uncertainty to a certain limit.
The research further proposes an integrated preference-based AHP model with a novel approximate stepwise algorithm that combines the two modified approaches, MVM and AHP: it multiplies the value of expected utility determined by the modified MVM by the weight obtained from the AHP to produce an aggregated weight indicator. The new integrated weight scale is an accurate and flexible tool that can be employed efficiently to solve decision making problems in scenarios arising in financial economics. Finally, to illustrate how the integrated model can be used as a practical methodology for real-life selection problems, this research explores the first empirical case study of the Tender Selection Process (TSP) in the Kurdistan Region (KR) of Iraq; it is an inductive and comprehensive investigation of TSP, which has received little attention in the region, and is regarded as a significant contribution of this research. The application of the proposed model to this case study shows that, for the evaluation of construction tenders, the integrated approach is an appropriate model and can be easily modified to suit the specific conditions of the proposed project. Using simulation, the generated data allow the creation of a feedback system that can be used to evaluate future projects, in addition to making data handling easier and the evaluation process less complex and time-consuming.
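The mean-variance comparison underlying the first approach can be sketched with plain Monte Carlo simulation; the two lotteries, the risk-aversion parameter, and the score mean − λ·variance are illustrative stand-ins, not the thesis's cumulative-function formulation:

```python
# Hypothetical sketch: rank two lotteries by a simulated mean-variance score.
# Higher score is preferred; lambda (risk_aversion) penalizes variance.
import random

def mv_score(sample_fn, risk_aversion=0.5, n=20000, seed=1):
    rng = random.Random(seed)
    draws = [sample_fn(rng) for _ in range(n)]
    mean = sum(draws) / n
    var = sum((x - mean) ** 2 for x in draws) / n
    return mean - risk_aversion * var

safe  = mv_score(lambda r: r.uniform(9, 11))   # mean ~10, low spread
risky = mv_score(lambda r: r.uniform(0, 20))   # mean ~10, high spread
print(safe > risky)  # a risk-averse score prefers the safe lottery
```

Because the score is computed from simulated draws rather than a closed form, the same machinery works for arbitrary outcome distributions, which is the motivation the abstract gives for a simulation-based reformulation.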
10

Zhou, Sida. "The Development and Evaluation of Aggregation Methods for Group Pairwise Comparison Judgments." PDXScholar, 1996. https://pdxscholar.library.pdx.edu/open_access_etds/1222.

Abstract:
The basic problem of decision making is to choose the best alternative from a set of competing alternatives that are evaluated under conflicting criteria. In general, the process is to evaluate decision elements by quantifying the subjective judgments. The Analytic Hierarchy Process (AHP) provides us with a comprehensive framework for solving such problems. As pointed out by Saaty, AHP "enables us to cope with the intuitive, the rational, and the irrational, all at the same time, when we make multicriteria and multiactor decisions". Furthermore, in most organizations decisions are made collectively, regardless of whether the organization is public or private. It is sometimes difficult to achieve consensus among group members, or for all members of a group to meet. The purpose of this dissertation was two-fold. First, we developed a new aggregation method - the Minimum Distance Method (MDM) - to support the group decision process and to help decision makers achieve consensus within the framework of AHP. Second, we evaluated the performance of aggregation methods using accuracy and group disagreement criteria. The evaluations were performed through simulation and empirical tests. The MDM:
• employs the general distance concept, which suits the compromise nature of group decision making;
• preserves all of the characteristics of the functional equations approach proposed by Aczel and Saaty;
• is based on a goal programming model, which is easy to solve using commercial software such as LINDO;
• provides a weighted membership capability for participants;
• allows for sensitivity analysis to investigate the effect of the importance levels of decision makers in the group.
The conclusions include the following:
• Simulation and empirical tests show that the two most important factors in the aggregation of pairwise comparison judgments are the probability distribution of error terms and the aggregation method.
• Selection of the appropriate aggregation method can result in significant improvements in decision quality.
• The MDM outperforms the other aggregation methods when the pairwise comparison judgments have large variances.
• Some of the prioritization methods, such as EV[AA'], EV[A'A], and the arithmetic and geometric means of EV[AA'] and EV[A'A], can be dropped from consideration due to their poor performance.
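For context, the functional-equations (geometric mean) aggregation of Aczel and Saaty that MDM is benchmarked against can be sketched as follows; the two judgment matrices are invented:

```python
# Sketch of geometric-mean aggregation of group pairwise-comparison matrices,
# followed by row-geometric-mean prioritization. Example matrices are invented.
import math

def aggregate_geometric(matrices):
    n, k = len(matrices[0]), len(matrices)
    # Element-wise geometric mean preserves the reciprocal property a_ji = 1/a_ij.
    return [[math.prod(m[i][j] for m in matrices) ** (1 / k)
             for j in range(n)] for i in range(n)]

def priorities(matrix):
    n = len(matrix)
    gm = [math.prod(row) ** (1 / n) for row in matrix]   # row geometric means
    total = sum(gm)
    return [g / total for g in gm]                        # normalize to sum 1

dm1 = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]
dm2 = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]
group = aggregate_geometric([dm1, dm2])
p = priorities(group)
print(p)   # weights sum to 1; the first criterion dominates
```

MDM replaces this closed-form aggregation with a goal-programming model that minimizes a distance to the individual judgments, which is what gives it the compromise interpretation described above.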
11

Abbas, Mustafa Sulaiman. "Consistency Analysis for Judgment Quantification in Hierarchical Decision Model." PDXScholar, 2016. https://pdxscholar.library.pdx.edu/open_access_etds/2699.

Abstract:
The objective of this research is to establish consistency thresholds linked to alpha (α) levels for the judgment quantification method of the Hierarchical Decision Model (HDM). Measuring consistency in order to control it is a crucial and inseparable part of any AHP/HDM experiment. Researchers on the subject recommend establishing thresholds that are statistically based on hypothesis testing and are linked to the number of decision variables and the (α) level. Such thresholds provide the means with which to evaluate the soundness and validity of an AHP/HDM decision. The linkage of thresholds to (α) levels allows decision makers to set an inconsistency tolerance appropriate to the situation at hand. The measurements of judgments are unreliable in the absence of an inconsistency measure with acceptable limits. All of this is essential to the credibility of the entire decision making process and hence is extremely useful for practitioners and researchers alike. This research also includes distribution fitting for the inconsistencies, a valuable part of the results that adds usefulness, practicality, and insight. The excellent fits obtained give confidence that the statistical inferences based on the fitted distributions accurately reflect the HDM inconsistency measure.
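For reference, the conventional AHP consistency computation can be sketched as follows; the example matrix is invented, and the fixed CR < 0.10 rule of thumb it implements is exactly the kind of cutoff that statistically derived, alpha-linked thresholds are meant to replace:

```python
# Sketch of Saaty's consistency index/ratio for a pairwise-comparison matrix.
# Priorities come from row geometric means; lambda_max is estimated from A·w.
import math

RI = {3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random indices by matrix size

def consistency_ratio(A):
    n = len(A)
    w = [math.prod(row) ** (1 / n) for row in A]
    s = sum(w)
    w = [x / s for x in w]
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n   # lambda_max estimate
    ci = (lam - n) / (n - 1)                        # consistency index
    return ci / RI[n]                               # consistency ratio

consistent = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]   # perfectly consistent
print(consistency_ratio(consistent))  # ~0.0; CR < 0.10 is the usual cutoff
```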
12

Nori, Vijay S. "Algorithms for dynamic and stochastic logistics problems." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/24513.

13

Cho, Young Jin. "Effects of decomposition level on the intrarater reliability of multiattribute alternative evaluation." Diss., This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-06062008-171537/.

14

Levy, Bat-Sheva. "Fuzzy logic, a model to explain students' mathematical decision-making." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0026/MQ51391.pdf.

15

Edirisinghe, Nalin Chanaka Perera. "Essays on bounding stochastic programming problems." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/30801.

Abstract:
Many planning problems involve choosing a set of optimal decisions for a system in the face of uncertainty of elements that may play a central role in the way the system is analyzed and operated. During the past decade, there has been a renewed interest in the modelling, analysis, and solution of such problems due to a remarkable development of both new theoretical results and novel computational techniques in stochastic optimization. A prominent approach is to develop upper and lower bounding approximations to the problem along with procedures to sharpen bounds until an acceptable tolerance is satisfied. The contributions of this dissertation are concerned with the latter approach. The thesis first studies the stochastic linear programming problem with randomness in both the objective coefficients and the constraints. A convex-concave saddle property of the value function is utilized to derive new bounding techniques which generalize previously known results. These approximations require discretizing bounded domains of the random variables in such a way that tight upper and lower bounds result. Such techniques will prove attractive with the recent advances in large-scale linear programming. The above results are also extended to obtain new upper and lower bounds when the domains of random variables are unbounded. While these bounds are tight, the approximating models are large-scale deterministic linear programs. In particular, with a proposed order-cone decomposition for the domains, these linear programs are well-structured, thus enabling one to use efficient techniques for solution, such as parallel computation. The thesis next considers convex stochastic programs. Using aggregation concepts from the deterministic literature, new bounds are developed for the problem which are computable using standard convex programming algorithms. Finally, the discussion is focused on a stochastic convex program arising in a certain resource allocation problem. 
Exploiting the problem structure, bounds are developed via the Karush-Kuhn-Tucker conditions. Rather than discretizing domains, these approximations advocate replacing difficult multidimensional integrals by a series of simple univariate integrals. Such practice allows one to preserve differentiability properties so that smooth convex programming methods can be applied for solution.
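The basic bounding idea that the thesis sharpens can be illustrated in one dimension: for a convex function, Jensen's inequality gives a lower bound on the expectation, and the Edmundson-Madansky two-point distribution on the endpoints gives an upper bound. The quadratic function and uniform random variable below are illustrative only:

```python
# Sketch of Jensen lower bound and Edmundson-Madansky upper bound for E[f(X)]
# with f convex and X supported on [a, b]. Example data are invented.

def jensen_lower(f, mean):
    # Jensen: f(E[X]) <= E[f(X)] for convex f.
    return f(mean)

def edmundson_madansky_upper(f, a, b, mean):
    # Replace X by the two-point distribution on {a, b} with the same mean;
    # for convex f this over-estimates the expectation.
    p_b = (mean - a) / (b - a)
    return (1 - p_b) * f(a) + p_b * f(b)

f = lambda x: x * x          # convex stand-in for a recourse function
a, b, mean = 0.0, 2.0, 1.0   # X ~ uniform on [0, 2], so E[f(X)] = 4/3
lo = jensen_lower(f, mean)
hi = edmundson_madansky_upper(f, a, b, mean)
print(lo, hi)  # the true value 4/3 lies between the bounds
```

Refining the partition of [a, b] and applying these bounds piecewise tightens the gap, which is the sharpening loop the abstract describes.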
16

Xu, Li Da. "Fuzzy multiobjective mathematical programming in economic systems analysis: design and method." PDXScholar, 1986. https://pdxscholar.library.pdx.edu/open_access_etds/471.

Abstract:
Economic systems analysis is a systems analysis technique for setting out the factors that have to be taken into account in making economic systems decisions. The inquiring and operational systems of the technique are almost exclusively designed for well-structured systems. When economic systems analysis is reviewed against systems thinking, there is a growing tendency to discard the analytical approach as inappropriate for dealing with ill-structured issues. Therefore, economic systems analysis needs inquiring and operational systems that are appropriate for ill-structured systems. The foregoing leads to the introduction of an extensive methodology. The weakness of economic systems analysis methodology can mainly be traced to the philosophical paradigm upon which the technique is based. In this study, four main aspects of both the inquiring and operational systems of economic systems analysis are explored: (1) a new philosophical paradigm is proposed as the foundation of the general methodology, in place of the traditional Newtonian-Kantian inquiring system; (2) the new philosophical paradigm needs a new problem formulation and analysis space, so a multidimensional, synergetic, and autopoietic model is proposed for systems synthesis and systems analysis; (3) the new philosophical paradigm is characterized as a Singerian inquiry, and as a result Marglin's multiobjective analysis is replaced by a Singerian multiobjective analysis; (4) Markov communication theory and fuzzy sets theory are proposed as tools for handling complexity and are introduced for systems design and multiple objective analysis. This study reports the first application of a Singerian fuzzy multiobjective mathematical algorithm in economic systems analysis, concluding that fuzzy systems theory, and especially Markov communication theory, can realize approximate reasoning in economic systems analysis.
Fuzzy modeling offers a deeper understanding of complexity and a means of expressing the insights that result from that understanding; moreover, it provides a means of incorporating subjectivity and adaptation. Therefore, fuzzy modeling increases the validity of the systems approach for dealing with ill-structured systems. The proposed method represents an important theoretical improvement of Marglin's approach. The results, however, also hold practical importance, for they are of practical interest to systems analysts who would improve systems design and multiobjective analysis.
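One common concrete instance of fuzzy multiobjective analysis, though not necessarily the Singerian algorithm of this dissertation, is the max-min rule in the style of Zimmermann: each objective receives a linear membership function, and the alternative with the best worst-case membership wins. A sketch with invented data:

```python
# Sketch of fuzzy max-min multiobjective choice. Membership rises linearly
# from 0 at the worst acceptable level to 1 at the ideal level; the chosen
# alternative maximizes its minimum membership. All data are hypothetical.

def membership(value, worst, best):
    if best == worst:
        return 1.0
    m = (value - worst) / (best - worst)
    return max(0.0, min(1.0, m))        # clamp to [0, 1]

def max_min_choice(alternatives, goals):
    """alternatives: name -> objective values; goals: (worst, best) per objective."""
    degree = {
        name: min(membership(v, *g) for v, g in zip(vals, goals))
        for name, vals in alternatives.items()
    }
    return max(degree, key=degree.get), degree

alts = {"plan_A": (80, 30), "plan_B": (70, 55)}   # (output, employment)
goals = [(50, 100), (20, 70)]                     # (worst, best) levels
best, degrees = max_min_choice(alts, goals)
print(best, degrees)
```

The min operator is what encodes the "a plan is only as satisfactory as its weakest objective" attitude that fuzzy multiobjective models use in place of a single weighted sum.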
17

Burnett, Sulene. "A simplified numerical decision making toolbox for physical asset management decisions." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85626.

Abstract:
Thesis (MEng)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: The management of physical assets has become a popular field of study over recent years and is acknowledged in multiple disciplines worldwide. In this project, research on Physical Asset Management (PAM), maintenance, and decision making is presented. PAM is a complex subject and requires the participation of multiple disciplines in order to successfully manage physical assets. Moreover, the management of maintenance makes a big contribution to achieving successful PAM. Decision making is a core element of managing maintenance efficiently, both on the strategic and the operational level. Various methods and techniques can be used to aid the decision making process, such as using past experience, fixed decision making techniques, and techniques involving numerical calculations, to mention only a few. However, using numerical calculations to make decisions is not very popular, for various reasons: for example, the inherent complexity of the mathematics and the time required to execute such calculations. People tend to avoid complex numerical calculations and rather rely on past experience and the discussion of circulating opinions to make decisions. This is not ideal and can lead to inconsistent and inaccurate decisions. In this project, the importance of numerical decision making is researched, especially for maintenance-related decisions. The focus is placed on the simplification of numerical decision making techniques, with the aim of making them easy and quick to use in support of operational PAM decisions. Different decisions regarding PAM, especially decisions with regard to managing maintenance in order to achieve PAM, are discussed by means of a literature study. This is done to clarify the applicability of numerical decision making techniques to this type of decision. A few different available numerical techniques that can be used to support the decision making process are highlighted.
The decisions, together with the numerical decision making techniques, are evaluated in order to combine the most appropriate techniques in a simplified manner, so that the result can be used by anyone with the necessary knowledge of a specific system or operation. As a result, a simplified numerical decision making toolbox is developed that can support maintenance-related decisions. This toolbox is applied to a real-life situation by means of a case study, made possible by Anglo American Platinum Limited (Amplats). An evaluation and validation of the toolbox is done through the case study to determine whether it has value in practice or not.
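One of the simplest numerical techniques such a toolbox can package is a weighted-scoring (decision-matrix) method; the maintenance options, criteria, and weights below are illustrative, not taken from the case study:

```python
# Sketch of weighted scoring for maintenance decisions: each option is
# scored per criterion, criterion weights sum to 1, and options are ranked
# by the weighted sum. All names and numbers are hypothetical.

def weighted_score(scores, weights):
    return sum(s * w for s, w in zip(scores, weights))

def rank_options(options, weights):
    return sorted(options,
                  key=lambda name: weighted_score(options[name], weights),
                  reverse=True)

# Criteria: (cost saving, downtime reduction, safety), scored 1-10.
options = {
    "run_to_failure":  (8, 2, 3),
    "preventive":      (5, 7, 7),
    "condition_based": (4, 9, 8),
}
weights = (0.3, 0.4, 0.3)
print(rank_options(options, weights))
```

The appeal for operational use is exactly what the abstract argues: the arithmetic is trivial, so anyone who knows the system well enough to supply the scores can apply it quickly and consistently.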
18

Prat, Ortega Genís. "Attractor dynamics in perceptual decision making: from theoretical predictions to psychophysical experiments." Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/669850.

Abstract:
From time to time, humans and animals in general have to respond to certain stimuli, which may be ambiguous. A few years ago, before the introduction of the video assistant referee (VAR), football referees did not have it easy. One of the most famous plays in football history is "La Mano de Dios", the goal Maradona scored with his hand in the quarter-finals of the 1986 Football World Cup. Based on what he saw, the referee incorrectly decided that Maradona had not touched the ball with his hand and, thanks to that, Argentina went on to win the World Cup. Decisions based on external stimuli (in this case visual) are what we call perceptual decision making. In this thesis we have studied how the brain makes perceptual decisions under controlled experimental conditions. In these experiments, participants have to make a categorical decision about a specific feature of the presented stimuli. In particular, we studied the case in which the stimulus duration is controlled by the experimenter. During stimulus presentation, participants must accumulate evidence, and when the stimulus ends they must choose between two possible alternatives. These experiments are called 2-alternative forced choice (2AFC) tasks. From a computational point of view, the accumulation of stimulus evidence in 2AFC experiments has been studied in depth over recent decades. The standard models describing this cognitive function are based on diffusion processes, which assume that the integration of stimulus evidence is perfect. However, the relationship of these models to the underlying mechanisms is not well understood. In this thesis we have studied the accumulation process in biophysical models with attractor dynamics. In fact, these models can be reduced to a nonlinear diffusion process, which in the case of binary categorizations can be described by a potential with two fixed points. 
Although the standard and biophysical models use different mechanisms, both types of model can explain various experimental results. The first aim of this thesis is to make theoretical predictions that help us distinguish attractor dynamics from the standard models and that can be tested experimentally. We found several such theoretical predictions: 1) attractor models have several integration regimes, ranging from regimes that give more weight to the evidence at the beginning of the stimulus to regimes that give more weight to the evidence at the end of the stimulus, and 2) attractor models show a non-monotonic relation between their accuracy and the magnitude of the stimulus fluctuations. The second aim of this thesis is to assess the existence of attractor dynamics qualitatively and quantitatively. To achieve this, we designed an experiment through which we can systematically vary the magnitude of the stimulus fluctuations. The qualitative analysis of the experimental results is not conclusive as to the existence or non-existence of attractor dynamics. We are developing a tool that will allow us to analyze several mechanisms quantitatively, such as attractor dynamics. Preliminary results show that attractor dynamics, among other neural mechanisms, could help explain the experimental results of at least a substantial fraction of participants.
From time to time humans and animals must respond to a certain stimulus that can be ambiguous. In the old days, before the introduction of the video assistant referee, football referees had a very hard job. One of the most famous plays in football history is the so-called "La Mano de Dios", in which Maradona used his hand to score a goal in the quarter-final match of the 1986 World Cup. Based on what he saw, the referee incorrectly decided that Maradona had not touched the ball with his hand, and Argentina ended up winning the World Cup. Decisions based on external stimuli (in this case visual) are what we call perceptual decision making. In this thesis, we studied how the brain makes perceptual decisions in experimental settings where subjects have to make a categorical decision about a certain feature of the presented stimulus. We studied the case where the stimulus is presented for a certain time controlled by the experimenter. During the stimulus presentation, the subjects have to accumulate evidence and, when the stimulus ends, they need to choose between two possible alternatives. These experiments are typically called 2-alternative forced choice tasks (2AFC). From a computational point of view, the accumulation of stimulus evidence in 2AFC tasks has been studied intensively in recent decades. Canonical approaches to modelling this cognitive function are based on diffusion processes that assume bounded or unbounded perfect stimulus evidence accumulation. However, the relationship of such models with the underlying neural circuitry is unclear. In this thesis, we study the accumulation process in neurobiological models with attractor dynamics. Such models can actually be reduced to a nonlinear diffusion process, which in the case of binary categorizations can be described by a double well potential (DW). 
Despite the fact that the canonical and the neurobiological models rely on different mechanisms, they can account for various behavioural aspects such as performance or reaction time. The first aim of this thesis was to derive behaviourally testable predictions of attractor dynamics during a 2AFC task and compare them with models that assume other kinds of dynamics (e.g. perfect integration). We found two signatures of attractor dynamics that can be tested in behavioural experiments. Specifically, we found that: 1) the DW model has different integration regimes, from transient (primacy) to leaky (recency), as the magnitude of the stimulus fluctuations (σs) or the stimulus duration (T) increases, and 2) the DW model has a non-monotonic relation between accuracy and the magnitude of the stimulus fluctuations. The second aim of this thesis was to qualitatively and quantitatively test the existence of attractor dynamics. To this end, we designed an experiment in which we systematically modified the magnitude of the stimulus fluctuations. Qualitatively, we could not identify obvious signatures of attractor dynamics that would allow us to distinguish between different models. However, we quantitatively assessed attractor dynamics and other plausible neural mechanisms by developing a model-based analysis. Preliminary results suggest that attractor dynamics can be important to explain the behavioural results in at least a fraction of subjects.
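The reduced nonlinear diffusion described in this abstract can be illustrated with a short simulation contrasting a perfect integrator with a double-well drift. The specific drift term x − x³ + μ, the parameter names, and all values below are illustrative assumptions, not the thesis's actual equations.

```python
import math
import random

def simulate_trial(mu, sigma_s, T, dt=0.01, nonlinear=True, seed=0):
    """Euler-Maruyama simulation of one 2AFC trial.
    Perfect integrator:  dx = mu dt + sigma_s dW
    Double-well model:   dx = (x - x**3 + mu) dt + sigma_s dW
    The choice is the sign of x at stimulus offset T."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while t < T:
        drift = (x - x ** 3 + mu) if nonlinear else mu
        x += drift * dt + sigma_s * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return 1 if x > 0 else -1

def accuracy(mu, sigma_s, T, n=200, nonlinear=True):
    """Fraction of trials choosing the alternative favoured by mu > 0."""
    hits = sum(simulate_trial(mu, sigma_s, T, nonlinear=nonlinear, seed=k) == 1
               for k in range(n))
    return hits / n
```

Sweeping sigma_s in accuracy() is one way to probe the predicted non-monotonic dependence of accuracy on the stimulus fluctuations in the attractor model.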
APA, Harvard, Vancouver, ISO, and other styles
19

Singhal, Amod. "An evaluation methodology to ensure the success of decision support tools." Thesis, Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/101149.

Full text
Abstract:
Motivated by the need for an evaluation technique to help decision makers ensure the success of their computer-based decision support tools, this research explores the evaluation of decision aids from a broad organizational and managerial perspective. A review of current research identifies the need for theoretical and practical developments emphasizing: (1) evaluation techniques which can work with partial knowledge about the effect of a decision support tool on management processes, (2) a systematic way to prescribe evaluation techniques for different assessment situations, (3) evaluation techniques which provide a way to transition from one assessment situation to another, and (4) evaluation techniques which recognize that the performance of one decision support tool may depend on other decision aids used by the manager. This study complements existing theoretical research by developing seven conceptual models which identify essential evaluation parameters and their relationships. The first model explores parameters affecting the decision to evaluate. The second and third models examine the role of evaluation in ensuring success. The fourth and fifth models analyze how conclusions about success are made. The sixth model identifies components of an evaluation technique. Finally, the seventh model presents a framework for prescribing evaluation approaches. Using the seven conceptual models and previous research as its theoretical foundation, Evalu-Action, a step-by-step practical technique to ensure the success of computer-based decision support tools, is developed. The technique is pilot tested and improved. Recommendations for further work are presented.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
20

關信堅 and Shun-kin Dennis Kwan. "Multi-criteria decision support using analytic hierarchy process: the case study of project site selection." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31251377.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Shi, Zhenzhen. "A MARKOV DECISION PROCESS EMBEDDED WITH PREDICTIVE MODELING: A MODELING APPROACH FROM SYSTEM DYNAMICS MATHEMATICAL MODELS, AGENT-BASED MODELS TO A CLINICAL DECISION MAKING." Diss., Kansas State University, 2015. http://hdl.handle.net/2097/20578.

Full text
Abstract:
Doctor of Philosophy
Department of Industrial & Manufacturing Systems Engineering
David H. Ben-Arieh
Chih-Hang Wu
Patients who suffer from sepsis or septic shock are of great concern in the healthcare system. Recent data indicate that more than 900,000 severe sepsis or septic shock cases developed in the United States, with mortality rates between 20% and 80%. In the United States alone, almost $17 billion is spent each year on the treatment of patients with sepsis. Clinical trials of treatments for sepsis have been studied extensively in the last 30 years, but there is no general agreement on the effectiveness of the proposed treatments. Therefore, it is necessary to find accurate and effective tools that can help physicians predict the progression of the disease in a patient-specific way, and then give physicians recommendations on the treatment of sepsis that lower the risk of patients dying from it. The goal of this research is to develop a risk assessment tool and a risk management tool for sepsis. In order to achieve this goal, two system dynamics mathematical models (SDMMs) are initially developed to predict dynamic patterns of sepsis progression in innate immunity and adaptive immunity. The two SDMMs are able to identify key indicators and key processes of the inflammatory response to an infection and of sepsis progression. Second, an integrated-mathematical-multi-agent-based model (IMMABM) is developed to capture the stochastic nature embedded in the development of inflammatory responses to sepsis. Unlike existing agent-based models, this agent-based model is enhanced by incorporating the developed SDMMs and extensive experimental data. Building on these risk assessment tools, a Markov decision process (MDP) is proposed as a risk management tool for clinical decision making on sepsis. 
Supported by extensive computational studies, the major contributions of this research are, first, to develop risk assessment tools that identify the risk of sepsis development while the immune system responds to an infection and, second, to propose a decision-making framework for managing the risk of infected individuals dying from sepsis. The methodology and modeling framework used in this dissertation can be extended to other diseases and treatment applications, and have a broad impact on research areas related to computational modeling, biology, medical decision making, and industrial engineering.
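The MDP-based risk management tool described above can be illustrated with a generic value-iteration sketch. The three states, two treatment actions, transition probabilities, and rewards below are entirely hypothetical placeholders, not values from the dissertation.

```python
# Hypothetical three-state, two-treatment sepsis MDP; all numbers are
# illustrative placeholders only.
states = ["stable", "septic", "shock"]
actions = ["standard_care", "aggressive"]
# P[a][s] maps each next state to its transition probability.
P = {
    "standard_care": {
        "stable": {"stable": 0.90, "septic": 0.10, "shock": 0.00},
        "septic": {"stable": 0.30, "septic": 0.50, "shock": 0.20},
        "shock":  {"stable": 0.00, "septic": 0.20, "shock": 0.80},
    },
    "aggressive": {
        "stable": {"stable": 0.85, "septic": 0.15, "shock": 0.00},
        "septic": {"stable": 0.50, "septic": 0.40, "shock": 0.10},
        "shock":  {"stable": 0.10, "septic": 0.40, "shock": 0.50},
    },
}
R = {"stable": 1.0, "septic": -1.0, "shock": -5.0}  # per-period state rewards
gamma = 0.95  # discount factor

def value_iteration(tol=1e-8):
    """Standard value iteration: V(s) = R(s) + gamma * max_a E[V(s')]."""
    V = {s: 0.0 for s in states}
    while True:
        newV = {s: R[s] + gamma * max(sum(p * V[s2] for s2, p in P[a][s].items())
                                      for a in actions)
                for s in states}
        if max(abs(newV[s] - V[s]) for s in states) < tol:
            return newV
        V = newV

def greedy_policy(V):
    """Treatment that maximises expected discounted value in each state."""
    return {s: max(actions,
                   key=lambda a: sum(p * V[s2] for s2, p in P[a][s].items()))
            for s in states}
```

With these toy numbers the greedy policy prefers the aggressive treatment once a patient deteriorates, which is the kind of state-dependent recommendation an MDP-based tool produces.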
APA, Harvard, Vancouver, ISO, and other styles
22

Torchinsky, Raymon Lev. "Individual choice behaviour and urban commuting." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/27552.

Full text
Abstract:
Urban commuting patterns can be viewed as the spatial manifestation of the outcome of labour market processes. Recent theoretical and empirical work investigating urban labour markets has emphasized the role of spatial wage differentials in mediating the interrelationship between labour supply and demand distributions and the dynamics of land-use change. This thesis represents an extension of such research. A simulation approach to commuting modelling, based on the explicit characterization of the interrelationship between urban location and interaction in terms of labour market processes, is developed. The solution path logic of the simulation model is designed to provide normative commuting outcomes, given the spatial pattern of labour supply and demand, under a wide range of assumptions concerning labour market processes and choice-making behaviour of market participants. An explicit characterization of the labour market, based on the specification of an endogenous behavioural assumption set, defines a model version. Thus, the model may be used to test the ability of various behavioural constructs to explain empirical commuting patterns. The justification and internal logic underlying the development of a specific model version is presented. This version is based on the assumption that the decision by a worker to apply for a job is objectively rational, given that the market environment does not provide certainty as to the outcome of an application. It is shown that such choice behaviour is analogous to the game-theoretic mixed strategy solution to non-cooperative games under uncertainty. The algorithm of the operational model incorporating this approach is detailed. The model was tested on empirical commuting patterns derived from Vancouver Census data, and model results were compared with those obtained from a positive entropy-based model. Commuting predictions exhibited a level of accuracy comparable to that achieved by the calibrated entropy model.
Arts, Faculty of
Geography, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
23

Granovskiy, Boris. "Modeling Collective Decision-Making in Animal Groups." Doctoral thesis, Uppsala universitet, Matematiska institutionen, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-180972.

Full text
Abstract:
Many animal groups benefit from making decisions collectively. For example, colonies of many ant species are able to select the best possible nest to move into without every ant needing to visit each available nest site. Similarly, honey bee colonies can focus their foraging resources on the best possible food sources in their environment by sharing information with each other. In the same way, groups of human individuals are often able to make better decisions together than each individual group member can on his or her own. This phenomenon is known as "collective intelligence", or "wisdom of crowds." What unites all these examples is the fact that there is no centralized organization dictating how animal groups make their decisions. Instead, these successful decisions emerge from interactions and information transfer between individual members of the group and between individuals and their environment. In this thesis, I apply mathematical modeling techniques in order to better understand how groups of social animals make important decisions in situations where no single individual has complete information. This thesis consists of five papers, in which I collaborate with biologists and sociologists to simulate the results of their experiments on group decision-making in animals. The goal of the modeling process is to better understand the underlying mechanisms of interaction that allow animal groups to make accurate decisions that are vital to their survival. Mathematical models also allow us to make predictions about collective decisions made by animal groups that have not yet been studied experimentally or that cannot be easily studied. The combination of mathematical modeling and experimentation gives us a better insight into the benefits and drawbacks of collective decision making, and into the variety of mechanisms that are responsible for collective intelligence in animals. 
The models that I use in the thesis include differential equation models, agent-based models, stochastic models, and spatially explicit models. The biological systems studied included foraging honey bee colonies, house-hunting ants, and humans answering trivia questions.
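A minimal example of the kind of differential equation model mentioned above is the following mean-field sketch of a two-option collective choice (e.g., nest-site selection) with discovery, recruitment, and abandonment terms. The functional forms and rates are illustrative assumptions, not the models used in the thesis's papers.

```python
def simulate_recruitment(q1=0.6, q2=0.4, r=2.0, dt=0.01, T=50.0):
    """Euler integration of a mean-field two-option choice model.
    x1, x2: fractions of the group committed to options 1 and 2;
    u = 1 - x1 - x2 is the uncommitted fraction. Individuals discover
    option i at rate q_i, are recruited at rate r * q_i * x_i, and abandon
    a commitment at rate (1 - q_i), so better options are stickier."""
    x1 = x2 = 0.0
    for _ in range(int(round(T / dt))):
        u = 1.0 - x1 - x2
        dx1 = (q1 + r * q1 * x1) * u - (1.0 - q1) * x1
        dx2 = (q2 + r * q2 * x2) * u - (1.0 - q2) * x2
        x1 += dx1 * dt
        x2 += dx2 * dt
    return x1, x2
```

With these defaults the committed fractions settle at roughly a 4:1 split in favour of the better option, even though the quality ratio is only 0.6:0.4, illustrating how positive feedback from recruitment amplifies small quality differences into a clear collective decision.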
APA, Harvard, Vancouver, ISO, and other styles
24

Chavanasporn, Walailuck. "Application of stochastic differential equations and real option theory in investment decision problems." Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/1691.

Full text
Abstract:
This thesis contains a discussion of four problems arising from the application of stochastic differential equations and real option theory to investment decision problems in a continuous-time framework. It is based on four papers written jointly with the author’s supervisor. In the first problem, we study an evolutionary stock market model in a continuous-time framework where uncertainty in dividends is produced by a single Wiener process. The model is an adaptation to a continuous-time framework of a discrete evolutionary stock market model developed by Evstigneev, Hens and Schenk-Hoppé (2006). We consider the case of fix-mix strategies and derive the stochastic differential equations which determine the evolution of the wealth processes of the various market players. The wealth dynamics for various initial set-ups of the market are simulated. In the second problem, we apply an entry-exit model in real option theory to study concessionary agreements between a private company and a state government to run a privatised business or project. The private company can choose the time to enter into the agreement and can also choose the time to exit the agreement if the project becomes unprofitable. An early termination of the agreement by the company might mean that it has to pay a penalty fee to the government. Optimal times for the company to enter and exit the agreement are calculated. The dynamics of the project are assumed to follow either a geometric mean reversion process or geometric Brownian motion. A comparative analysis is provided. Particular emphasis is given to the role of uncertainty and how uncertainty affects the average time that the concessionary agreement is active. The effect of uncertainty is studied by using Monte Carlo simulation. In the third problem, we study numerical methods for solving stochastic optimal control problems which are linear in the control. 
In particular, we investigate methods based on spline functions for solving the two-point boundary value problems that arise from the method of dynamic programming. In the general case, where only the value function and its first derivative are guaranteed to be continuous, piecewise quadratic polynomials are used in the solution. However, under certain conditions, the continuity of the second derivative is also guaranteed. In this case, piecewise cubic polynomials are used in the solution. We show how the computational time and memory requirements of the solution algorithm can be improved by effectively reducing the dimension of the problem. Numerical examples which demonstrate the effectiveness of our method are provided. Lastly, we study the situation where, by partial privatisation, a government gives a private company the opportunity to invest in a government-owned business. After payment of an initial instalment cost, the private company’s investments are assumed to be flexible within a range [0, k] while the investment in the business continues. We model the problem in a real option framework and use a geometric mean reversion process to describe the dynamics of the business. We use the method of dynamic programming to determine the optimal time for the private company to enter and pay the initial instalment cost as well as the optimal dynamic investment strategy that it follows afterwards. Since an analytic solution cannot be obtained for the dynamic programming equations, we use quadratic splines to obtain a numerical solution. Finally we determine the optimal degree of privatisation in our model from the perspective of the government.
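The two project-value dynamics used in the thesis, geometric Brownian motion and geometric mean reversion, can be sketched with a simple Monte Carlo simulation. The GMR parameterisation, parameter names, and the threshold-exit estimate below are illustrative assumptions rather than the thesis's actual model.

```python
import math
import random

def simulate_path(x0, mu, sigma, T=10.0, dt=1 / 252, mean_revert=False,
                  xbar=1.0, eta=0.5, seed=0):
    """One Euler-Maruyama path of a project value X(t).
    GBM:                       dX = mu * X dt + sigma * X dW
    Geometric mean reversion:  dX = eta * (xbar - X) * X dt + sigma * X dW
    (one common GMR form; the thesis may use a different parameterisation)."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(int(round(T / dt))):
        drift = eta * (xbar - x) * x if mean_revert else mu * x
        x += drift * dt + sigma * x * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = max(x, 1e-12)  # keep the value positive
        path.append(x)
    return path

def mean_exit_time(threshold, n=100, **kw):
    """Monte Carlo estimate of the mean time until X first falls below
    the exit threshold (capped at the horizon T if it never does)."""
    dt = kw.get("dt", 1 / 252)
    total = 0.0
    for k in range(n):
        path = simulate_path(seed=k, **kw)
        hit = next((i for i, v in enumerate(path) if v < threshold),
                   len(path) - 1)
        total += hit * dt
    return total / n
```

Comparing mean_exit_time under GBM and GMR for the same volatility is a crude analogue of the thesis's Monte Carlo study of how uncertainty affects the average time a concessionary agreement stays active.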
APA, Harvard, Vancouver, ISO, and other styles
25

Gourtani, Arash Mostajeran. "Stochastic and robust models for optimal decision making in energy." Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/372272/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Stuart, Julie Ann. "A strategic environmentally conscious production decision model." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/24160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Moens, A. Alexander. "The multiple advocacy strategy and the role of the custodian : the Carter years." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/29025.

Full text
Abstract:
The increasing complexity and high stakes of foreign policy decisions, especially of major powers such as the United States, have generated specialized studies of decision making. One approach, called "multiple advocacy," maps a strategy of role tasks and process norms to guide the decision-makers towards an optimal decision-making process. This process allows the President to make an informed policy choice as a result of having heard a variety of options debated freely and openly among his advisors in his presence. A crucial actor in this process is the National Security Advisor. As process manager or "custodian," he must ensure that the key provisions of the strategy are met while abstaining from personal involvement in the substance of policy advice and execution. This thesis examines the internal coherence and usefulness of the strategy. The first two years of the Carter administration provide a close approximation of the strategy. Four important policy issues during this period form the empirical basis of this test: the "Deep Cuts" proposals in SALT II, the war in the Horn of Africa, Sino-American Normalization, and the fall of the Shah of Iran. While the basic principles of the strategy are found useful and sound, several of its provisions are challenged. First, in spite of its claim, the strategy does not produce multiple options when the advisors have no wide divergence of opinion. Second, contrary to the strategy's prescriptions, the custodian can improve the process in such situations by joining the policy debate. Third, custodial engagement in activities such as diplomacy and public speaking need not be prohibited too strictly. Last, the demise of the strategy can be more narrowly defined as the result of custodial disregard for a free flow of information and open participation among the advisors. Though further studies are needed to widen the empirical base, several tentative suggestions are offered to improve the strategy. 
The president must insist on a reasonable range of opinions when appointing advisors. While the National Security Advisor may join the policy debate to widen the range of options, his policy advice should not become the rule. At all times the President must insist that all policy debates among his advisors be brought to his attention, and that all policy options receive a fair hearing.
Arts, Faculty of
Political Science, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
28

Izquierdo, Ángel Cabrera. "A functional analysis of categorization." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/30522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Case, Kelsey Kathryn. "From evidence to practice : the use of mathematical models to inform HIV programme planning and policy decision making." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/60855.

Full text
Abstract:
From early in the HIV epidemic, mathematical models have been used to understand patterns of infection and the potential for spread and can be a valuable tool to help inform strategic decisions. This thesis aims to investigate the use of mathematical models to inform HIV programme planning and policy decision making. This is done by examining key mathematical models used for this purpose, generating recommendations to advance the utility of these models, and investigating their use in the policy environment. Quantitative and qualitative methods from epidemiology, political science and social science are used to provide an integrated global health perspective. Mathematical models are first used to investigate the long-term epidemiological implications of different policy decisions for HIV prevention and treatment in the countries most affected by HIV. Next, they are used at the national level in a country application to produce short-term projections of incidence within the population. The results from the second model are used to frame a discussion which arose at the international level regarding its use and formulates recommendations for improved use. Finally, a descriptive multi-case study investigation is conducted in Malawi and Zambia exploring the use of mathematical models in guiding national policy with respect to HIV interventions. A qualitative approach drawing on principles from grounded theory is used and a theoretical framework is developed to guide and provide structure for the investigations. This framework views research utilisation as a spectrum and considers a range of different types of use across this continuum. This chapter describes the use of modelling within the policy environment, the key stakeholders involved, and identifies the barriers, facilitators and conditions for use of modelling to inform programme planning and decision making. 
Taken together, this thesis progresses from global to local, taking modelling beyond the research arena and into the policy environment.
APA, Harvard, Vancouver, ISO, and other styles
30

Bester, Margarete Joan. "Design of an automated decision support system for scheduling tasks in a generalized job-shop." Thesis, Stellenbosch : Stellenbosch University, 2006. http://hdl.handle.net/10019.1/21734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Sprumont, Yves. "Three essays in collective choice theory." Diss., Virginia Tech, 1990. http://hdl.handle.net/10919/40872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Sgroi, Daniel. "Theories of learning in economics." Thesis, University of Oxford, 2000. http://ora.ox.ac.uk/objects/uuid:b8d832af-57e7-45c2-a846-b69de3d25ec0.

Full text
Abstract:
How should we model learning behaviour in economic agents? This thesis addresses this question in two distinct ways. In the first set of chapters the assumption is that agents learn through the observation of others. They use Bayesian updating which together with specific informational assumptions can generate the problem known as herding with the potential for significant welfare losses. In the final set of chapters the agent is instead modelled as learning by example. Here the agent cannot learn by observing others, but has a pool of experience to fall back on. This allows us to examine how an economic agent will perform if he sees a particular economic situation (or game) for the first time, but has experience of playing related games. The tool used to capture the notion of learning through example is a neural network. Throughout the thesis the central theme is that economic agents will naturally use as much information as they can to help them make decisions. In many cases this should mean they take into consideration others' actions or their own experiences in similar but non-identical situations. Learning throughout the thesis will be rational or boundedly rational in the sense that either the best possible way to learn will be utilized (so players achieve fully rational play, for example, through Bayesian updating), or a suitable local error-minimizing algorithm will be developed (for example, a rule of thumb which optimizes play in a subclass of games, but not in the overall set of possible games). Several themes permeate the whole thesis, including the scope for firms or planners to manipulate the information that is used by agents for their own ends, the role of rules of thumb, and the realism of current theories of learning in economics.
APA, Harvard, Vancouver, ISO, and other styles
33

Phan, Kenny. "Innovation Measurement: a Decision Framework to Determine Innovativeness of a Company." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1017.

Full text
Abstract:
Innovation is one of the most important sources of competitive advantage. It helps a company to fuel the growth of new products and services, sustain incumbents, create new markets, transform industries, and promote the global competitiveness of nations. Because of its importance, companies need to manage innovation. It is very important for a company to be able to measure its innovativeness because one cannot effectively manage without measurement. A good measurement model will help a company to understand its current capability and identify areas that need improvement. In this research a systematic approach was developed for a company to measure its innovativeness. The measurement of innovativeness is based on output indicators. Output indicators are used because they cannot be manipulated. A hierarchical decision model (HDM) was constructed from output indicators. The hierarchy consisted of three levels: innovativeness index, output indicators and sub-factors. Experts' opinions were collected and quantified. A new concept developed by Dr. Dundar Kocaoglu and referred to as "desirability functions" was implemented in this research. Inconsistency of individual experts, disagreement among experts, intraclass correlation coefficients and statistical F-tests were calculated to test the reliability of the experts' judgments. Sensitivity analysis was used to test the sensitivity of the output indicators, which indicated the allowable range of the changes in the output indicators in order to maintain the priority of the sub-factors. The outcome of this research is a decision model/framework that provides an innovativeness index based on readily measurable company output indicators. The model was applied to product innovation in the technology-driven semiconductor industry. Five hypothetical companies were developed to simulate the application of the model/framework. 
The profiles of the hypothetical companies were varied considerably to provide a deeper understanding of the model/framework. Actual data from two major corporations in the semiconductor industry were then used to demonstrate the application of the model. According to the experts, the top three sub-factors to measure the innovativeness of a company are revenue from new products (28%), market share of new products (21%), and products that are new to the world (20%).
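As a sketch of how such a hierarchical index might combine output indicators, the example below uses the three sub-factor weights reported in the abstract and lumps the remaining 31% into a single placeholder bucket. The linear desirability function, the indicator ranges, and the company data are all hypothetical assumptions, not values from the dissertation's model.

```python
# Top three sub-factor weights as reported in the abstract; the residual
# 31% is lumped into a single "other" bucket purely for illustration.
weights = {
    "revenue_new_products": 0.28,
    "market_share_new_products": 0.21,
    "new_to_world_products": 0.20,
    "other": 0.31,
}

def desirability(value, worst, best):
    """Map a raw indicator onto a 0-100 scale (a linear stand-in for the
    expert-elicited desirability curves used in the dissertation)."""
    v = (value - worst) / (best - worst)
    return 100.0 * min(max(v, 0.0), 1.0)

def innovativeness_index(raw, ranges):
    """Weighted sum of desirability scores, keyed by sub-factor."""
    return sum(w * desirability(raw[k], *ranges[k]) for k, w in weights.items())

# Hypothetical company data: raw indicator values and (worst, best) ranges.
company = {"revenue_new_products": 40.0,       # % of revenue from new products
           "market_share_new_products": 15.0,  # % market share of new products
           "new_to_world_products": 10.0,      # % of products new to the world
           "other": 50.0}
ranges = {"revenue_new_products": (0.0, 60.0),
          "market_share_new_products": (0.0, 40.0),
          "new_to_world_products": (0.0, 30.0),
          "other": (0.0, 100.0)}
```

Scoring several such profiles side by side mirrors the dissertation's comparison of hypothetical semiconductor companies on a common 0-100 innovativeness scale.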
APA, Harvard, Vancouver, ISO, and other styles
34

Warier, Prashant. "Dynamic Decision Support for Regional LTL Carriers." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16227.

Full text
Abstract:
This thesis focuses on decision support for regional LTL carriers. The basic operating characteristics of regional LTL carriers are similar to those of national LTL carriers, i.e., they operate linehaul networks with satellites, breakbulks, and relays to consolidate freight so as to be able to cost-effectively serve their customers. However, there are also key differences. Most importantly, because the area covered by a regional carrier is smaller, a regional carrier handles less freight (sometimes significantly less) and therefore typically has fewer consolidation opportunities, which results in higher handling and transportation costs per unit of freight. Consequently, competing with national carriers on price is difficult. Therefore, to gain or maintain market share, regional carriers have to provide better service. To be able to provide better service, regional carriers have to be more dynamic, e.g., they have to be able to deviate from their load plan when appropriate, which creates challenges for decision makers. Regional carriers deliver about 60% of their shipments within a day and almost all of their shipments within two days. Furthermore, most drivers get back to their domicile at the end of each day. Therefore, the focus of the thesis is the development of effective and efficient decision models supporting daily operations of regional LTL carriers which provide excellent service at low cost. This thesis presents an effective solution approach based on two optimization models: a dynamic load planning model and a driver assignment model. The dynamic load planning model consists of two parts: an integer program to generate the best paths for daily origin-destination freight volumes and an integer program to pack freight into trailers and trailers into loads, and to determine dispatch times for these loads. Techniques to solve these integer programs efficiently are discussed in detail. 
The driver assignment model is solved in multiple stages, each stage requiring the solution of a set packing model in which columns represent driver duties; each stage determines the admissible driver duties. The quality and efficiency of the solution approach are demonstrated through a computational study with real-life data from one of the largest regional LTL carriers in the country. An important "technique" for reducing driver requirements is the use of meet-and-turn operations. A basic meet-and-turn operation involves two drivers meeting at a location between terminals and exchanging trucks. A parking lot or a rest area suffices as a meet-and-turn location. This ensures that drivers return to the terminal where they started. More sophisticated meet-and-turn operations, often called drop-and-hook operations, also exist. In this case, drivers exchange not their trucks but one of their trailers. The motivation here is not to get drivers back to their domicile, but to reduce load-miles. The thesis presents analytical results quantifying the maximum benefits of meet-and-turn operations and optimization techniques for identifying profitable meet-and-turn opportunities.
APA, Harvard, Vancouver, ISO, and other styles
35

Ozdoglar, Mehmet Rasit. "Assessment Of Criteria-rich Rankings For Decision Makers." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611509/index.pdf.

Full text
Abstract:
Environmental policymaking is a difficult issue for governments. It is desirable to base decisions on the results of quantitative and analytical studies. On the other hand, by their very nature, many such decisions have political aspects whose subtleties are difficult to capture by quantitative approaches alone. It is left to the political establishments to decide how best to allocate the efforts to improve environmental conditions. In this respect, evaluating countries by generating environmental indices and subsequently ranking the countries with respect to those indices is a common practice. Perhaps the best known environmental sustainability index, the Environmental Performance Index-2008 (EPI-2008), is a composite index that comprises 6 core policy categories and 25 indicators. While recognizing the qualitative aspects of such decision making, we develop analytical tools to support and guide the policymaking process. We carefully delineate our models to be limited only to the provable quantitative properties of the available objective data. However, such data are processed into more meaningful statements concerning the available options. Specifically, using EPI-2008, meaningful mathematical models that shed further light onto country sustainability measures are developed.
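As a rough illustration of how a composite index such as EPI-2008 turns indicator values into a country ranking (the countries, indicator values, and weights below are invented, not the EPI-2008 data):

```python
# Hypothetical mini composite index: each country's indicator values
# (already rescaled to 0-100) are aggregated with fixed weights and the
# countries are ranked by the resulting score.
indicators = {          # country: (air quality, water, emissions)
    "A": (80, 60, 70),
    "B": (55, 90, 65),
    "C": (70, 75, 90),
}
weights = (0.5, 0.25, 0.25)   # assumed weights, must sum to 1

scores = {c: sum(w * v for w, v in zip(weights, vals))
          for c, vals in indicators.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)                # highest composite score first -> ['C', 'A', 'B']
```

The thesis's contribution lies in what can be proven about such rankings from the objective data alone, rather than in the aggregation arithmetic itself.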
APA, Harvard, Vancouver, ISO, and other styles
36

Thompson, Stephanie C. "Rational design theory: a decision-based foundation for studying design methods." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39490.

Full text
Abstract:
While design theories provide a foundation for representing and reasoning about design methods, existing design theories do not explicitly include uncertainty considerations or recognize tradeoffs between the design artifact and the design process. These limitations prevent the existing theories from adequately describing and explaining observed or proposed design methods. In this thesis, Rational Design Theory is introduced as a normative theoretical framework for evaluating prescriptive design methods. This new theory is based on a two-level perspective of design decisions in which the interactions between the artifact and the design process decisions are considered. Rational Design Theory consists of normative decision theory applied to design process decisions, and is complemented by a decision-theory-inspired conceptual model of design. The application of decision analysis to design process decisions provides a structured framework for the qualitative and quantitative evaluation of design methods. The qualitative evaluation capabilities are demonstrated in a review of the systematic design method of Pahl and Beitz. The quantitative evaluation capabilities are demonstrated in two example problems. In these two quantitative examples, Value of Information analysis is investigated as a strategy for deciding when to perform an analysis to gather additional information in support of a choice between two design concepts. Both quantitative examples demonstrate that Value of Information achieves very good results when compared to a more comprehensive decision analysis that allows for a sequence of analyses to be performed.
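The Value of Information strategy in the quantitative examples can be sketched with a toy two-concept, two-state decision (all payoffs and probabilities invented): an analysis that would reveal the true state is worth performing only if its cost is below the expected value of perfect information (EVPI).

```python
# Toy EVPI calculation for a choice between two design concepts under
# two equally likely states of the world. Payoffs are illustrative.
p = {"good": 0.5, "bad": 0.5}
payoff = {
    "concept1": {"good": 100, "bad": 20},
    "concept2": {"good": 60, "bad": 50},
}

# Best expected payoff when choosing without extra information:
ev_no_info = max(sum(p[s] * payoff[c][s] for s in p) for c in payoff)
# Expected payoff if the state were revealed before choosing:
ev_perfect = sum(p[s] * max(payoff[c][s] for c in payoff) for s in p)
evpi = ev_perfect - ev_no_info
print(ev_no_info, ev_perfect, evpi)   # -> 60.0 75.0 15.0
```

In this sketch any analysis costing more than 15 should be skipped, which is the kind of design-process decision Rational Design Theory formalizes.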
APA, Harvard, Vancouver, ISO, and other styles
37

Lee, Chanjoo. "Analysis of decision-making in closed-loop supply chains." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44925.

Full text
Abstract:
Closed-loop supply chains (CLSCs) that integrate the activities for reclaiming residual values in postconsumer products with the traditional forward supply chain activities are important from financial and environmental perspectives. This thesis develops models and analyses on three topics novel to the field of CLSC research with a goal of advancing knowledge about effective decision-making in CLSCs. In the first part of the thesis, we study joint control of stochastic forward and stochastic reverse material flows in CLSCs. With an application to a CLSC where postconsumer products are collected for warranty service purposes, we demonstrate that the benefit of coordinating two production activities could be significant. We develop a model that can be used to obtain an effective inventory control policy for coordinating forward and reverse material flows. Through Monte Carlo simulation and global sensitivity analysis, we identify major influential factors that affect the system's warranty cost savings performance. The results indicate that joint control of forward and reverse material flows greatly improves warranty cost savings performance as well as the system's robustness to uncertainties. The second part of the thesis develops a differential game model for characterizing decentralized time-varying competitive decision-making in a CLSC. The differential game model is particularly useful for studying time-varying interactive decision-making in CLSCs that involve many stakeholders who pursue different objectives in forward and reverse production activities. We identify optimal prices and production strategies that evolve over time under fluctuating market demand. Also, the model provides a quantitative scheme that can be used to obtain an efficient apportionment of product recovery processes.
The third part of the thesis describes the relationship among consumers' risk-aversion, product cannibalization of new products by remanufactured products, and growth of CLSCs through price optimization models. Whereas price is one of the most effective variables for managing market demand, previous CLSC research has mainly focused on operational problems without paying much attention to the interface between CLSCs and markets. We develop models that jointly determine optimal prices in forward and reverse channels considering consumers' willingness-to-pay (WTP) for remanufactured products, consumers' willingness-to-accept (WTA) for a buyback price, and consumers' risk aversion to uncertain quality perceptions. The results show that consumers' active participation in a CLSC is an important factor for the viability and growth of a CLSC. Also, we show that companies can benefit from product remanufacturing although it may be accompanied by product cannibalization.
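The joint pricing of new and remanufactured products under cannibalization can be sketched as a toy grid search. The linear demand curves and every coefficient below are invented; the thesis's models are far richer, incorporating WTP, WTA, and risk aversion:

```python
# Illustrative joint pricing: demand for each product falls in its own
# price and rises in the other's (the cross term captures
# cannibalization). Profit is maximized by brute-force grid search.
def profit(pn, pr):
    dn = max(0.0, 100 - 1.0 * pn + 0.4 * pr)   # new-product demand
    dr = max(0.0, 60 - 1.2 * pr + 0.3 * pn)    # remanufactured demand
    return (pn - 40) * dn + (pr - 15) * dr     # unit costs 40 and 15

best = max(((profit(pn, pr), pn, pr)
            for pn in range(40, 121)            # candidate new prices
            for pr in range(15, 81)),           # candidate reman prices
           key=lambda t: t[0])
print(best)    # (best profit, best new price, best reman price)
```

Even in this sketch the optimal prices are interdependent: raising the remanufactured price shifts some demand back to the new product, which is the cannibalization trade-off the thesis analyzes.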
APA, Harvard, Vancouver, ISO, and other styles
38

SERPA, FLAVIA GARCIA. "MATHEMATICAL MODEL TO SUPPORT DECISION MAKING IN THE PURCHASE AND DISTRIBUTION OF FLOWLINES AND UMBILICALS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=21206@1.

Full text
Abstract:
Offshore oil exploration and production (E&P) activity in Brazil has grown dramatically in recent years and, with PETROBRAS's discovery of the pre-salt cluster in the Santos Basin, an even greater increase is expected. Within the E&P production chain, a fundamental step in guaranteeing oil production is the connection of the subsea equipment to the Stationary Production Units. This connection is made through pipelines (rigid or flexible) and umbilicals, whose supplier market is quite restricted. For these pipelines and umbilicals to be transported to the well location, onshore bases are required, acting as distribution centers in the logistics chain. This work presents a Mixed Integer Linear Programming model whose objective is to support decisions on the purchase and distribution of flexible pipelines and umbilicals, indicating, for each project demand, which factory and which base should be used. The model also allows an analysis of the influence of an increase in production capacity in Brazil on logistics costs and domestic manufacturing. As a result of this analysis, for example, it can be observed that, absent any increase in Brazilian production capacity beyond what is already planned, purchases in the domestic market in the period 2013 to 2016 will represent around 70 per cent of demand.
Offshore oil exploration and production activities in Brazil have grown enormously in recent years and, following Petrobras's discovery of pre-salt accumulations in the Santos Basin, an even greater rise is expected. Inside the E&P production chain, a fundamental step to achieve oil production is the connection of the underwater equipment to the stationary production units. This connection is made through the use of flowlines (both flexible and rigid) and umbilicals, which have a highly restricted vendor market. In order to transport these flowlines and umbilicals to the well location, the use of an onshore base is required, acting as a distribution center in the logistics chain. This paper presents a Mixed Integer Linear Programming model which aims to support the decision-making process in the purchase and distribution of flowlines and umbilicals, indicating, for each project's demand, which factory and which onshore base shall be used. The model also allows an analysis of the influence of an increase in production capacity on logistics costs and national manufacturing. As a result, for example, the national market should account for 70 per cent of the demand between the years 2013 and 2016.
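A drastically simplified version of such a purchase-and-distribution decision can be sketched by brute force. Two projects, two factories, two bases, and all costs and capacities below are invented; the thesis formulates the real problem as a mixed-integer linear program:

```python
from itertools import product

# Toy sourcing decision: each project's flowline demand is assigned to
# one (factory, base) pair; factories have limited capacity; we minimize
# total purchase-plus-logistics cost by enumerating assignments.
demand = {"P1": 30, "P2": 50}                 # demand per project (km)
cap = {"F_BR": 60, "F_INT": 100}              # factory capacities (km)
cost = {                                      # (factory, base): cost per km
    ("F_BR", "B1"): 10, ("F_BR", "B2"): 12,
    ("F_INT", "B1"): 14, ("F_INT", "B2"): 11,
}

options = list(cost)
best = None
for assign in product(options, repeat=len(demand)):
    used = {}
    for (proj, d), (fac, base) in zip(demand.items(), assign):
        used[fac] = used.get(fac, 0) + d
    if any(used.get(f, 0) > cap[f] for f in cap):
        continue                              # factory capacity violated
    total = sum(cost[fb] * d for (proj, d), fb in zip(demand.items(), assign))
    if best is None or total < best[0]:
        best = (total, dict(zip(demand, assign)))
print(best)
```

With these numbers the capacity constraint forces the larger project to the domestic factory and the smaller one abroad, which is exactly the kind of trade-off between national capacity and logistics cost the model explores.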
APA, Harvard, Vancouver, ISO, and other styles
39

Weingartner, Stephan G. "System development : an algorithmic approach." Virtual Press, 1987. http://liblink.bsu.edu/uhtbin/catkey/483077.

Full text
Abstract:
The subject chosen for this thesis project is developing an algorithm or methodology for system selection. The specific problem studied involves a procedure to determine which computer system alternative is the best choice for a given user situation. The general problem to be addressed is the need to choose computing hardware, software, systems, or services in a logical approach from a user perspective, considering cost, performance, and human factors. Most existing methods consider only cost and performance factors, combining these factors in ad hoc, subjective fashions to reach a selection decision. By not considering factors that measure the effectiveness and functionality of computer services for a user, existing methods ignore some of the most important measures of value to the user. In this work, a systematic and comprehensive approach to computer system selection has been developed, along with methods for selecting and organizing various criteria. Ways to assess the importance and value of different service attributes to an end-user are also discussed. Finally, the feasibility of a systematic approach to computer system selection has been demonstrated by establishing a general methodology and proving it through a demonstration of a specific application.
APA, Harvard, Vancouver, ISO, and other styles
40

Juszczuk, Agnieszka Beata, and Evgeniya Tkacheva. "Revision Moment for the Retail Decision-Making System." Thesis, Högskolan i Halmstad, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-6191.

Full text
Abstract:
In this work we address the problems of loan origination decision-making systems. In accordance with the basic principles of the loan origination process, we consider the main rules for estimating a client's parameters, a change-point problem for the given data, and a disorder-moment detection problem for real-time observations. In the first part of the work the main principles of parameter estimation are given. The change-point problem is also considered for a given sample in discrete and continuous time using the maximum likelihood method. In the second part of the work the disorder-moment detection problem for real-time observations is treated as a disorder problem for a non-homogeneous Poisson process. The corresponding optimal stopping problem is reduced to a free-boundary problem with a complete analytical solution for the case when the intensity of defaults increases. Thereafter a scheme for the real-time detection of a disorder moment is given.
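The maximum-likelihood change-point idea for a sample of default indicators can be sketched as follows (the 0/1 sequence is invented): each candidate split point is scored by the sum of the log-likelihoods of the two segments under their own estimated default rates, and the best-scoring split is the estimate.

```python
from math import log

# MLE change-point estimate for a 0/1 default sequence: before index k
# the default rate is p1, from k on it is p2; scan all k and keep the
# split with the highest total log-likelihood.
x = [0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1]

def loglik(seg):
    n, s = len(seg), sum(seg)
    if s in (0, n):              # degenerate rate 0 or 1
        return 0.0
    p = s / n                    # MLE of the segment's default rate
    return s * log(p) + (n - s) * log(1 - p)

k_hat = max(range(1, len(x)),
            key=lambda k: loglik(x[:k]) + loglik(x[k:]))
print(k_hat)                     # -> 6 (rate jumps from 1/6 to 5/6)
```

This is the fixed-sample (offline) version; the sequential Poisson disorder problem in the thesis detects the change in real time via optimal stopping.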
APA, Harvard, Vancouver, ISO, and other styles
41

Mousumi, Fouzia Ashraf. "Exploiting the probability of observation for efficient Bayesian network inference." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, 2013. http://hdl.handle.net/10133/3457.

Full text
Abstract:
It is well-known that the observation of a variable in a Bayesian network can affect the effective connectivity of the network, which in turn affects the efficiency of inference. Unfortunately, the observed variables may not be known until runtime, which limits the amount of compile-time optimization that can be done in this regard. This thesis considers how to improve inference when users know the likelihood of a variable being observed. It demonstrates how these probabilities of observation can be exploited to improve existing heuristics for choosing elimination orderings for inference. Empirical tests over a set of benchmark networks using the Variable Elimination algorithm show reductions of up to 50% and 70% in multiplications and summations, as well as runtime reductions of up to 55%. Similarly, tests using the Elimination Tree algorithm show reductions by as much as 64%, 55%, and 50% in recursive calls, total cache size, and runtime, respectively.
xi, 88 leaves : ill. ; 29 cm
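One way to exploit observation probabilities, in the spirit of the thesis, is to discount a variable's elimination cost by how likely it and its neighbors are to be observed, since observed variables break connectivity. The greedy heuristic below is a hedged sketch with an invented graph, domain sizes, and probabilities, not the thesis's exact method:

```python
# Greedy elimination ordering on a moral graph where each variable's
# expected clique weight is discounted by observation probabilities.
neighbors = {
    "A": {"B", "C"}, "B": {"A", "D"},
    "C": {"A", "D"}, "D": {"B", "C", "E"}, "E": {"D"},
}
domain = {v: 2 for v in neighbors}            # all binary, for simplicity
p_obs = {"A": 0.1, "B": 0.9, "C": 0.2, "D": 0.1, "E": 0.8}

def weight(v, adj):
    # expected weight of the clique formed by eliminating v: product of
    # effective domain sizes of v and its (possibly observed) neighbors
    w = domain[v]
    for u in adj[v]:
        w *= 1 + (1 - p_obs[u]) * (domain[u] - 1)
    return (1 - p_obs[v]) * w

adj = {v: set(ns) for v, ns in neighbors.items()}
order = []
while adj:
    v = min(adj, key=lambda u: weight(u, adj))
    order.append(v)
    for u in adj[v]:                          # connect v's neighbors
        adj[u] |= adj[v] - {u, v}
        adj[u].discard(v)
    del adj[v]
print(order)
```

Here B and E, the variables most likely to be observed, are eliminated first; a standard min-weight heuristic that ignores `p_obs` would treat all five variables identically.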
APA, Harvard, Vancouver, ISO, and other styles
42

Kim, Joocheol. "Stochastic programming approach to asset liability management under uncertainty." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/25324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Zhao, Lihua Built Environment Faculty of Built Environment UNSW. "The integration of geographical information systems and multicriteria decision making models for the analysis of branch bank closures." Awarded by:University of New South Wales. School of Built Environment, 2002. http://handle.unsw.edu.au/1959.4/33239.

Full text
Abstract:
The research presented in this Thesis is primarily concerned with the field of Geographical Information Systems (GIS) - specifically, the business applications of the technology. The empirical problem addressed is the selection of branch banks as candidates for closure using the network of branch banks of the Commonwealth Bank of Australia in the Sydney metropolitan region as the case study. Decisions to close branches are made by the Bank on the basis of performance indicators that are essentially financial. In this research, however, an alternative approach is adopted: the problem is addressed using a set of spatial criteria. Following the deregulation of the finance industry in the 1980's and the rapid introduction of new electronic channels for delivering financial services, the major banking institutions have been engaged in a process of reorganising their networks of branch banks. The most visible manifestation of this has been the ongoing and widespread closure of branches. Selecting branch banks for closure is a typical example of a complex semi-structured multi-dimensional, multi-criteria, decision-making problem. It has been well documented in previous research that Multi-Criteria Decision-Making (MCDM) models are the most appropriate ones for solving problems in this particular domain. The identification of branches for closure is also characterised by a significant spatial dimension. Decisions are based on a consideration of a number of geographical criteria and various forms of spatial analysis may be involved. An appropriate technology to assist with solving decision-making problems with a significant spatial dimension is a Spatial Decision Support System (SDSS). Most SDSS have been based on the integration of Geographical Information Systems (GIS) technology with analytical models that are proven to be best suited to specific decision-making problems and this is the approach adopted in this research. 
The prototype MCBC-SDSS (Multi-Criteria Branch Closure SDSS) developed here is based on the integration through the loose coupling of the ArcView GIS software with the Criterium DecisionPlus (CDP) software, which contains the suite of non-spatial analytical models that provide the analytical capability for solving multi-criteria problems. ArcView GIS is used as the engine that drives the system and to provide the analytical and display facilities to support the spatial data involved. Two MCDM models from the CDP software are used to support the decision-making analysis - the Analytical Hierarchy Process (AHP) and Simple Multi-Attribute Rating Technique (SMART). The integration of GIS with the MCDM models is based on a considerable amount of software enhancement, interface development, and computer programming. The development of the integrated system is designed to create an intelligent and user-friendly SDSS, the application of which, from the user's perspective, is a seamless operation. The success of the MCBC-SDSS is demonstrated by its application to identify candidates for closure among the 197 branches of the CBA in the Sydney metropolitan area in 2000 - the year when the building of the database for the research had been completed. The analysis is based purely on spatial considerations that have been gleaned from a major review of the literature that previous researchers have identified as affecting branch viability and performance. A set of 17 spatial variables was used as the criteria in the MCDM models. The criteria are organised in two blocks: the first includes 9 criteria relating to the characteristics of demand for branch service in the branch trade areas ('catchment area' specific criteria) while the second includes 8 criteria relating to aspects of supply provided by the existing branches in their location ('location specific' criteria). 
Using the developed approach, the MCBC-SDSS has been used directly to compare alternatives against criteria, not only spatial based but also financial ones, thus providing a basis for identifying the best choices regarding branch closure. The steps in the preparation of the data and the iterative procedure for implementing the MCDM models are explained and illustrated. This involves building the initial evaluation matrix, normalising the raw criteria scores, assigning weights to the criteria, and calculating priorities. Based on these, the AHP and SMART models then calculate a decision score for each branch that is used as the basis for creating the preference ranking of the branches. In this, branches with a high rank score based on the combined weighted contribution of the 17 criteria are considered to be operationally viable. On the other hand, branches with the lowest rank scores are considered as potential candidates for closure. The preference rankings generated by the models have been tested to examine their robustness in terms of the validity of criteria and their weights used in the decision analysis. Sensitivity analysis has been conducted, the results of which show that the preference rankings are stable. Different approaches have been used to validate the initial criteria, and analyse their contribution to the ranking of branch banks for closure. These help identify critical spatial variables among the 17 initial criteria selected, and suggest that some of the criteria initially selected could be deleted from the criteria list used to generate the preference rankings without substantially affecting the results. The reasonableness of the resulting preference ranking has been further demonstrated from analyses based on changing criteria weights and alternatives. 
The research successfully demonstrates one of the ways of enhancing the functionality of a GIS through its integration with non-spatial analytical models to develop an SDSS to aid in solving decision-making problems in the selected domain. Given that to date there have been relatively few applications of SDSS similar to that developed in this research to real-world decision-making problems, the procedure adopted makes it suitable for decision-making in a range of other service business applications characterised by a significant spatial dimension and multiple outlets, including shopping centres, motor car dealerships, restaurant and supermarket chains. Instead of just providing solutions, however, the SDSS-based analysis in this research can better be thought of as adding value to spatial data that forms an important source of information required by decision-makers, providing insight about the situation, uncertainty, objectives, and trade-offs involved in reaching decisions, and being capable of generating alternative scenarios based on different inputs to the models that may be used to identify recommended courses of action. It can lead to better and more effective decision-making in institutions involving multi-outlet retail and service businesses and hence enables both integrated data analysis and modelling while taking multiple criteria and decision-makers' preferences into consideration.
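The AHP step of deriving criterion weights from a pairwise comparison matrix can be sketched with a hypothetical 3-criterion matrix (the comparison values below are invented, not the 17-criterion matrix of the thesis): the priority vector is the normalized principal eigenvector, approximated here by power iteration.

```python
# AHP priority weights from a pairwise comparison matrix.
# A[i][j] states how much more important criterion i is than j;
# reciprocity holds: A[j][i] = 1 / A[i][j].
A = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]

w = [1 / 3] * 3
for _ in range(50):                       # power iteration
    w = [sum(A[i][j] * w[j] for j in range(3)) for i in range(3)]
    s = sum(w)
    w = [x / s for x in w]                # renormalize to sum to 1
print([round(x, 3) for x in w])           # -> [0.648, 0.23, 0.122]
```

These weights would then multiply the normalized criterion scores of each branch, exactly as the weighted combination described above, before the branches are ranked.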
APA, Harvard, Vancouver, ISO, and other styles
44

Yao, Yufeng. "Topics in Fractional Airlines." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14563.

Full text
Abstract:
Fractional aircraft ownership programs offer companies and individuals all the benefits of owning a private jet, such as safety, consistency, and guaranteed availability, at a fraction of the cost of owning an aircraft. In the fractional ownership model, the partial owners of an aircraft are entitled to a certain number of hours per year, and the management company is responsible for all the operational considerations and for making sure an aircraft is available to the owners at the requested time and location. This thesis research proposes advanced optimization techniques to help the management company optimally operate its available resources and provides tools for strategic decision making. The contributions of this thesis are: (i) The development of optimization methodologies to assign and schedule aircraft and crews so that all flight requests are covered at the lowest possible cost. First, a simple model is developed to solve the crew pairing and aircraft routing problem with column generation, assuming that a crew stays with one specific aircraft during its duty period. Secondly, this assumption is partially relaxed to improve resource utilization by revising the simple model to allow a crew to use another aircraft when its original aircraft undergoes long maintenance. Thirdly, a new comprehensive model utilizing the Benders decomposition technique and a fleet-station time line is proposed to completely relax the assumption that a crew stays with one specific aircraft. It combines the fleet assignment, aircraft routing, and crew pairing problems. In the proposed methodologies, real world details are taken into consideration, such as crew transportation and overtime costs, scheduled and unscheduled maintenance effects, crew rules, and the presence of non-crew-compatible fleets. Scheduling with time windows is also discussed. (ii) The analysis of operational strategies to provide decision making support.
Scenario analyses are performed to provide insights on improving business profitability and aircraft availability, such as the impact of aircraft maintenance, crew swapping, the effect of increasing demand through Jet-card sales and geographical business expansion, the size of the company-owned fleet, and strategies to deal with the stochastic nature of unscheduled maintenance and demand.
APA, Harvard, Vancouver, ISO, and other styles
45

Duan, Chunming. "A unified decision analysis framework for robust system design evaluation in the face of uncertainty." Diss., This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-06062008-170155/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Mathur, Kush. "Mathematical Models and Genetic Algorithm Approaches to Simultaneously Perform Workforce Overtime Capacity Planning and Schedule Cells." Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1351306927.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Shen, Yunxiang. "Risk analysis and its application in mining project evaluation." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=64009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Payan, Alexia Paule Marie-Renee. "Enabling methods for the design and optimization of detection architectures." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47688.

Full text
Abstract:
The surveillance of geographic borders and critical infrastructures using limited sensor capability has always been a challenging task in many homeland security applications. While geographic borders may be very long and may go through isolated areas, critical assets may be large and numerous and may be located in highly populated areas. As a result, it is virtually impossible to secure each and every mile of border around the country, and each and every critical infrastructure inside the country. Most often, a compromise must be made between the percentage of border or critical assets covered by surveillance systems and the induced cost. Although threats to homeland security can be conceived to take place in many forms, those involving illegal penetration of the air, land, and maritime domains under the cover of day-to-day activities have been identified as of particular interest. For instance, the proliferation of drug smuggling, illegal immigration, international organized crime, resource exploitation, and, more recently, modern piracy requires the strengthening of land border and maritime awareness in increasingly complex and challenging national security environments. The complexity and challenges associated with the above mission and with the protection of the homeland explain why a methodology enabling the design and optimization of distributed detection system architectures, able to provide accurate scanning of the air, land, and maritime domains in a specific geographic and climatic environment, is a central concern for the defense and protection community. This thesis proposes a methodology aimed at addressing the aforementioned gaps and challenges. The methodology reformulates the problem in clear terms so as to facilitate the subsequent modeling and simulation of potential operational scenarios.
The needs and challenges involved in the proposed study are investigated and a detailed description of a multidisciplinary strategy for the design and optimization of detection architectures in terms of detection performance and cost is provided. This implies the creation of a framework for the modeling and simulation of notional scenarios, as well as the development of improved methods for accurate optimization of detection architectures. More precisely, the present thesis describes a new approach to determining detection architectures able to provide effective coverage of a given geographical environment at a minimum cost, by optimizing the appropriate number, types, and locations of surveillance and detection systems. The objective of the optimization is twofold. First, given the topography of the terrain under study, several promising locations are determined for each sensor system based on the percentage of terrain it is covering. Second, architectures of sensor systems able to effectively cover large percentages of the terrain at minimal costs are determined by optimizing the number, types and locations of each detection system in the architecture. To do so, a modified Genetic Algorithm and a modified Particle Swarm Optimization are investigated and their ability to provide consistent results is compared. Ultimately, the modified Particle Swarm Optimization algorithm is used to obtain a Pareto frontier of detection architectures able to satisfy varying customer preferences on coverage performance and related cost.
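The final Pareto-frontier step can be illustrated by filtering a handful of invented (coverage, cost) candidate architectures: an architecture survives if no alternative offers at least as much coverage at no greater cost, with at least one strict improvement.

```python
# Pareto filtering of candidate detection architectures, each scored by
# (terrain coverage %, cost). All values are invented for illustration.
archs = {
    "A1": (95, 120), "A2": (90, 80), "A3": (85, 90),
    "A4": (70, 40),  "A5": (60, 45),
}

def dominated(a, b):
    # True if architecture b dominates a: >= coverage, <= cost, one strict
    (ca, xa), (cb, xb) = archs[a], archs[b]
    return cb >= ca and xb <= xa and (cb > ca or xb < xa)

pareto = sorted(a for a in archs
                if not any(dominated(a, b) for b in archs if b != a))
print(pareto)    # -> ['A1', 'A2', 'A4']
```

A3 and A5 are dominated (A2 covers more for less than A3; A4 beats A5 on both axes), leaving a frontier from which a customer can trade coverage against cost, as in the Particle Swarm Optimization results described above.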
APA, Harvard, Vancouver, ISO, and other styles
49

Dufresne, Stephane. "A hierarchical modeling methodology for the definition and selection of requirements." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24755.

Full text
Abstract:
Thesis (Ph.D.)--Aerospace Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Mavris, Dimitri; Committee Member: Bishop, Carlee; Committee Member: Costello, Mark; Committee Member: Nickol, Craig; Committee Member: Schrage, Daniel
APA, Harvard, Vancouver, ISO, and other styles
50

Gust, Jeffrey Allen. "Assessment centers and group decision making: Substituting the arithmetic mean for the traditional consensus discussion." CSUSB ScholarWorks, 1998. https://scholarworks.lib.csusb.edu/etd-project/1813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
