
Theses on the topic « Decision refinement »

Create an accurate citation in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the 18 best theses for your research on the topic « Decision refinement ».

Next to each source in the list of references there is an « Add to bibliography » button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Aphale, Mukta S. « Intelligent agent support for policy authoring and refinement ». Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=225826.

Full text
Abstract:
A policy (or norm) can be defined as a guideline stating what is allowed, what is forbidden and what is obligatory for an entity in a given situation, so that an acceptable outcome is achieved. Policies occur in many types of settings, from loose social networks of individuals to highly structured institutions. It is important, however, for policies to be consistent and to support the goals of the organisations they govern. This requires a thorough understanding of the implications of introducing specific policies and of how they interact. It is difficult, even for experts, to write consistent, unambiguous and accurate policies, and conflicts are practically unavoidable. At the same time, conflicts vary in significance: some are more likely to occur, some may lead to high costs for goal achievement, and some may create severe obstacles to achieving goals. Such conflicts are the most significant for the domain and goals of the organisation, so the resolution of conflicts that clears obstacles to goal achievement and maximises the benefits received must be prioritised. In order to resolve conflicts and refine policies, it is crucial to understand the implications of policies, conflicts and resolutions in terms of goal achievement and benefits to the organisation. A huge number of policies and conflicts can arise within any organisation, and human decision makers are likely to be cognitively overloaded, making it difficult for them to decide which conflicts to prioritise in order to achieve goals successfully while maximising benefits. Automated reasoning mechanisms can effectively support human decision makers in this process. In this thesis, we address the problem of developing effective automated reasoning support for the detection and resolution of conflicts between plans (to achieve a given goal) and policies. We also present an empirical evaluation of a model of conflict detection and prioritisation through experiments with human users. Our empirical evaluations demonstrate that providing guidance to users on which conflicts to prioritise, and highlighting related conflicts, leads to higher quality outcomes, with goals achieved successfully and rapidly.
2

Ramachandran, Sowmya. « Theory refinement of Bayesian networks with hidden variables / ». Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
3

Mazzarella, Fabio. « The Unlucky broker ». Doctoral thesis, Universita degli studi di Salerno, 2012. http://hdl.handle.net/10556/365.

Full text
Abstract:
2010 - 2011
This dissertation collects the results of work on the interpretation, characterization and quantification of a novel topic in the field of detection theory, the Unlucky Broker problem, and its asymptotic extension. The same problem can also be applied in the context of Wireless Sensor Networks (WSNs). Suppose that a WSN is engaged in a binary detection task. Each node of the system collects measurements about the state of nature (H0 or H1) to be discovered. A common fusion center receives the observations from the sensors and implements an optimal test (for example, in the Bayesian sense), exploiting its knowledge of the a priori probabilities of the hypotheses. Later, the priors used in the test are revealed to be inaccurate and a refined pair is made available. Unfortunately, at that time, only a subset of the original data is still available, along with the original decision. In the thesis, we formulate the problem in statistical terms and consider a system made of n sensors engaged in a binary detection task. A successive reduction of the data set's cardinality occurs and multiple refinements are required. The sensors are devices programmed to take the decision from the previous node in the chain and the available data, implement some simple test to decide between the hypotheses, and forward the resulting decision to the next node. The first part of the thesis shows that the optimal test is very difficult to implement even with only two nodes (the unlucky broker problem), because of the strong correlation between the available data and the decision coming from the previous node. Then, to make the designed detector implementable in practice and to ensure analytical tractability, we consider suboptimal local tests. We choose a simple local decision strategy, following the rationale ruling the optimal detector that solves the unlucky broker problem: a decision in favor of H0 is always retained by the current node, while when the decision of the previous node is in favor of H1, a local log-likelihood-based test is implemented. The main result is that, asymptotically, if we set the false alarm probability of the first node (the one observing the full data set), the false alarm probability decreases along the chain and is non-zero at the last stage. Moreover, very surprisingly, the missed-detection probability decays exponentially fast with the square root of the number of nodes, and we provide its closed-form exponent by exploiting tools from random processes and information theory. [edited by the author]
X n.s.
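As a rough illustration of the chained decision rule sketched in this abstract (decisions in favor of H0 are kept, decisions in favor of H1 are re-tested with a log-likelihood ratio once refined priors arrive), here is a minimal Python sketch; the Gaussian observation model, thresholds and variable names are assumptions for illustration, not taken from the thesis.

```python
# Illustrative sketch (not from the thesis): a chain of nodes re-examines a binary
# decision when refined priors arrive, keeping H0 decisions and re-testing H1 ones
# with a log-likelihood ratio test on the (shrinking) subset of data still available.
import numpy as np

rng = np.random.default_rng(0)

MU0, MU1, SIGMA = 0.0, 1.0, 1.0   # assumed Gaussian observation models under H0 / H1

def log_likelihood_ratio(x):
    """log p(x | H1) - log p(x | H0) for i.i.d. Gaussian samples."""
    ll1 = -0.5 * ((x - MU1) / SIGMA) ** 2
    ll0 = -0.5 * ((x - MU0) / SIGMA) ** 2
    return np.sum(ll1 - ll0)

def refine_decision(prev_decision, data_subset, refined_priors):
    """One node in the chain: keep H0, re-test H1 with the refined priors."""
    if prev_decision == 0:          # decisions in favor of H0 are always retained
        return 0
    p0, p1 = refined_priors
    threshold = np.log(p0 / p1)     # Bayesian LLR threshold under the refined priors
    return int(log_likelihood_ratio(data_subset) > threshold)

# Example: full data set at the first node, then progressively smaller subsets.
true_hypothesis = 1
x = rng.normal(MU1 if true_hypothesis else MU0, SIGMA, size=100)

decision = int(log_likelihood_ratio(x) > np.log(0.5 / 0.5))   # initial (inaccurate) priors
for kept, priors in [(50, (0.7, 0.3)), (25, (0.7, 0.3)), (10, (0.7, 0.3))]:
    decision = refine_decision(decision, x[:kept], priors)
print("final decision:", decision)
```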
4

Sarigul, Erol. « Interactive Machine Learning for Refinement and Analysis of Segmented CT/MRI Images ». Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/25954.

Full text
Abstract:
This dissertation concerns the development of an interactive machine learning method for refinement and analysis of segmented computed tomography (CT) images. This method uses higher-level domain-dependent knowledge to improve initial image segmentation results. A knowledge-based refinement and analysis system requires the formulation of domain knowledge. A serious problem faced by knowledge-based system designers is the knowledge acquisition bottleneck. Knowledge acquisition is very challenging and an active research topic in the field of machine learning and artificial intelligence. Commonly, a knowledge engineer needs a domain expert to formulate acquired knowledge for use in an expert system. That process is rather tedious and error-prone. The domain expert's verbal description can be inaccurate or incomplete, and the knowledge engineer may not correctly interpret the expert's intent. In many cases, domain experts prefer to perform actions instead of explaining their expertise. These problems motivate us to find another solution that makes the knowledge acquisition process less challenging. Instead of trying to acquire expertise from a domain expert verbally, we can ask him/her to show expertise through actions that can be observed by the system. If the system can learn from those actions, this approach is called learning by demonstration. We have developed a system that can learn region refinement rules automatically. The system observes the steps taken as a human user interactively edits a processed image, and then infers rules from those actions. During the system's learn mode, the user views labeled images and makes refinements through the use of a keyboard and mouse. As the user manipulates the images, the system stores information related to those manual operations, and develops internal rules that can be used later for automatic postprocessing of other images. After one or more training sessions, the user places the system into its run mode. The system then accepts new images, and uses its rule set to apply postprocessing operations automatically in a manner that is modeled after those learned from the human user. At any time, the user can return to learn mode to introduce new training information, and this will be used by the system to update its internal rule set. The system does not simply memorize a particular sequence of postprocessing steps during a training session, but instead generalizes from the image data and from the actions of the human user so that new CT images can be refined appropriately. Experimental results have shown that IntelliPost improves the segmentation accuracy of the overall system by applying postprocessing rules. In tests on two different CT datasets of hardwood logs, the use of IntelliPost resulted in improvements of 1.92% and 9.45%, respectively. For two different medical datasets, the use of IntelliPost resulted in improvements of 4.22% and 0.33%, respectively.
Ph. D.
5

Arrufat, Ondina. « The refinement and validation of the critical decision making and problem solving scale moral dilema (CDP-MD) ». FIU Digital Commons, 1995. http://digitalcommons.fiu.edu/etd/1426.

Full text
Abstract:
This thesis extended previous research on critical decision making and problem solving by refining and validating a measure designed to assess the use of critical thinking and critical discussion in sociomoral dilemmas. The purpose of this thesis was twofold: 1) to refine the administration of the Critical Thinking Subscale of the CDP so as to elicit more adequate responses and to refine the coding and scoring procedures for the total measure, and 2) to collect preliminary data on the initial reliabilities of the measure. Subjects consisted of 40 undergraduate students at Florida International University. Results indicate that the use of longer probes on the Critical Thinking Subscale was more effective in eliciting the adequate responses necessary for coding and evaluating the subjects' performance. Analyses of the psychometric properties of the measure consisted of test-retest reliability and inter-rater reliability.
6

Wolf, Lisa Adams. « Testing and refinement of an integrated, ethically-driven environmental model of clinical decision-making in emergency settings ». Thesis, Boston College, 2011. http://hdl.handle.net/2345/2224.

Full text
Abstract:
Thesis advisor: Dorothy A. Jones
Thesis advisor: Pamela J. Grace
The purpose of the study was to explore the relationships among multiple variables within a model of critical thinking and moral reasoning, and to support and refine the elements that significantly correlate with accuracy in clinical decision-making. Background: Research to date has identified multiple factors that are integral to clinical decision-making. The interplay among the suggested elements within the decision-making process particular to the nurse, the patient, and the environment remains unknown. Determining the clinical usefulness and predictive capacity of an integrated, ethically driven environmental model of clinical decision-making (IEDEM-CD) in emergency settings in facilitating accuracy in problem identification is critical to initial interventions and to safe, cost-effective, quality patient care outcomes. Extending the literature on accuracy and clinical decision-making can inform utilization, determination of staffing ratios, and the development of evidence-driven care models. Methodology: The study used a quantitative descriptive correlational design to examine the relationships between multiple variables within the IEDEM-CD model. A purposive sample of emergency nurses was recruited to participate in the study, resulting in a sample size of 200, calculated to yield a power of 0.80, significance of .05, and a moderate effect size. The dependent variable, accuracy in clinical decision-making, was measured by scores on clinical vignettes. The independent variables of moral reasoning, perceived environment of care, age, gender, certification in emergency nursing, educational level, and years of experience in emergency nursing were measured by the Defining Issues Test, version 2, the Revised Professional Practice Environment scale, and a demographic survey. These instruments were identified to test and refine the elements within the IEDEM-CD model. Data collection occurred via internet survey over a one-month period. Rest's Defining Issues Test, version 2 (DIT-2), the Revised Professional Practice Environment tool (RPPE), the clinical vignettes, and a demographic survey were made available as an internet survey package using Qualtrics. Data from each participant were scored and entered into a PASW database. The analysis plan included bivariate correlation analysis using Pearson's product-moment correlation coefficients followed by chi-square and multiple linear regression analysis. Findings: The elements identified in the IEDEM-CD model supported moral reasoning and environment of care as factors significantly affecting accuracy in decision-making. Findings showed that in complex clinical situations, higher levels of moral reasoning significantly affected accuracy in problem identification. Attributes of the environment of care, including teamwork, communication about patients, and control over practice, also significantly affected nurses' critical cue recognition and selection of appropriate interventions. Study results supported the conceptualization of the IEDEM-CD model and its usefulness as a framework for predicting clinical decision-making accuracy for emergency nurses in practice, with further implications for education, research and policy.
Thesis (PhD) — Boston College, 2011
Submitted to: Boston College. Connell School of Nursing
Discipline: Nursing
7

Raghavan, Venkatesh. « Supporting Multi-Criteria Decision Support Queries over Disparate Data Sources ». Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/120.

Full text
Abstract:
In the era of the "big data revolution," marked by an exponential growth of information, extracting value from data enables analysts and businesses to address challenging problems such as drug discovery, fraud detection, and earthquake prediction. Multi-Criteria Decision Support (MCDS) queries are at the core of big-data analytics and give rise to several classes of queries such as OLAP, top-k, Pareto-optimal, and nearest-neighbor queries. The intuitive nature of specifying multi-dimensional preferences has made Pareto-optimal queries, also known as skyline queries, popular. Existing skyline algorithms, however, do not address several crucial issues such as performing skyline evaluation over disparate sources, progressively generating skyline results, or robustly handling workloads with multiple skyline-over-join queries. In this dissertation we thoroughly investigate topics in the area of skyline-aware query evaluation. We first propose a novel execution framework called SKIN that treats skylines over joins as first-class citizens during query processing. This is in contrast to existing techniques that treat skylines as an "add-on," loosely integrated with query processing by being placed on top of the query plan. SKIN is effective in exploiting the skyline characteristics of the tuples within individual data sources as well as across disparate sources. This enables SKIN to significantly reduce two primary costs, namely the cost of generating the join results and the cost of the skyline comparisons needed to compute the final results. Second, we address the crucial business need to report results early, as soon as they are generated, so that users can formulate competitive decisions in near real-time. On top of SKIN, we built a progressive query evaluation framework, ProgXe, to make the execution of queries involving skylines over joins non-blocking, i.e., to progressively generate results early and often. By exploiting SKIN's principle of processing queries at multiple levels of abstraction, ProgXe is able to: (1) extract the output dependencies in the output space by analyzing both the input and output spaces, and (2) exploit this knowledge of abstract-level relationships to guarantee the correctness of early output. Third, real-world applications handle query workloads with diverse Quality of Service (QoS) requirements, also referred to as contracts. Time-sensitive queries, such as fraud detection, require results to be output progressively with minimal delay, while ad-hoc and reporting queries can tolerate delay. In this dissertation, building on the principles of ProgXe, we propose the Contract-Aware Query Execution (CAQE) framework to support the open problem of contract-driven multi-query processing. CAQE employs an adaptive execution strategy to continuously monitor the run-time satisfaction of queries and aggressively take corrective steps whenever the contracts are not being met. Lastly, to elucidate the portability of the core principle of this dissertation, namely reasoning and query processing at different levels of data abstraction, we apply it to an orthogonal research question: auto-generating recommendation queries that help users explore a complex database system. User queries are often too strict or too broad, requiring a frustrating trial-and-error refinement process to meet the desired result cardinality while preserving the original query semantics.
Based on the principles of SKIN, we propose CAPRI to automatically generate refined queries that: (1) attain the desired cardinality and (2) minimize changes to the original query intentions. In our comprehensive experimental study of each part of this dissertation, we demonstrate the superiority of the proposed strategies over state-of-the-art techniques in both efficiency and resource consumption.
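For readers unfamiliar with Pareto-optimal (skyline) queries, the following is a minimal, generic sketch of a naive block-nested-loop skyline; it is not the SKIN framework, and the hotel example and attribute semantics are invented for illustration.

```python
# Illustrative sketch (not the SKIN framework): a naive skyline computation.
# A tuple is in the skyline if no other tuple dominates it, i.e. is at least as
# good on every attribute and strictly better on at least one (lower is better here).
from typing import Sequence, List, Tuple

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if `a` dominates `b` (minimization on every attribute)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Block-nested-loop skyline: keep points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Example: hotels described by (price, distance_to_beach); cheaper and closer is better.
hotels = [(50, 8.0), (80, 2.0), (90, 1.5), (60, 7.0), (120, 0.5), (70, 9.0)]
print(skyline(hotels))   # (70, 9.0) is dominated by (50, 8.0) and drops out
```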
8

Darracott, Rosalyn M. « The development and refinement of the practice domain framework as a conceptual tool for understanding and guiding social care practice ». Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/86048/15/86048.pdf.

Full text
Abstract:
This study identified the common factors that influence social care practice across disciplines (such as social work and psychology), practice fields, and geographical contexts and further developed the Practice Domain Framework as an empirically-based conceptual framework to assist practitioners in understanding practice complexities. The framework has application in critical reflection, professional supervision, interdisciplinary understanding, teamwork, management, teaching and research. A mixed-methods design was used to identify the components and structure of the refined framework. Eighteen influential factors were identified and organised into eight domains: the Societal, Structural, Organisational, Practice Field, Professional Practice, Accountable Practice, Community of Place, and Personal.
9

Molinari, David U. « A psychometric examination and refinement of the Canadian Forces Attrition Information Questionnaire, CFAIQ, comparing the reasons cited by anglophones and francophones in the Leave decision process ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq20843.pdf.

Full text
10

El, Khalfi Zeineb. « Lexicographic refinements in possibilistic sequential decision-making models ». Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30269/document.

Full text
Abstract:
This work contributes to possibilistic decision theory and more specifically to sequential decision-making under possibilistic uncertainty, at both the theoretical and practical levels. Even though appealing for its ability to handle qualitative decision problems, possibilistic decision theory suffers from an important drawback: qualitative possibilistic utility criteria compare acts through min and max operators, which leads to a drowning effect. To overcome this lack of decision power, several refinements have been proposed in the literature. Lexicographic refinements are particularly appealing since they allow one to benefit from the expected utility background while remaining "qualitative". However, these refinements are defined for non-sequential decision problems only. In this thesis, we present results on the extension of lexicographic preference relations to sequential decision problems, in particular to possibilistic decision trees and Markov Decision Processes. This leads to new planning algorithms that are more "decisive" than their original possibilistic counterparts. We first present optimistic and pessimistic lexicographic preference relations between policies, with and without intermediate utilities, that refine the optimistic and pessimistic qualitative utilities respectively. We prove that these new criteria satisfy the principle of Pareto efficiency as well as the property of strict monotonicity. The latter guarantees that a dynamic programming algorithm can be used to compute lexicographic optimal policies. Considering the problem of policy optimization in possibilistic decision trees and finite-horizon Markov decision processes, we provide adaptations of the dynamic programming algorithm that compute a lexicographic optimal policy in polynomial time. These algorithms are based on the lexicographic comparison of the matrices of trajectories associated with the sub-policies. This algorithmic work is completed with an experimental study that shows the feasibility and the interest of the proposed approach. We then prove that the lexicographic criteria still benefit from an expected utility grounding and can be represented by infinitesimal expected utilities. The last part of our work is devoted to policy optimization in (possibly infinite) stationary Markov Decision Processes. We propose a value iteration algorithm for the computation of lexicographic optimal policies and extend these results to the infinite-horizon case. Since the size of the matrices grows exponentially (which is especially problematic in the infinite-horizon case), we propose an approximation algorithm that keeps only the most interesting part of each matrix of trajectories, namely the first rows and columns. Finally, we report experimental results that show the effectiveness of the algorithms based on the truncation of the matrices.
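A minimal sketch of the kind of lexicographic refinement discussed in the abstract, for the simplest non-sequential case: the optimistic possibilistic utility (max-min) criterion and a leximax tie-breaking key. The qualitative scale, the example acts and the function names are assumptions for illustration and do not reproduce the thesis's algorithms for decision trees or MDPs.

```python
# Illustrative sketch (generic, not the thesis's algorithms): the optimistic
# possibilistic utility and a leximax refinement that breaks the ties ("drowning
# effect") the plain max-min criterion leaves unresolved. An act is a list of
# (plausibility, utility) pairs, both valued on a common qualitative scale.

def optimistic_utility(act):
    """Classical optimistic possibilistic utility: max over states of min(pi, u)."""
    return max(min(pi, u) for pi, u in act)

def leximax_key(act):
    """Sort the min(pi, u) levels in decreasing order; comparing these vectors
    lexicographically refines the optimistic criterion."""
    return sorted((min(pi, u) for pi, u in act), reverse=True)

# Two acts with the same optimistic utility (0.8) that the refinement separates.
act_a = [(1.0, 0.8), (0.2, 0.1)]
act_b = [(1.0, 0.8), (0.9, 0.7)]
print(optimistic_utility(act_a), optimistic_utility(act_b))  # 0.8 0.8 -> drowned tie
print(leximax_key(act_b) > leximax_key(act_a))               # True: act_b preferred
```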
11

Balbontin, Camila. « Integrating Decision Heuristics And Behavioural Refinements Into Travel Choice Models ». Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/17892.

Full text
Abstract:
Discrete choice modelling has become the preferred empirical framework for studying individuals' preferences and willingness to pay. The outcome of a decision is important, but so is the process that individuals adopt to reach it; both should be considered when analysing individual behaviour, as together they represent the endogeneity of choice. Traditional choice studies assume, in the main, a linear-in-the-parameters, additive-in-the-attributes (LPAA) approach, in which individuals are rational, take into account all the attributes and alternatives presented to them when reaching a decision, and value the attribute levels exactly as they were presented in the popular choice experiment paradigm. This has not always been shown to be a behaviourally valid representation of choice response, and there is a growing literature on the role of a number of alternative decision process strategies that individuals use when facing a decision, often referred to as heuristics, or simply as process rules. The majority of choice studies also assume that respondents have a risk attitude that is risk neutral (i.e., a risky alternative is indifferent to a sure alternative of equal expected value) and that they perceive the levels of attributes in choice experiments in a way that suggests the absence of perceptual conditioning. Considering each in turn, there are people who are risk averse, risk taking or risk neutral, and this heterogeneity in risk attitude does influence individuals' decisions when they face different choice scenarios. Heterogeneity is also present for perceptual conditioning in cases where there is variability in the outcomes of an attribute, which allows for differences between the stated probability of occurrence (in a choice experiment) and the perceived probability used when evaluating the prospect. Finally, the (accumulated) experience that individuals have with each alternative might also influence their decisions. The objective of this research is to integrate multiple decision process strategies, Value Learning (VL) and Relative Advantage Maximisation (RAM) in particular, alongside the traditional LPAA 'process rule', with behavioural refinements (i.e., risk attitudes, perceptual conditioning and overt experience), to take into account process endogeneity in choice responses. A novel approach is used to include process heterogeneity, referred to as conditioning of random process heterogeneity, in which the mean and standard deviation of the parameters normally defined under an LPAA heuristic are conditioned by process strategies. This approach takes into account the relationship between process heterogeneity and preference heterogeneity, which is of particular interest in studies that integrate random parameters and process strategies. The model performance results and willingness-to-pay estimates are compared with those obtained using a probabilistic decision process method, increasingly used in the choice literature to accommodate process heterogeneity.
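As background for the LPAA assumption mentioned above, here is a minimal multinomial logit sketch in which utility is linear in the parameters and additive in the attributes; the route attributes, taste parameters and value-of-time calculation are invented for illustration and are not the thesis's estimated models.

```python
# Illustrative sketch (generic, not the thesis's models): the LPAA assumption in a
# multinomial logit model, where each alternative's utility is a linear, additive
# function of its attributes and choice probabilities follow a softmax.
import numpy as np

def choice_probabilities(attributes, betas):
    """attributes: (n_alternatives, n_attributes); betas: (n_attributes,)."""
    utilities = attributes @ betas                # LPAA: V_j = sum_k beta_k * x_jk
    expv = np.exp(utilities - utilities.max())    # subtract max for numerical stability
    return expv / expv.sum()

# Example: three route alternatives described by (travel_time_min, cost_dollars).
routes = np.array([[30.0, 5.0], [45.0, 2.0], [25.0, 8.0]])
betas = np.array([-0.08, -0.30])                  # assumed negative tastes for time and cost
probs = choice_probabilities(routes, betas)
print(probs, "implied value of time ($/min):", betas[0] / betas[1])
```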
12

Burlacu, Robert [Verfasser], Alexander [Akademischer Betreuer] Martin, Alexander [Gutachter] Martin et Rüdiger [Gutachter] Schultz. « Adaptive Mixed-Integer Refinements for Solving Nonlinear Problems with Discrete Decisions / Robert Burlacu ; Gutachter : Alexander Martin, Rüdiger Schultz ; Betreuer : Alexander Martin ». Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2020. http://d-nb.info/1205157530/34.

Full text
13

Khan, Omar Zia. « Policy Explanation and Model Refinement in Decision-Theoretic Planning ». Thesis, 2013. http://hdl.handle.net/10012/7808.

Full text
Abstract:
Decision-theoretic systems, such as Markov Decision Processes (MDPs), are used for sequential decision-making under uncertainty. MDPs provide a generic framework that can be applied in various domains to compute optimal policies. This thesis presents techniques that offer explanations of optimal policies for MDPs and then refine decision theoretic models (Bayesian networks and MDPs) based on feedback from experts. Explaining policies for sequential decision-making problems is difficult due to the presence of stochastic effects, multiple possibly competing objectives and long-range effects of actions. However, explanations are needed to assist experts in validating that the policy is correct and to help users in developing trust in the choices recommended by the policy. A set of domain-independent templates to justify a policy recommendation is presented along with a process to identify the minimum possible number of templates that need to be populated to completely justify the policy. The rejection of an explanation by a domain expert indicates a deficiency in the model which led to the generation of the rejected policy. Techniques to refine the model parameters such that the optimal policy calculated using the refined parameters would conform with the expert feedback are presented in this thesis. The expert feedback is translated into constraints on the model parameters that are used during refinement. These constraints are non-convex for both Bayesian networks and MDPs. For Bayesian networks, the refinement approach is based on Gibbs sampling and stochastic hill climbing, and it learns a model that obeys expert constraints. For MDPs, the parameter space is partitioned such that alternating linear optimization can be applied to learn model parameters that lead to a policy in accordance with expert feedback. In practice, the state space of MDPs can often be very large, which can be an issue for real-world problems. Factored MDPs are often used to deal with this issue. In Factored MDPs, state variables represent the state space and dynamic Bayesian networks model the transition functions. This helps to avoid the exponential growth in the state space associated with large and complex problems. The approaches for explanation and refinement presented in this thesis are also extended for the factored case to demonstrate their use in real-world applications. The domains of course advising to undergraduate students, assisted hand-washing for people with dementia and diagnostics for manufacturing are used to present empirical evaluations.
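For context, a minimal value iteration sketch for a tabular MDP, the kind of solver whose optimal policies the thesis then explains and refines from expert feedback; the toy transition and reward matrices are assumptions for illustration, not from the thesis.

```python
# Illustrative sketch (generic, not the thesis's refinement techniques): value
# iteration for a small MDP, producing the optimal values and policy that an
# explanation or model-refinement step would then work with.
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P: (n_actions, n_states, n_states) transition probabilities,
    R: (n_states, n_actions) expected immediate rewards."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum("asn,n->sa", P, V)   # Q[s, a]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)             # optimal values and policy
        V = V_new

# Toy 2-state, 2-action example with assumed dynamics and rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # action 0
              [[0.5, 0.5], [0.0, 1.0]]])    # action 1
R = np.array([[1.0, 0.0],                   # rewards R[s, a]
              [0.0, 2.0]])
values, policy = value_iteration(P, R)
print("V* =", values, "policy =", policy)
```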
14

謝承凌. « A Study on the Decision Refinement after Losing Data based on Evidence Theory ». Thesis, 2013. http://ndltd.ncl.edu.tw/handle/87373693399516697098.

Full text
15

Lin, Wan-Ting, et 林婉婷. « The Refinement Mechanism of Preliminary Dispatch Fire-alarm Decision Support System for Fire Department of New Taipei City ». Thesis, 2012. http://ndltd.ncl.edu.tw/handle/75954298327326757317.

Full text
Abstract:
Master's thesis
National Taipei University
Graduate Institute of Information Management
100
Taiwan is small and densely populated, its development has been rapid, and its housing is high-density. New Taipei City in particular is a large city with a vast territory, a large population and diverse geography, and it accounts for a large share of Taiwan's fire caseload. When a fire event occurs, the dispatcher at the Fire Department's 119 dispatch center must make the right decision in a short time about which resources are appropriate for the event. A good decision leads to safe, efficient and appropriate rescue; a poor one may allow the fire to grow, requiring more cost, time and resources to bring under control and possibly causing greater property damage and more injuries and deaths. New Taipei City therefore launched a project in 2011 to put a fire-alarm dispatch assistance system into practice. The objective of the project is to use information technology to define appropriate fire-alarm dispatch modules that meet the new needs of fire-alarm dispatch and relief. This research concerns the refinement mechanism of the preliminary-dispatch fire-alarm decision support system for the Fire Department of New Taipei City. We designed a model based on the factors that influence rescue and used historical cases to calculate the total number of engine groups to dispatch, allowing the system to keep up with current trends. We used a decision tree model together with refinement mechanisms to design the system's algorithm and implemented the decision support module for preliminary fire dispatch. The model can analyse dispatch records, find the rules and trends in the dispatched resources, and suggest quantities of resources to dispatch, so that when the system is activated in the future it can assist 119 dispatchers in dispatching fire engine groups from the various fire units and in making more precise, faster and more effective scheduling decisions.
16

Lien, Lee-Sheng, et 連李勝. « Fast Motion Estimation Based on Diamond Refinement Search and Mode Decision Algorithm for High Efficiency Video Coding Standard ». Thesis, 2018. http://ndltd.ncl.edu.tw/handle/uer9up.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
106
With the advancement of technology, the demand for high-definition video keeps increasing in professional, online and everyday consumer video, and HD quality is now fully mainstream; as 4K technology matures, an era of even higher quality is arriving. HEVC supports a wide range of video formats, from CIF (320 × 288) to HD (1920 × 1080), 4K (3840 × 2160) and up to 8K UHD (7680 × 4320). The coding unit grows from the (4 × 4) and (16 × 16) sizes of H.264 to a range from (64 × 64) down to (8 × 8) in HEVC, with blocks of different sizes configured according to different requirements. HEVC introduces three important units, a larger Coding Unit (CU), the Prediction Unit (PU) and the Transform Unit (TU), and compared with H.264 it can save about 50% of the bit rate. In video compression, motion estimation often dominates the computation because it must find the matching block with the smallest RD-cost. This thesis therefore presents a new fast motion estimation algorithm for HEVC that reduces computational complexity and improves compression efficiency. The proposed fast-search motion estimation algorithm modifies three parts: the diamond refinement search, the mode decision, and the minimum inter-prediction block size. In the diamond refinement search, the motion vector IMV predicted by AMVP is used as a candidate starting point; the zero vector at the origin is compared with the IMV, and the one with the smaller RD-cost is selected as the starting point of the first search. Four rounds of diamond search are then used to obtain the position with the minimum RD-cost. When the distance between the origin and the best point is neither 0 nor 1, the algorithm checks whether the distance is greater than 4: if so, the diamond refinement search is combined with a concentric diamond search to refine the result; if the distance is less than or equal to 4, a small diamond search is used for a quick search. In the mode decision, statistics show that the 2N × 2N mode and the N × N mode have the highest usage rates, so the rarely used modes can be skipped, which greatly accelerates the overall encoding time. In addition, the minimum inter-prediction block size is reduced from the original 8 × 8 to 4 × 4, so that blocks can be split into smaller partitions to recover image quality after restricting the modes to 2N × 2N and N × N. Experimental results show that the proposed fast motion estimation algorithm based on diamond refinement search and mode decision, together with the modified minimum inter-prediction block size, saves 46.15% of the overall coding time with a 1.45% increase in bit rate and a 0.03 drop in PSNR.
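A minimal sketch of a small-diamond motion search of the general kind described above; it uses SAD instead of a full RD-cost, and the frame content, block location and starting vector are assumptions for illustration rather than the thesis's algorithm.

```python
# Illustrative sketch (not the thesis's algorithm): a basic small-diamond search for
# block motion estimation, using SAD as the matching cost instead of a full RD-cost.
import numpy as np

SMALL_DIAMOND = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def sad(ref, block, x, y, bs):
    """Sum of absolute differences between `block` and the bs x bs patch of `ref` at (x, y)."""
    h, w = ref.shape
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf                      # candidate patch falls outside the frame
    return np.abs(ref[y:y + bs, x:x + bs].astype(float) - block.astype(float)).sum()

def diamond_search(ref, cur, bx, by, bs=8, start=(0, 0), max_rounds=4):
    """Refine a starting motion vector (e.g. an AMVP predictor) with a small diamond."""
    block = cur[by:by + bs, bx:bx + bs]
    mvx, mvy = start
    for _ in range(max_rounds):
        costs = {(dx, dy): sad(ref, block, bx + mvx + dx, by + mvy + dy, bs)
                 for dx, dy in SMALL_DIAMOND}
        best = min(costs, key=costs.get)
        if best == (0, 0):                 # center is the best candidate: converged
            break
        mvx, mvy = mvx + best[0], mvy + best[1]
    return mvx, mvy

# Toy example: a smooth synthetic frame whose best match lies at offset (+2, +1) in ref.
xv, yv = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float))
ref = 3.0 * xv + 2.0 * yv
cur = np.roll(np.roll(ref, -1, axis=0), -2, axis=1)
print(diamond_search(ref, cur, bx=24, by=24))  # converges to (2, 1) on this toy frame
```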
17

Scime, Anthony. « Taxonomic information retrieval (TAXIR) from the World Wide Web knowledge-based query and results refinement with user profiles and decision models / ». 1997. http://catalog.hathitrust.org/api/volumes/oclc/39258102.html.

Full text
18

Rathod, Harsh. « Surface and subsurface damage quantification using multi-device robotics-based sensor system and other non-destructive testing techniques ». Thesis, 2019. http://hdl.handle.net/1828/11168.

Full text
Abstract:
North American civil infrastructure is aging. According to the recent (2016) Canadian Infrastructure Report Card, 33% of Canadian municipal infrastructure is in fair or below-fair condition. The current deficit for replacing fair and poor municipal bridges (covering 26% of bridges) is 13 billion dollars. According to the latest (2017) report by the American Society of Civil Engineers, American infrastructure as a whole has been given a D+ condition rating. Some structural elements of this infrastructure pose a significant risk, and there is an urgent need for frequent and effective inspection to ensure the safety of people. Visual inspection is a commonly used technique to detect and identify surface defects in bridge structures, as it has been considered the most feasible method for decades. However, this currently used methodology is inadequate and unreliable, as it is highly dependent on subjective human judgment. This labor-intensive approach to inspection requires huge investment in temporary scaffolding or permanent platforms, ladders, snooper trucks, and sometimes helicopters. To address these issues associated with visual inspection, the completed research suggests three innovative methods: 1) the combined use of fuzzy logic and an image processing algorithm to quantify surface defects, 2) an Unmanned Aerial Vehicle (UAV)-assisted, American Association of State Highway and Transportation Officials (AASHTO) guideline-based damage assessment technique, and 3) a patent-pending multi-device robotics-based sensor data acquisition system for mapping and assessing defects in civil structures. To detect and quantify subsurface defects such as voids and delamination using a UAV system, another patent-pending UAV-based acoustic method is developed. It is a novel inspection apparatus that comprises an acoustic signal generator coupled to a UAV; the acoustic signal generator includes a hammer to produce an acoustic signal in a structure using the UAV. An outcome of this research is the development of a model to refine data from multiple commercially available NDT techniques in order to detect and quantify subsurface defects. To achieve this, a total of nine 1800 mm × 460 mm reinforced concrete slabs with thicknesses of 100 mm, 150 mm and 200 mm are prepared. These slabs are designed to have artificially simulated defects such as voids, debonding, honeycombing, and corrosion. To determine the performance of five NDT techniques, more than 300 data points are considered for each test. The experimental research shows that utilizing multiple techniques on a single structure to evaluate the defects significantly lowers error and increases accuracy compared with a standalone test. To visualize the NDT data, two-dimensional NDT data maps are developed. This work presents an innovative method to interpret NDT data correctly, as it compares the individual data points of slabs with no defects to slabs with simulated damage. For the refinement of NDT data, a significance factor and a logical sequential determination factor are proposed.
Graduate
2020-09-06