Academic literature on the topic 'Decision refinement'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Decision refinement.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Decision refinement"

1

Hillner, Bruce E. "Decision-theoretic Refinement Planning." Medical Decision Making 16, no. 4 (October 1996): 419–20. http://dx.doi.org/10.1177/0272989x9601600414.

2

Sakurai, Shigeaki. "Refinement of fuzzy decision tree." IEEJ Transactions on Electronics, Information and Systems 117, no. 12 (1997): 1833–39. http://dx.doi.org/10.1541/ieejeiss1987.117.12_1833.

3

Haddawy, Peter, Anhai Doan, and Charles E. Kahn. "Decision-theoretic Refinement Planning in Medical Decision Making." Medical Decision Making 16, no. 4 (October 1996): 315–25. http://dx.doi.org/10.1177/0272989x9601600402.

4

Kline, Theresa J. B. "Refinement and Evaluation of the Decision-Making Questionnaire." Psychological Reports 78, no. 1 (February 1996): 151–62. http://dx.doi.org/10.2466/pr0.1996.78.1.151.

Abstract:
The psychometric properties of the Decision-making Questionnaire, designed to assess decision-making in an organizational context, were investigated by administering the questionnaire to 54 undergraduate students. The dimensions measured are Effectiveness, Confidence, and Information used in making both tactical and strategic organizational decisions. The assessment of the Effectiveness scores consisted of examining item-to-total correlations, principal components analyses, and internal consistencies. Also reported are the relationships among all three dimensions measured by the scale, as well as the relationships of all three dimensions with measures of perceived opportunities and threats.
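The scale-refinement checks this abstract names, item-to-total correlations and internal consistency, are straightforward to compute. Below is a minimal Python sketch assuming a table of Likert-type item responses; apart from the sample size of 54, the data and column names are invented for illustration, not taken from the study.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of a set of scale items."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlate each item with the sum of the remaining items."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items},
        name="item_total_r",
    )

# Hypothetical data: 54 respondents x 10 five-point items.
rng = np.random.default_rng(0)
effectiveness = pd.DataFrame(
    rng.integers(1, 6, size=(54, 10)),
    columns=[f"item_{i}" for i in range(1, 11)],
)
print(round(cronbach_alpha(effectiveness), 3))
print(corrected_item_total(effectiveness).round(3))
```

Items with weak corrected item-to-total correlations would be candidates for revision before re-estimating alpha.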
5

Damnjanović, Kaja, Sandra Ilić, Irena Pavlović, and Vera Novković. "Refinement of outcome bias measurement in the parental decision-making context." Europe’s Journal of Psychology 15, no. 1 (February 28, 2019): 41–58. http://dx.doi.org/10.5964/ejop.v15i1.1698.

Abstract:
The aim of this study was twofold: first, to test the impact of involvement on parental outcome bias, and second, to refine the measurement of outcome bias, normally reported as the difference between evaluations of a single decision with different outcomes assigned to it. We introduced the evaluation of a decision without an outcome, to induce a theoretically normative evaluation, unbiased by outcome, from which the evaluation shift could be calculated in either direction. To test this refinement in the parental decision-making context, we produced childcare dilemmas with varying levels of complexity, since rising complexity induces stronger bias. Complexity was determined by the particular combination of two factors: parental involvement in a decision - the amount of motivation, interest and drive evoked by it - and whether the decision was health-related or not. We presented parents with the decisions for evaluation, each followed by a positive outcome, a negative outcome, or no outcome. The results confirm the interaction between involvement and domain on decision evaluation. Highly involving decisions yielded weaker outcome bias than low-involvement decisions in both the health and non-health domains. The results also confirm the validity of the proposed way of measuring outcome bias (OB), revealing that in some situations positive outcomes skew evaluations more than negative outcomes. Also, a highly involving dilemma followed by a negative outcome did not produce a significantly different evaluation compared to the evaluation of a decision without an outcome. Thus, adding a neutral position rendered OB measurement more precise and our involvement-related insights more nuanced.
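As a toy illustration of the refinement described above, outcome bias can be read as the shift of each outcome condition's mean evaluation from the no-outcome baseline, in either direction; the numbers below are invented.

```python
# Hypothetical mean evaluations of one childcare decision (e.g., 7-point scale).
no_outcome = 4.2   # decision presented with no outcome: the neutral baseline
positive = 5.1     # same decision followed by a positive outcome
negative = 3.9     # same decision followed by a negative outcome

ob_positive = positive - no_outcome  # +0.9: positive outcome inflates judgment
ob_negative = negative - no_outcome  # -0.3: negative outcome deflates judgment
print(f"OB+ = {ob_positive:+.1f}, OB- = {ob_negative:+.1f}")
```

This toy asymmetry mirrors the reported finding that positive outcomes can skew evaluations more than negative ones.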
6

Stone, Thomas, Seung-Kyum Choi, and Hemanth Amarchinta. "Structural model refinement under uncertainty using decision-maker preferences." Journal of Engineering Design 24, no. 9 (September 2013): 640–61. http://dx.doi.org/10.1080/09544828.2013.824560.

7

Yerramareddy, Sudhakar, and Stephen C. Y. Lu. "Hierarchical and interactive decision refinement methodology for engineering design." Research in Engineering Design 4, no. 4 (December 1992): 227–39. http://dx.doi.org/10.1007/bf02032466.

8

Shekaramiz, Mohammad, Todd K. Moon, and Jacob H. Gunther. "Exploration vs. Data Refinement via Multiple Mobile Sensors." Entropy 21, no. 6 (June 5, 2019): 568. http://dx.doi.org/10.3390/e21060568.

Abstract:
We examine the deployment of multiple mobile sensors to explore an unknown region and map subregions containing a concentration of a physical quantity such as heat or electron density. The exploration trades off between two desiderata: continuing to take data in a region known to contain the quantity of interest, with the intent of refining the measurements, versus taking data in unobserved areas to attempt to discover new regions where the quantity may exist. Making reasonable and practical decisions that simultaneously fulfill both goals of exploration and data refinement is hard, since the goals pull in opposite directions. For this purpose, we propose a general framework that makes value-laden decisions about the trajectories of mobile sensors. The framework employs a Gaussian process regression model to predict the distribution of the physical quantity of interest at unseen locations. Decision-making on the sensors' trajectories is then performed using an epistemic utility controller. An example is provided to illustrate the merit and applicability of the proposed framework.
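The exploration-versus-refinement trade-off described here can be sketched with a Gaussian process surrogate and an upper-confidence-bound style score. This is an illustrative stand-in for the paper's epistemic utility controller, with an invented one-dimensional field and parameters.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
field = lambda x: np.exp(-((x - 0.7) ** 2) / 0.01)  # unknown concentration
X = rng.uniform(0, 1, 5).reshape(-1, 1)             # initial measurement sites
y = field(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-4)
candidates = np.linspace(0, 1, 200).reshape(-1, 1)
for step in range(10):
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    beta = 1.0                       # weight on exploration
    utility = mean + beta * std      # high mean = refine, high std = explore
    x_next = candidates[np.argmax(utility)]
    X = np.vstack([X, [x_next]])
    y = np.append(y, field(x_next))
print("sampled locations:", X.ravel().round(2))
```

Raising beta pushes the sensor toward unobserved areas; lowering it concentrates sampling where the quantity is already known to be high.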
9

Kaur, Iqbaldeep, and Rajesh Kumar Bawa. "Fuzzy based Schematic Component Selection Decision Search with OPAM-Ocaml Engine." Recent Patents on Computer Science 12, no. 3 (May 8, 2019): 224–32. http://dx.doi.org/10.2174/2213275912666181210104742.

Abstract:
Background: With an exponential increase in software available online and offline with each passing day, the task of digging out precise and relevant software components has become the need of the hour. There is no dearth of techniques for the retrieval of software components from the available online and offline repositories in the conceptual as well as the empirical literature; however, each of these techniques has its own set of limitations and suitability. Objective: The proposed technique makes concrete decisions using a schematic-based search that gives better results and higher precision and recall values. Methods: In this paper, a component decision and retrieval engine called SR-SCRS (Schematic and Refinement based Software Component Retrieval System) is presented using OPAM. OPAM is a GitHub repository containing software components (packages), designed by OcamlPro. This search engine employs two retrieval techniques for a robust decision, vis-à-vis a schematic-based search with fuzzy logic and a refinement-based search. The schematic-based search matches attribute values against the thresholds of those values as given by the user; thereafter the results are optimized to achieve the level of relevance using fuzzy logic. The refinement-based search works on one particular attribute value. The experiments have been conducted and validated on the OPAM dataset. Results: The average precision of the schematic-based search and the refinement-based search is 60% and 27.86% respectively, which shows robust results. Conclusion: Hence, the performance and efficiency of the proposed work have been evaluated and compared with the other retrieval technique.
10

Chadha, Rohit, and Mahesh Viswanathan. "A counterexample-guided abstraction-refinement framework for markov decision processes." ACM Transactions on Computational Logic 12, no. 1 (October 2010): 1–49. http://dx.doi.org/10.1145/1838552.1838553.


Dissertations / Theses on the topic "Decision refinement"

1

Aphale, Mukta S. "Intelligent agent support for policy authoring and refinement." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=225826.

Abstract:
A policy (or norm) can be defined as a guideline stating what is allowed, what is forbidden and what is obligated for an entity in a certain situation, so that an acceptable outcome is achieved. Policies occur in many types of scenarios, whether loose social networks of individuals or highly structured institutions. It is important, however, for policies to be consistent and to support the goals of the organisations they govern. This requires a thorough understanding of the implications of introducing specific policies and of how they interact. It is difficult, even for experts, to write consistent, unambiguous and accurate policies, and conflicts are practically unavoidable. At the same time, conflicts vary in significance: some are more likely to occur, some may lead to high costs for goal achievement, and some may lead to severe obstacles to the achievement of goals. Such conflicts are the most significant for the domain and goals of the organisation. Resolution of conflicts that will clear obstacles to goal achievement and maximise the benefits received must be prioritised. In order to resolve conflicts and refine policies, it is crucial to understand the implications of policies, conflicts and resolutions in terms of goal achievement and benefits to the organisation. A huge number of policies and conflicts can occur within any organisation, and human decision makers are likely to be cognitively overloaded, making it difficult for them to decide which conflicts to prioritise in order to achieve goals successfully while maximising benefits. Automated reasoning mechanisms can effectively support human decision makers in this process. In this thesis, we address the problem of developing effective automated reasoning support for the detection and resolution of conflicts between plans (to achieve a given goal) and policies. We also present an empirical evaluation of a model of conflict detection and prioritisation through experiments with human users. Our empirical evaluations show that providing guidance to users regarding which conflicts to prioritise, and highlighting related conflicts, leads to higher-quality outcomes, with goals achieved successfully and rapidly.
2

Ramachandran, Sowmya. "Theory refinement of Bayesian networks with hidden variables." Thesis, 1998. Digital version accessible at http://wwwlib.umi.com/cr/utexas/main.

3

Mazzarella, Fabio. "The Unlucky broker." Doctoral thesis, Università degli Studi di Salerno, 2012. http://hdl.handle.net/10556/365.

Abstract:
This dissertation collects results of work on the interpretation, characterization and quantification of a novel topic in the field of detection theory - the Unlucky Broker problem - and its asymptotic extension. The same problem can also be applied to the context of Wireless Sensor Networks (WSNs). Suppose that a WSN is engaged in a binary detection task. Each node of the system collects measurements about the state of nature (H0 or H1) to be discovered. A common fusion center receives the observations from the sensors and implements an optimal test (for example, in the Bayesian sense), exploiting its knowledge of the a-priori probabilities of the hypotheses. Later, the priors used in the test are revealed to be inaccurate and a refined pair is made available. Unfortunately, at that time, only a subset of the original data is still available, along with the original decision. In the thesis, we formulate the problem in statistical terms and consider a system made of n sensors engaged in a binary detection task. A successive reduction of the data set's cardinality occurs and multiple refinements are required. The sensors are devices programmed to take the decision from the previous node in the chain and the available data, implement some simple test to decide between the hypotheses, and forward the resulting decision to the next node. The first part of the thesis shows that the optimal test is very difficult to implement even with only two nodes (the unlucky broker problem), because of the strong correlation between the available data and the decision coming from the previous node. Then, to make the designed detector implementable in practice and to ensure analytical tractability, we consider suboptimal local tests. We choose a simple local decision strategy, following the rationale ruling the optimal detector solving the unlucky broker problem: a decision in favor of H0 is always retained by the current node, while when the decision of the previous node is in favor of H1, a local log-likelihood based test is implemented. The main result is that, asymptotically, if we set the false alarm probability of the first node (the one observing the full data set), the false alarm probability decreases along the chain and is non-zero at the last stage. Moreover, very surprisingly, the miss detection probability decays exponentially fast with the square root of the number of nodes, and we provide its closed-form exponent by exploiting tools from random processes and information theory. [edited by the author]
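The suboptimal chain rule described above (retain a previous H0 decision; re-test with a local log-likelihood ratio after an H1 decision) is easy to simulate. The Monte-Carlo sketch below assumes Gaussian observations with unit variance and a zero threshold (equal priors); all parameters are illustrative, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_trials, mu = 8, 20_000, 1.0   # H0: N(0,1) vs H1: N(mu,1)

def llr(x):
    # Log-likelihood ratio of i.i.d. Gaussian samples for H1 vs H0.
    return mu * x - mu**2 / 2

false_alarms = 0
for _ in range(n_trials):
    x = rng.normal(0.0, 1.0, n_nodes)    # data generated under H0
    decision = llr(x).sum() > 0          # first node sees the full data set
    for k in range(n_nodes - 1, 0, -1):  # each later node keeps fewer samples
        if decision:                     # only H1 decisions are re-examined
            decision = llr(x[:k]).sum() > 0
    false_alarms += decision
print("end-of-chain false-alarm rate:", false_alarms / n_trials)
```

Consistent with the abstract, each re-test can only overturn an H1 decision, so the simulated false-alarm rate shrinks along the chain while remaining non-zero at the last node.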
4

Sarigul, Erol. "Interactive Machine Learning for Refinement and Analysis of Segmented CT/MRI Images." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/25954.

Abstract:
This dissertation concerns the development of an interactive machine learning method for refinement and analysis of segmented computed tomography (CT) images. This method uses higher-level, domain-dependent knowledge to improve initial image segmentation results. A knowledge-based refinement and analysis system requires the formulation of domain knowledge. A serious problem faced by knowledge-based system designers is the knowledge acquisition bottleneck. Knowledge acquisition is very challenging and an active research topic in the fields of machine learning and artificial intelligence. Commonly, a knowledge engineer needs a domain expert to formulate acquired knowledge for use in an expert system. That process is rather tedious and error-prone. The domain expert's verbal description can be inaccurate or incomplete, and the knowledge engineer may not correctly interpret the expert's intent. In many cases, domain experts prefer to perform actions instead of explaining their expertise. These problems motivate us to find another solution that makes the knowledge acquisition process less challenging. Instead of trying to acquire expertise from a domain expert verbally, we can ask him/her to show expertise through actions that can be observed by the system. If the system can learn from those actions, this approach is called learning by demonstration. We have developed a system, IntelliPost, that learns region refinement rules automatically. The system observes the steps taken as a human user interactively edits a processed image, and then infers rules from those actions. During the system's learn mode, the user views labeled images and makes refinements through the use of a keyboard and mouse. As the user manipulates the images, the system stores information related to those manual operations and develops internal rules that can be used later for automatic postprocessing of other images. After one or more training sessions, the user places the system into its run mode. The system then accepts new images and uses its rule set to apply postprocessing operations automatically in a manner modeled after those learned from the human user. At any time, the user can return to learn mode to introduce new training information, which the system will use to update its internal rule set. The system does not simply memorize a particular sequence of postprocessing steps during a training session, but instead generalizes from the image data and from the actions of the human user so that new CT images can be refined appropriately. Experimental results have shown that IntelliPost improves the segmentation accuracy of the overall system by applying postprocessing rules. In tests with two different CT datasets of hardwood logs, the use of IntelliPost resulted in improvements of 1.92% and 9.45%, respectively. For two different medical datasets, the use of IntelliPost resulted in improvements of 4.22% and 0.33%, respectively.
5

Arrufat, Ondina. "The refinement and validation of the critical decision making and problem solving scale moral dilema (CDP-MD)." FIU Digital Commons, 1995. http://digitalcommons.fiu.edu/etd/1426.

Abstract:
This thesis extended previous research on critical decision making and problem solving by refining and validating a measure designed to assess the use of critical thinking and critical discussion in sociomoral dilemmas. The purpose of this thesis was twofold: 1) to refine the administration of the Critical Thinking Subscale of the CDP so as to elicit more adequate responses and to refine the coding and scoring procedures for the total measure, and 2) to collect preliminary data on the initial reliabilities of the measure. Subjects consisted of 40 undergraduate students at Florida International University. Results indicate that the use of longer probes on the Critical Thinking Subscale was more effective in eliciting the adequate responses necessary for coding and evaluating the subjects' performance. Analyses of the psychometric properties of the measure consisted of test-retest reliability and inter-rater reliability.
6

Wolf, Lisa Adams. "Testing and refinement of an integrated, ethically-driven environmental model of clinical decision-making in emergency settings." Thesis, Boston College, 2011. http://hdl.handle.net/2345/2224.

Abstract:
The purpose of the study was to explore the relationships among multiple variables within a model of critical thinking and moral reasoning, and to support and refine the elements that significantly correlate with accuracy in clinical decision-making. Background: Research to date has identified multiple factors that are integral to clinical decision-making. The interplay among suggested elements within the decision-making process particular to the nurse, the patient, and the environment remains unknown. Determining the clinical usefulness and predictive capacity of an integrated, ethically driven environmental model of clinical decision-making (IEDEM-CD) in emergency settings in facilitating accuracy in problem identification is critical to initial interventions and safe, cost-effective, quality patient care outcomes. Extending the literature on accuracy and clinical decision-making can inform utilization, determination of staffing ratios, and the development of evidence-driven care models. Methodology: The study used a quantitative descriptive correlational design to examine the relationships between multiple variables within the IEDEM-CD model. A purposive sample of emergency nurses was recruited, resulting in a sample size of 200, calculated to yield a power of 0.80, significance of .05, and a moderate effect size. The dependent variable, accuracy in clinical decision-making, was measured by scores on clinical vignettes. The independent variables of moral reasoning, perceived environment of care, age, gender, certification in emergency nursing, educational level, and years of experience in emergency nursing were measured by the Defining Issues Test, version 2, the Revised Professional Practice Environment scale, and a demographic survey. These instruments were identified to test and refine the elements within the IEDEM-CD model. Data collection occurred via internet survey over a one-month period. Rest's Defining Issues Test, version 2 (DIT-2), the Revised Professional Practice Environment tool (RPPE), clinical vignettes, and a demographic survey were made available as an internet survey package using Qualtrics. Data from each participant were scored and entered into a PASW database. The analysis plan included bivariate correlation analysis using Pearson's product-moment correlation coefficients, followed by chi-square and multiple linear regression analyses. Findings: The elements as identified in the IEDEM-CD model supported moral reasoning and environment of care as factors significantly affecting accuracy in decision-making. Findings reported that in complex clinical situations, higher levels of moral reasoning significantly affected accuracy in problem identification. Attributes of the environment of care, including teamwork, communication about patients, and control over practice, also significantly affected nurses' critical cue recognition and selection of appropriate interventions. Study results supported the conceptualization of the IEDEM-CD model and its usefulness as a framework for predicting clinical decision-making accuracy for emergency nurses in practice, with further implications in education, research, and policy.
7

Raghavan, Venkatesh. "Supporting Multi-Criteria Decision Support Queries over Disparate Data Sources." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/120.

Abstract:
In the era of the "big data revolution," marked by an exponential growth of information, extracting value from data enables analysts and businesses to address challenging problems such as drug discovery, fraud detection, and earthquake prediction. Multi-Criteria Decision Support (MCDS) queries are at the core of big-data analytics, resulting in several classes of MCDS queries such as OLAP, top-k, Pareto-optimal, and nearest-neighbor queries. The intuitive nature of specifying multi-dimensional preferences has made Pareto-optimal queries, also known as skyline queries, popular. Existing skyline algorithms, however, do not address several crucial issues such as performing skyline evaluation over disparate sources, progressively generating skyline results, or robustly handling workloads with multiple skyline-over-join queries. In this dissertation we thoroughly investigate topics in the area of skyline-aware query evaluation. We first propose a novel execution framework called SKIN that treats skylines over joins as first-class citizens during query processing. This is in contrast to existing techniques that treat skylines as an "add-on," loosely integrated with query processing by being placed on top of the query plan. SKIN is effective in exploiting the skyline characteristics of the tuples within individual data sources as well as across disparate sources. This enables SKIN to significantly reduce two primary costs, namely the cost of generating the join results and the cost of skyline comparisons to compute the final results. Second, we address the crucial business need to report results early, as soon as they are generated, so that users can formulate competitive decisions in near real time. On top of SKIN, we built a progressive query evaluation framework, ProgXe, to make the execution of queries involving skylines over joins non-blocking, i.e., progressively generating results early and often. By exploiting SKIN's principle of processing queries at multiple levels of abstraction, ProgXe is able to: (1) extract the output dependencies in the output spaces by analyzing both the input and output space, and (2) exploit this knowledge of abstract-level relationships to guarantee correctness of early output. Third, real-world applications handle query workloads with diverse Quality of Service (QoS) requirements, also referred to as contracts. Time-sensitive queries, such as fraud detection, require results to be output progressively with minimal delay, while ad-hoc and reporting queries can tolerate delay. Building on the principles of ProgXe, we propose the Contract-Aware Query Execution (CAQE) framework to support the open problem of contract-driven multi-query processing. CAQE employs an adaptive execution strategy to continuously monitor the run-time satisfaction of queries and aggressively take corrective steps whenever the contracts are not being met. Lastly, to elucidate the portability of the core principle of this dissertation, reasoning and query processing at different levels of data abstraction, we apply it to an orthogonal research question: auto-generating recommendation queries that help users explore a complex database system. User queries are often too strict or too broad, requiring a frustrating trial-and-error refinement process to meet the desired result cardinality while preserving the original query semantics. Based on the principles of SKIN, we propose CAPRI to automatically generate refined queries that: (1) attain the desired cardinality and (2) minimize changes to the original query intentions. In a comprehensive experimental study of each part of this dissertation, we demonstrate the superiority of the proposed strategies over state-of-the-art techniques in both efficiency and resource consumption.
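For readers unfamiliar with skyline (Pareto-optimal) queries, the core dominance computation is compact. The naive quadratic filter below is only a baseline sketch over invented data; SKIN's contribution is evaluating skylines over joins and disparate sources far more efficiently than this.

```python
# A tuple is in the skyline if no other tuple is at least as good on every
# dimension and strictly better on at least one (here "better" = smaller).
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(tuples):
    return [t for t in tuples if not any(dominates(o, t) for o in tuples)]

# Invented example: hotels as (price, distance-to-venue) pairs.
hotels = [(120, 3.0), (90, 5.0), (150, 1.0), (90, 4.0)]
print(skyline(hotels))  # [(120, 3.0), (150, 1.0), (90, 4.0)]; (90, 5.0) is dominated
```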
8

Darracott, Rosalyn M. "The development and refinement of the practice domain framework as a conceptual tool for understanding and guiding social care practice." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/86048/15/86048.pdf.

Abstract:
This study identified the common factors that influence social care practice across disciplines (such as social work and psychology), practice fields, and geographical contexts and further developed the Practice Domain Framework as an empirically-based conceptual framework to assist practitioners in understanding practice complexities. The framework has application in critical reflection, professional supervision, interdisciplinary understanding, teamwork, management, teaching and research. A mixed-methods design was used to identify the components and structure of the refined framework. Eighteen influential factors were identified and organised into eight domains: the Societal, Structural, Organisational, Practice Field, Professional Practice, Accountable Practice, Community of Place, and Personal.
9

Molinari, David U. "A psychometric examination and refinement of the Canadian Forces Attrition Information Questionnaire, CFAIQ, comparing the reasons cited by anglophones and francophones in the Leave decision process." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq20843.pdf.

10

El, Khalfi Zeineb. "Lexicographic refinements in possibilistic sequential decision-making models." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30269/document.

Abstract:
This work contributes to possibilistic decision theory and more specifically to sequential decision-making under possibilistic uncertainty, at both the theoretical and practical levels. Even though appealing for its ability to handle qualitative decision problems, possibilistic decision theory suffers from an important drawback: qualitative possibilistic utility criteria compare acts through min and max operators, which leads to a drowning effect. To overcome this lack of decision power, several refinements have been proposed in the literature. Lexicographic refinements are particularly appealing since they allow one to benefit from the expected utility background while remaining "qualitative". However, these refinements are defined for non-sequential decision problems only. In this thesis, we present results on the extension of lexicographic preference relations to sequential decision problems, in particular to possibilistic decision trees and Markov decision processes. This leads to new planning algorithms that are more "decisive" than their original possibilistic counterparts. We first present optimistic and pessimistic lexicographic preference relations between policies, with and without intermediate utilities, that refine the optimistic and pessimistic qualitative utilities respectively. We prove that these new criteria satisfy the principle of Pareto efficiency as well as the property of strict monotonicity. The latter guarantees that a dynamic programming algorithm can be used for calculating lexicographic optimal policies. Considering the problem of policy optimization in possibilistic decision trees and finite-horizon Markov decision processes, we provide adaptations of the dynamic programming algorithm that calculate a lexicographic optimal policy in polynomial time. These algorithms are based on the lexicographic comparison of the matrices of trajectories associated with the sub-policies. This algorithmic work is completed with an experimental study that shows the feasibility and the interest of the proposed approach. We then prove that the lexicographic criteria still benefit from an expected utility grounding and can be represented by infinitesimal expected utilities. The last part of our work is devoted to policy optimization in (possibly infinite) stationary Markov decision processes. We propose a value iteration algorithm for the computation of lexicographic optimal policies and extend these results to the infinite-horizon case. Since the size of the matrices increases exponentially (which is especially problematic in the infinite-horizon case), we propose an approximation algorithm which keeps only the most interesting part of each matrix of trajectories, namely the first lines and columns. Finally, we report experimental results that show the effectiveness of the algorithms based on the truncation of the matrices.
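The drowning effect mentioned above, and its lexicographic remedy, can be shown in a few lines: two acts with the same worst utility level are indistinguishable to the pessimistic min criterion, while a leximin comparison of the sorted utility vectors separates them. The acts and utility levels below are invented for illustration.

```python
# Leximin refinement of the pessimistic (min-based) possibilistic utility:
# compare the worst level first, then the next-worst, and so on.
def leximin_key(utilities):
    return sorted(utilities)

act_a = [0.2, 0.9, 0.9]  # same worst case as act_b, better everywhere else
act_b = [0.2, 0.4, 0.5]
print(min(act_a) == min(act_b))                 # True: min cannot separate them
print(leximin_key(act_a) > leximin_key(act_b))  # True: leximin prefers act_a
```

The thesis extends this idea from single acts to policies by lexicographically comparing the matrices of trajectories associated with sub-policies.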

Books on the topic "Decision refinement"

1

Buell, Ryan W., and Harvard Business School, eds. Decision making under information asymmetry: Experimental evidence on belief refinements. [Boston]: Harvard Business School, 2014.

2

Bonissone, Piero, and Kai Gobel. Information Refinement and Revision for Decision Making : Papers from the AAAI Spring Symposium: Modeling for Diagnostics, Prognostics, and Prediction. AAAI Press, 2002.

3

Hallman, William K. What the Public Thinks and Knows About Science—and Why It Matters. Edited by Kathleen Hall Jamieson, Dan M. Kahan, and Dietram A. Scheufele. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780190497620.013.6.

Abstract:
Modern conceptions of science literacy include knowledge of science facts; a grasp of scientific methods, norms, and practices; awareness of current discoveries and controversies involving science and refinement of the ability to comprehend and evaluate their implications; the capability to assess the priorities and actions of scientific institutions; and the capacity to engage in civic discourse and decision-making with regard to specific issues involving science. Advocates of increased science literacy maintain that widespread public understanding of science benefits individuals, culture, society, the economy, the nation, democracy, and science itself. This chapter argues that the relatively crude measures currently employed to assess science literacy are insufficient to demonstrate these outcomes. It is difficult to know whether these benefits are real and are independent of greater levels of education. Existing measures should be supplanted by multidimensional scales that are parsimonious, easy to administer, reliable, and valid over time and across cultures.
4

Great Britain, Department for Transport. Government Response to the Design Refinement Consultation: Decisions and Safeguarding Directions for Northolt and Bromford. The Stationery Office, 2013.

5

Bicchieri, Cristina, and Giacomo Sillari. Game Theory. Edited by Paul Humphreys. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199368815.013.18.

Abstract:
Game theory aims to understand situations in which decision-makers interact strategically. Chess is an example, as are firms competing for business, politicians competing for votes, animals fighting over prey, bidders competing in auctions, threats and punishments in long-term relationships, and so on. In such situations, the outcome depends on what the parties do jointly. Decision-makers may be people, organizations, animals, or even genes. In this chapter, the authors review fundamental notions of game theory and their application to philosophy of science. In particular, Section 1 looks at games of complete information through normal and extensive form representations, and introduces the notion of Nash equilibrium and its refinements. Section 2 touches on epistemic foundations and correlated equilibrium, and Section 3 examines repeated games and their importance for the analysis of altruism and cooperation. Section 4 deals with evolutionary game theory.
6

Oulasvirta, Antti, and Andreas Karrenbauer. Combinatorial Optimization for User Interface Design. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198799603.003.0005.

Abstract:
Combinatorial optimization offers a rigorous but powerful approach to user interface design problems, defining problems mathematically such that they can be solved algorithmically. Design is defined as the algorithmic combination of design decisions to obtain an optimal solution under an objective function. There is a strong rationale for this method. First, core concepts such as ’design task’, ’design objective’, and ’optimal design’ become explicit and actionable. Second, solutions work well in practice, even for some problems traditionally out of reach of manual methods. The method can assist in the generation, refinement, and adaptation of designs. However, the mathematical expression of HCI problems has been challenging and has curbed applications. This chapter introduces combinatorial optimisation from a user interface design point of view and addresses two core challenges: (1) the mathematical definition of design problems and (2) the expression of evaluative knowledge such as design heuristics and predictive models of interaction.
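A toy instance of this framing, design as an algorithmic combination of decisions scored by an objective function, is assigning commands to menu slots by exhaustive search. The command frequencies and slot access costs below are assumptions for illustration only, not from the chapter.

```python
from itertools import permutations

# Invented usage frequencies and per-slot access costs (deeper = slower).
commands = {"copy": 0.5, "paste": 0.3, "undo": 0.15, "redo": 0.05}
slot_cost = [1.0, 1.2, 1.4, 1.6]

def expected_time(order):
    # Objective function: expected selection time for a candidate menu order.
    return sum(commands[c] * slot_cost[i] for i, c in enumerate(order))

best = min(permutations(commands), key=expected_time)
print(best, round(expected_time(best), 3))  # frequent commands end up on top
```

Real design spaces are far too large for brute force, which is why the chapter turns to combinatorial optimization algorithms.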
7

Anwar, Ashraf M., and Folkert Jan ten Cate. Tricuspid and pulmonary valves. Oxford University Press, 2011. http://dx.doi.org/10.1093/med/9780199599639.003.0016.

Abstract:
Right-sided heart valves are complex anatomical structures. Studies describing the morphological and functional assessment of both valves are lacking. Most echocardiographic modalities provide a qualitative rather than quantitative approach. Echocardiography has a central role in the assessment of tricuspid regurgitation through estimation of severity, understanding the mechanism, assessment of pulmonary artery pressure, evaluation of right ventricular function, guidance towards surgery versus medical therapy, and assessment of valve competence after surgery. Transoesophageal echocardiography is an accurate method providing a qualitative assessment of right-sided heart valves. However, the lack of good validation makes it difficult to recommend its use for a quantitative approach. Hopefully, the future will provide refinements in instrumentation and techniques leading to increased accuracy in reporting and cost-effectiveness in making clinical decisions.
8

Newton, Michael A. Part IV The ICC and its Applicable Law, 29 Charging War Crimes: Policy and Prognosis from a Military Perspective. Oxford University Press, 2015. http://dx.doi.org/10.1093/law/9780198705161.003.0029.

Abstract:
The Rome Statute was designed to largely align criminal norms with actual state practice based on the realities of warfare. Article 8 embodied notable new refinements (e.g. in relation to disproportionate attack under Article 8(2)(b)(iv)), but did so against a backdrop of pragmatic military practice. This chapter dissects the structure of war crimes under Rome Statute to demonstrate this deliberate intention of Article 8 and then describes the correlative considerations related to charging practices for the maturing institution, including command responsibility. When properly understood and applied in light of the Elements of Crimes, the Court’s charging decisions with respect to war crimes ought to reflect the paradox that its operative provisions are at once revolutionary yet broadly reflective of the actual practice of warfare.
9

Department of Defense. U. S. Army Attack Aviation in a Decisive Action Environment: History, Doctrine, and a Need for Doctrinal Refinement - Vietnam, Desert Storm, and Iraq War, Rotary Wing Attack, Technology and Sky Cavalry. Independently Published, 2017.

10

Zagare, Frank C., and Branislav L. Slantchev. Game Theory and Other Modeling Approaches. Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190846626.013.401.

Abstract:
Game theory is the science of interactive decision making. It has been used in the field of international relations (IR) for over 50 years. Almost all of the early applications of game theory in international relations drew upon the theory of zero-sum games, but the first generation of applications was also developed during the most intense period of the Cold War. The theoretical foundations for the second wave of the game theory literature in international relations were laid by a mathematician, John Nash, a co-recipient of the 1994 Nobel Prize in economics. His major achievement was to generalize the minimax solution which emerged from the first wave. The result is the now famous Nash equilibrium—the accepted measure of rational behavior in strategic form games. During the third wave, from roughly the early to mid-1980s to the mid-1990s, there was a distinct move away from static strategic form games toward dynamic games depicted in extensive form. The assumption of complete information also fell by the wayside; games of incomplete information became the norm. Technical refinements of Nash’s equilibrium concept both encouraged and facilitated these important developments. In the fourth and final wave, which can be dated, roughly, from around the middle of the 1990s, extensive form games of incomplete information appeared regularly in the strategic literature. The fourth wave is a period in which game theory was no longer considered a niche methodology, having finally emerged as a mainstream theoretical tool.

Book chapters on the topic "Decision refinement"

1

Yang, Zaifu. "Refinement and Stability of Stationary Points." In Theory and Decision Library, 147–70. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4757-4839-0_7.

2

Merkhofer, Miley W., and Lynn C. Maxwell. "Assessment, Refinement, and Narrowing of Options." In Tools to Aid Environmental Decision Making, 231–84. New York, NY: Springer New York, 1999. http://dx.doi.org/10.1007/978-1-4612-1418-2_8.

3

Fijalkow, Nathanaël, Stefan Kiefer, and Mahsa Shirmohammadi. "Trace Refinement in Labelled Markov Decision Processes." In Lecture Notes in Computer Science, 303–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2016. http://dx.doi.org/10.1007/978-3-662-49630-5_18.

4

Dau, Hoang Nhat, Salem Chakhar, Djamila Ouelhadj, and Ahmed M. Abubahia. "Construction and Refinement of Preference Ordered Decision Classes." In Advances in Intelligent Systems and Computing, 248–61. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29933-0_21.

5

Drewes, Frank. "Selected Decision Problems for Square-Refinement Collage Grammars." In Algebraic Foundations in Computer Science, 1–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24897-9_1.

6

Rizzo, Giuseppe, Nicola Fanizzi, Jens Lehmann, and Lorenz Bühmann. "Integrating New Refinement Operators in Terminological Decision Trees Learning." In Lecture Notes in Computer Science, 511–26. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-49004-5_33.

7

Kampa, Maria, George Kampas, Ilias Gkotsis, Youssef Bouali, Anabel Peiró Baquedano, and Rami Iguerwane. "Supporting Decision-Making Through Methodological Scenario Refinement: The PREVENT Project." In Security Informatics and Law Enforcement, 335–56. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69460-9_20.

8

Junges, Sebastian, and Matthijs T. J. Spaan. "Abstraction-Refinement for Hierarchical Probabilistic Models." In Computer Aided Verification, 102–23. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13185-1_6.

Abstract:
Markov decision processes are a ubiquitous formalism for modelling systems with non-deterministic and probabilistic behavior. Verification of these models is subject to the famous state space explosion problem. We alleviate this problem by exploiting a hierarchical structure with repetitive parts. This structure not only occurs naturally in robotics, but also in probabilistic programs describing, e.g., network protocols. Such programs often repeatedly call a subroutine with similar behavior. In this paper, we focus on a local case, in which the subroutines have a limited effect on the overall system state. The key ideas to accelerate analysis of such programs are (1) to treat the behavior of the subroutine as uncertain and only remove this uncertainty by a detailed analysis if needed, and (2) to abstract similar subroutines into a parametric template, and then analyse this template. These two ideas are embedded into an abstraction-refinement loop that analyses hierarchical MDPs. A prototypical implementation shows the efficacy of the approach.
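The first key idea in this abstract, treating a subroutine's behavior as uncertain and removing that uncertainty only when needed, amounts to a bounds-driven control loop. The sketch below is a schematic illustration with invented numbers and function names, not the paper's algorithm.

```python
# Coarse analysis gives cheap bounds on a subroutine's success probability;
# detailed analysis is expensive and only run if the bounds cannot decide.
def analyse_subroutine_coarse():
    return (0.55, 0.90)   # hypothetical loose lower/upper bounds

def analyse_subroutine_detailed():
    return (0.72, 0.74)   # hypothetical tight bounds after refinement

baseline = 0.70           # value of the alternative top-level action
lo, hi = analyse_subroutine_coarse()
if lo > baseline:
    choice = "call subroutine"
elif hi < baseline:
    choice = "take alternative"
else:                     # bounds straddle the baseline: refine the abstraction
    lo, hi = analyse_subroutine_detailed()
    choice = "call subroutine" if lo > baseline else "take alternative"
print(choice)             # -> "call subroutine" (0.72 > 0.70)
```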
9

Varga, Igor, Eduard Bakstein, Greydon Gilmore, and Daniel Novak. "Image-Based Subthalamic Nucleus Segmentation for Deep Brain Surgery with Electrophysiology Aided Refinement." In Multimodal Learning for Clinical Decision Support and Clinical Image-Based Procedures, 34–43. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60946-7_4.

10

Haesaert, Sofie, Alessandro Abate, and Paul M. J. Van den Hof. "Verification of General Markov Decision Processes by Approximate Similarity Relations and Policy Refinement." In Quantitative Evaluation of Systems, 227–43. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43425-4_16.


Conference papers on the topic "Decision refinement"

1

Guo, Xu, and Zongyuan Yang. "Continuous simulation abstraction refinement for Markov decision processes." In 2017 4th International Conference on Systems and Informatics (ICSAI). IEEE, 2017. http://dx.doi.org/10.1109/icsai.2017.8248391.

2

Zhang, Hao, Fengfeng Tan, and Zhan Ma. "Improved fast intra mode decision with reliability refinement." In 2013 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP). IEEE, 2013. http://dx.doi.org/10.1109/chinasip.2013.6625389.

3

Javidi, Tara. "Information acquisition and sequential belief refinement." In 2016 IEEE 55th Conference on Decision and Control (CDC). IEEE, 2016. http://dx.doi.org/10.1109/cdc.2016.7799449.

4

Hu, Xiaohong, Xu Qian, Lei Xi, and Xinming Ma. "Robust image annotation refinement via graph-based learning." In 2009 Chinese Control and Decision Conference (CCDC). IEEE, 2009. http://dx.doi.org/10.1109/ccdc.2009.5192005.

5

Chen, Bin, and XinCheng Tan. "Linear Weighted Median Filtering for Stereo Disparity Refinement." In 2020 Chinese Control And Decision Conference (CCDC). IEEE, 2020. http://dx.doi.org/10.1109/ccdc49329.2020.9164279.

6

Tan, Y. H., Z. G. Li, and S. Rahardja. "Fast mode decision in fine granularity scalability motion refinement." In Optics East 2007, edited by Susanto Rahardja, JongWon Kim, and Jiebo Luo. SPIE, 2007. http://dx.doi.org/10.1117/12.733515.

7

Reissig, Gunther, and Matthias Rungger. "Feedback refinement relations for symbolic controller synthesis." In 2014 IEEE 53rd Annual Conference on Decision and Control (CDC). IEEE, 2014. http://dx.doi.org/10.1109/cdc.2014.7039364.

8

Gao, Longfei, Shengfu Dong, Wenmin Wang, Ronggang Wang, and Wen Gao. "Fast intra mode decision algorithm based on refinement in HEVC." In 2015 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2015. http://dx.doi.org/10.1109/iscas.2015.7168684.

9

Areklett, E. K., A. Sami, N. Milton, and J. Sandal. "The Decision to drill - Prospect risk refinement through technical integration." In 58th EAEG Meeting. Netherlands: EAGE Publications BV, 1996. http://dx.doi.org/10.3997/2214-4609.201408948.

10

Tanaka, S., Z. Wang, J. He, and X. H. Wen. "Decision Making Under Subsurface Uncertainty via Sequential Uncertainty Refinement Method." In EAGE/TNO Workshop on OLYMPUS Field Development Optimization. Netherlands: EAGE Publications BV, 2018. http://dx.doi.org/10.3997/2214-4609.201802293.


Reports on the topic "Decision refinement"

1

O'Neill, H. B., S. A. Wolfe, and C. Duchesne. Ground ice map of Canada. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330294.

Abstract:
This Open File presents national-scale mapping of ground ice conditions in Canada. The mapping depicts a first-order estimate of the combined volumetric percentage of excess ice in the top 5 m of permafrost from segregated, wedge, and relict ice. The estimates for the three ice types are based on modelling by O'Neill et al. (2019) (https://doi.org/10.5194/tc-13-753-2019), and informed by available published values of ground ice content and expert knowledge. The mapping offers an improved depiction of ground ice in Canada at a broad scale, incorporating current knowledge on the associations between geological and environmental conditions and ground ice type and abundance. It provides a foundation for hypothesis testing related to broad-scale controls on ground ice formation, preservation, and melt. Additional compilation of quantitative field data on ground ice and improvements to national-scale surficial geology mapping will allow further assessment and refinement of the representation of ground ice in Canada. Continued research will focus on improving the lateral and vertical representation of ground ice required for incorporation into Earth system models and decision-making. Spatial data files of the mapping are available as downloads with this Open File.
2

Saldanha, Ian J., Andrea C. Skelly, Kelly Vander Ley, Zhen Wang, Elise Berliner, Eric B. Bass, Beth Devine, et al. Inclusion of Nonrandomized Studies of Interventions in Systematic Reviews of Intervention Effectiveness: An Update. Agency for Healthcare Research and Quality (AHRQ), September 2022. http://dx.doi.org/10.23970/ahrqepcmethodsguidenrsi.

Abstract:
Introduction: Nonrandomized studies of interventions (NRSIs) are observational or experimental studies of the effectiveness and/or harms of interventions, in which participants are not randomized to intervention groups. There is increasingly widespread recognition that advancements in the design and analysis of NRSIs allow NRSI evidence to have a much more prominent role in decision making, and not just as ancillary evidence to randomized controlled trials (RCTs). Objective: To guide decisions about inclusion of NRSIs for addressing the effects of interventions in systematic reviews (SRs), this chapter updates the 2010 guidance on inclusion of NRSIs in Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) SRs. The chapter focuses on considerations for decisions to include or exclude NRSIs in SRs. Methods: In November 2020, AHRQ convened a 20-member workgroup that comprised 13 members representing 8 of 9 AHRQ-appointed EPCs, 3 AHRQ representatives, 1 independent consultant with expertise in SRs, and 3 representatives of the AHRQ-appointed Scientific Resource Center. The workgroup received input from the full EPC Program regarding the process and specific issues through discussions at a virtual meeting and two online surveys regarding challenges with NRSI inclusion in SRs. One survey focused on current practices by EPCs regarding NRSI inclusion in ongoing and recently completed SRs. The other survey focused on the appropriateness, completeness, and usefulness of existing EPC Program methods guidance. The workgroup considered the virtual meeting and survey input when identifying aspects of the guidance that needed updating. The workgroup used an informal method for generating consensus about guidance. Disagreements were resolved through discussion. Results: We outline considerations for the inclusion of NRSIs in SRs of intervention effectiveness. We describe the strengths and limitations of RCTs, study design features and types of NRSIs, and key considerations for making decisions about inclusion of NRSIs (during the stages of topic scoping and refinement, SR team formation, protocol development, SR conduct, and SR reporting). We discuss how NRSIs may be applicable for the decisional dilemma being addressed in the SR, threats to the internal validity of NRSIs, as well as various data sources and advanced analytic methods that may be used in NRSIs. Finally, we outline an approach to incorporating NRSIs within an SR and key considerations for reporting. Conclusion: The main change from the previous guidance is the overall approach to decisions about inclusion of NRSIs in EPC SRs. Instead of recommending NRSI inclusion only if RCTs are insufficient to address the Key Question, this updated guidance handles NRSI evidence as a valuable source of information and lays out important considerations for decisions about the inclusion of NRSIs in SRs of intervention effectiveness. Different topics may require different decisions regarding NRSI inclusion. This guidance is intended to improve the utility of the final product to end-users. Inclusion of NRSIs will increase the scope, time, and resources needed to complete SRs, and NRSIs pose potential threats to validity, such as selection bias, confounding, and misclassification of interventions. Careful consideration must be given to both concerns.
3

Lindsay, Douglas T. US Army Attack Aviation in a Decisive Action Environment: History, Doctrine, and a Need for Doctrinal Refinement. Fort Belvoir, VA: Defense Technical Information Center, April 2015. http://dx.doi.org/10.21236/ad1001526.

4

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Full text
Abstract:
Background: Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated at 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low-dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning.
Evidence Check questions: This review aimed to address the following questions:
1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals?
2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals?
3. What are the main components of recent major lung cancer screening programs or trials?
4. What is the cost-effectiveness of lung cancer screening programs (including studies of cost–utility)?
Summary of methods: The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions.
Key findings
Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials of high-risk individuals to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8%–26.7%; P=0.004) relative reduction in lung cancer mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility was defined as people aged 55–74 years with a smoking history of ≥30 pack-years (one pack-year equals smoking one pack of 20 cigarettes a day for one year; a worked example follows this abstract), with former smokers also required to have quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI 1.2%–13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), found an even greater reduction of 26% (95% CI 9%–41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. These seven trials also demonstrated a significantly greater proportion of early-stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, considering results across both mortality outcomes and early-stage cancers diagnosed, LDCT screening is considered clinically effective.
Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false-positive tests and the consequences of unnecessary invasive follow-up procedures for conditions eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings, compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as LDCT screening protocols are refined through: i) the application of risk prediction models (e.g. the PLCOm2012), which enable more accurate selection of the high-risk population using specific criteria beyond age and smoking history; ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regard to smoking cessation, there is no evidence that screening participation invokes a false sense of reassurance in smokers or reduces their motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with the general population, suggest those who participate in screening trials may already be motivated to quit.
Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised it into a concise set of components for potential implementation of a national lung cancer screening program in Australia:
1. Identifying the high-risk population: recruitment, eligibility, selection and referral
2. Educating the public, people at high risk and healthcare providers, including creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making
3. Components necessary for health services to deliver a screening program:
a. Planning phase: e.g. human resources to coordinate the program, and electronic data systems that integrate medical records information and link to an established national registry
b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpret reports and communicate results to participants
c. Monitoring and evaluation phase: e.g. monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program
4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, and reporting outcomes to enhance international research into LDCT screening
5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening, or referral to existing community or hospital-based services that deliver cessation interventions.
Most original studies are single-institution evaluations containing descriptive data about the processes required to establish and implement a high-risk population-based screening program. Across all studies there is a consistent message about the challenges and complexities of establishing LDCT screening programs that attract the people at high risk who stand to gain the greatest benefit from participation. With regard to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating cessation interventions into an LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a 'teachable moment' for cessation advice, especially among people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9)
Question 4: What is the cost-effectiveness of lung cancer screening programs (including studies of cost–utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors, including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (with LDCT screening) rather than later (without it). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies—five modelling studies, one discrete choice experiment and seven articles—that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in US and European settings. Two studies—one from Australia and one from New Zealand—reported LDCT screening would not be cost-effective using NLST-like protocols. We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will be updated with new data that reduce uncertainty about factors influencing modelling outcomes, including findings on indeterminate nodules.
Gaps in the evidence: There is a large and accessible body of evidence on the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, its relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design. The International Lung Screening Trial is still in the recruitment phase, and findings were not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to "inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia".(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about the transferability of its criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to "important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability".(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia, yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to incorporate new data from the NELSON trial and data about the costs of targeted therapies and immunotherapies as these treatments become more widely available in Australia.
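As an illustrative aside (not part of the cited review), the two quantities underpinning the eligibility and effectiveness figures quoted above reduce to simple arithmetic. The following LaTeX sketch shows the standard pack-year calculation and the relative risk reduction implied by the pooled RR of 0.82; the smoker in the worked example is hypothetical.
% Pack-years: cumulative smoking exposure, packs per day times years smoked.
\[
\text{pack-years} = \frac{\text{cigarettes smoked per day}}{20} \times \text{years smoked}
\]
% Hypothetical example: 30 cigarettes a day for 20 years gives
% (30/20) * 20 = 30 pack-years, exactly the NLST eligibility threshold of >= 30.
\[
\frac{30}{20} \times 20 = 30 \ \text{pack-years}
\]
% Relative risk reduction: one minus the pooled relative risk, so the
% RR of 0.82 quoted above corresponds to an 18% relative reduction
% in lung cancer mortality in the screened groups.
\[
\text{RRR} = 1 - \text{RR} = 1 - 0.82 = 0.18 \ (18\%)
\]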
APA, Harvard, Vancouver, ISO, and other styles