
Theses on the topic « Imprecisione »

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles


Consult the 35 best theses for your research on the topic « Imprecisione ».

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

CAMPAGNER, ANDREA. « Robust Learning Methods for Imprecise Data and Cautious Inference ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404829.

Abstract:
The representation, quantification and proper management of uncertainty is one of the central problems in Artificial Intelligence, and particularly so in Machine Learning, in which uncertainty is intrinsically tied to the inductive nature of the learning problem. Among the different forms of uncertainty, the modeling of imprecision, that is, the problem of dealing with data or knowledge that are imperfect or incomplete, has recently attracted interest in the research community, for its theoretical and application-oriented implications for the practice and use of Machine Learning-based tools and methods. This work focuses on the problem of dealing with imprecision in Machine Learning from two different perspectives. On the one hand, imprecision may affect the input data of a Machine Learning pipeline, leading to the problem of learning from imprecise data. On the other hand, imprecision may be used as a way to implement uncertainty quantification for Machine Learning methods, by allowing them to provide set-valued predictions, leading to so-called cautious inference methods. The aim of this work, then, is to investigate theoretical as well as empirical issues related to the two settings mentioned above. Within the context of learning from imprecise data, the focus is on the learning-from-fuzzy-labels setting, from both a learning-theoretical and an algorithmic point of view. The main contributions include: a learning-theoretical characterization of the hardness of the learning from fuzzy labels problem; the proposal of a novel, pseudo-label-based ensemble learning algorithm, along with its theoretical study and empirical analysis, which show it to provide promising results in comparison with the state of the art; the application of this algorithm to three relevant real-world medical problems, in which imprecision arises, respectively, from conflicting expert opinions, the use of vague technical vocabulary, and individual variability in biochemical parameters; and the proposal of feature selection algorithms that may help reduce the computational complexity of this task and limit the curse of dimensionality. Within the context of cautious inference, the focus is on the theoretical study of three popular cautious inference frameworks, as well as on the development of novel algorithms and approaches to further the application of cautious inference in relevant settings. The main contributions include the study of the theoretical properties of, and relationships among, decision-theoretic, selective prediction and conformal prediction methods; the proposal of novel cautious inference techniques drawing on the interaction between decision-theoretic and conformal prediction methods, and their evaluation in medical settings; and the study of ensembles of cautious inference models, from both an empirical and a theoretical point of view, showing that such ensembles can improve robustness and generalization and facilitate the application of cautious inference methods to multi-source and multi-modal data.
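
As a concrete illustration of the set-valued, cautious predictions discussed in this abstract, the sketch below implements split (inductive) conformal prediction for classification, one of the frameworks studied. This is the generic textbook construction, not the thesis's ensemble algorithms; the dataset and model are arbitrary placeholders.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Split conformal prediction: calibrate a nonconformity score on held-out
# data, then return every label whose score clears the calibrated threshold.
X, y = load_iris(return_X_y=True)
X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_fit, y_fit)

alpha = 0.1  # target miscoverage: sets should cover the true label ~90% of the time
cal_probs = model.predict_proba(X_cal)
cal_scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]  # 1 - p(true class)
n = len(cal_scores)
# Finite-sample-corrected quantile of the calibration scores.
qhat = np.quantile(cal_scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

def prediction_set(x):
    """Set-valued (cautious) prediction: all labels not rejected at level alpha."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    return [label for label, p in enumerate(probs) if 1.0 - p <= qhat]

print(prediction_set(X_cal[0]))  # a singleton when confident, a larger set when uncertain
```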
2

Schoenfield, Miriam. « Imprecision in normative domains ». Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/72922.

Abstract:
Thesis (Ph. D. in Philosophy)--Massachusetts Institute of Technology, Dept. of Linguistics and Philosophy, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references.
Being rational and being moral can be difficult. However, some theories of rationality and morality make living up to these ideals too difficult by imposing requirements which are excessively rigid. In this dissertation, I defend and explore the implications of relaxing some of these requirements. I first consider the implications of thinking that rational agents' doxastic attitudes can be represented by imprecise, rather than precise probabilities. In defending this position, I develop a distinction between an idealized, and less idealized notion of rationality. I then explore the moral implications of the thought that facts about value cannot be represented by a precise value function. Finally, I defend permissivism, the view that sometimes there is more than one doxastic attitude that it is rationally permissible to adopt given a particular body of evidence, and show that this view has some interesting implications for questions about higher order evidence.
by Miriam Schoenfield.
Ph.D. in Philosophy
3

Nguyen, Vu-Linh. « Imprecision in machine learning problems ». Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2433.

Abstract:
We focus on imprecision modeling in machine learning problems, where the available data or knowledge suffer from important imperfections. In this work, imperfect data refers to situations where either some features or the labels are imperfectly known, that is, can be specified only by sets of possible values rather than precise ones. Learning from partial data is commonly encountered in various fields, such as bio-statistics, agronomy, or economics. Such data can be generated by coarse or censored measurements, or can be obtained from expert opinions. Imperfect knowledge, on the other hand, refers to situations where the data are precisely specified, yet there are classes that cannot be distinguished, either due to a lack of knowledge (epistemic uncertainty) or due to high inherent variability (aleatoric uncertainty). Considering the problem of learning from partially specified data, we highlight the potential issues of dealing with multiple optimal classes and multiple optimal models in the inference and learning steps, respectively. We propose active learning approaches to reduce the imprecision in these situations. The epistemic/aleatoric distinction has been well studied in the literature; to facilitate subsequent machine learning applications, we develop practical procedures to estimate these two degrees of uncertainty for popular classifiers. In particular, we explore the use of this distinction in the contexts of active learning and cautious inference.
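
One common recipe for making the epistemic/aleatoric distinction operational for a classifier, offered here as an illustrative sketch rather than the thesis's own procedure, is the entropy decomposition over an ensemble: the entropy of the averaged prediction splits into the average member entropy (aleatoric) plus a mutual-information remainder (epistemic).

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy in nats along the last axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def uncertainty_decomposition(member_probs):
    """member_probs: array (n_members, n_classes) of predicted class
    probabilities for one instance, one row per ensemble member.

    Returns (total, aleatoric, epistemic):
      total     = H(mean_m p_m)       entropy of the averaged prediction
      aleatoric = mean_m H(p_m)       average member entropy
      epistemic = total - aleatoric   mutual information (member disagreement)
    """
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)
    aleatoric = entropy(member_probs).mean()
    return total, aleatoric, total - aleatoric

# Members agree on a uniform prediction: high aleatoric, near-zero epistemic.
agree = np.array([[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])
# Members confident but contradictory: low aleatoric, high epistemic.
disagree = np.array([[0.99, 0.01], [0.01, 0.99], [0.99, 0.01]])
print(uncertainty_decomposition(agree))
print(uncertainty_decomposition(disagree))
```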
4

Naji, Zeyad Tarik. « Correcting for data imprecision in MRP2 systems ». Thesis, Cranfield University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280967.

5

Portman, Martin. « Imprecision in real-time systems : theory and practice ». Thesis, University of York, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282288.

6

Edwards, Peter J. « Analogue imprecision in MLPs: implications and learning improvements ». Thesis, University of Edinburgh, 1994. http://hdl.handle.net/1842/13772.

Abstract:
Analogue hardware implementations of Multi-Layer Perceptrons (MLPs) have a limited precision that has a detrimental effect on the result of synaptic multiplication. At the same time, however, the accuracy of the circuits can be very high with good design. This thesis investigates the consequences of this imprecision for the performance of the MLP, examining whether it is accuracy or precision that matters in neural computation. The results demonstrate that, far from having a detrimental effect, the imprecision, or synaptic weight noise, enhances the performance of the solution. In particular, fault tolerance and generalisation ability are improved and, under certain conditions, the learning trajectory of the training network is also improved. The enhancements are established through a mathematical analysis and subsequent verification experiments. Simulation experiments examine the underlying mechanisms and probe the limitations of the technique as an enhancement scheme. For a variety of problems, precision is shown to be significantly less important than accuracy; in fact imprecision can have beneficial effects on learning performance.
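
The mechanism studied here is easy to reproduce in simulation: add zero-mean Gaussian noise to the weights on every training forward pass, and remove it at test time. The sketch below is a minimal illustration of such weight-noise injection, not the thesis's analogue hardware model; the layer sizes and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, W1, b1, W2, b2, weight_noise=0.0):
    """Forward pass of a one-hidden-layer MLP with synaptic weight noise.

    On each training pass, independent zero-mean Gaussian noise is added to
    every weight, mimicking the limited precision of analogue multipliers;
    setting weight_noise=0.0 at test time restores the clean network.
    """
    W1n = W1 + rng.normal(0.0, weight_noise, W1.shape)
    W2n = W2 + rng.normal(0.0, weight_noise, W2.shape)
    h = np.tanh(x @ W1n + b1)
    return h @ W2n + b2

# Toy shapes: 4 inputs, 8 hidden units, 2 outputs (arbitrary).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
x = rng.normal(size=(5, 4))  # batch of 5 inputs

train_out = noisy_forward(x, W1, b1, W2, b2, weight_noise=0.05)  # noisy training pass
test_out = noisy_forward(x, W1, b1, W2, b2)                      # clean test pass
```

Averaged over the noise, the injected perturbation acts like a penalty on the loss's sensitivity to weight perturbations, which is consistent with the improved fault tolerance and generalisation reported above.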
7

Haywood, S. M. « Estimating and visualising imprecision in radiological emergency response assessments ». Thesis, Cranfield University, 2011. http://dspace.lib.cranfield.ac.uk/handle/1826/6156.

Abstract:
After an accidental release of radioactivity to the atmosphere, modelling assessments are needed to predict what the contamination levels are likely to be and what measures need to be taken to protect human health. These predictions will be imprecise due to lack of knowledge about the nature of the release and the weather, and also due to measurement inaccuracy. This thesis describes work to investigate this imprecision and to find better ways of including it in assessments and representing it in results. It starts by reviewing exposure pathways and the basic dose calculations in an emergency response assessment. The possible variability of key parameters in emergency dose calculations is considered, and ranges are developed for each. The imprecision typically associated with calculational endpoints is explored through a sensitivity study, carried out both with a simple Gaussian atmospheric dispersion model and with real-time weather data in combination with a complex atmospheric dispersion model. The key parameters influencing assessment imprecision are identified: factors relating to the release, arising from the inevitable lack of knowledge in the early stages of an accident, and factors relating to meteorology and dispersion. An alternative, improved approach to emergency response assessments is then outlined, which retains a simple and transparent assessment capability but also indicates the imprecision that incomplete knowledge imposes on the results. This tool uses input from real-time atmospheric dispersion and weather prediction tools. A prototype version of the tool has been created and used to produce example results. The final stage of the thesis describes the use of the new tool to develop ways in which imprecise or uncertain information can be presented to decision makers. Alternative presentational techniques are demonstrated using example results.
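
The kind of sensitivity study described here can be pictured with a small Monte Carlo experiment: propagate imprecise release and weather parameters through the textbook ground-level Gaussian plume formula and read off the spread of the predicted concentration. All parameter ranges below are purely illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

def ground_level_concentration(Q, u, y, sigma_y, sigma_z, H):
    """Textbook Gaussian plume concentration at ground level (z = 0).

    Q: release rate (Bq/s), u: wind speed (m/s), y: crosswind offset (m),
    sigma_y/sigma_z: dispersion parameters at the downwind distance of
    interest (m), H: effective release height (m). Includes ground reflection.
    """
    return (Q / (2.0 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * 2.0 * np.exp(-H**2 / (2.0 * sigma_z**2)))

# Illustrative imprecision only: source term known to within an order of
# magnitude, wind speed to within a factor of ~3; dispersion held fixed.
n = 10_000
Q = 10.0 ** rng.uniform(11.0, 12.0, n)  # hypothetical source-term range, Bq/s
u = rng.uniform(2.0, 6.0, n)            # hypothetical wind-speed range, m/s
c = ground_level_concentration(Q, u, y=0.0, sigma_y=80.0, sigma_z=50.0, H=30.0)

lo, med, hi = np.percentile(c, [5, 50, 95])
print(f"concentration at the receptor: {med:.2e} Bq/m3 (90% band {lo:.2e}-{hi:.2e})")
```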
8

Straszecka, Ewa. « Measures of uncertainty and imprecision in medical diagnosis support ». Habilitation thesis, Wydawnictwo Politechniki Śląskiej, 2010. https://delibra.bg.polsl.pl/dlibra/docmetadata?showContent=true&id=997.

9

Eguiguren, Praeli Francisco José. « El actual estado de emergencia : Justificación, alcances, imprecisiones y riesgos ». Foro Jurídico, 2017. http://repositorio.pucp.edu.pe/index/handle/123456789/119505.

10

Crossman, Richard John. « Limiting conditional distributions : imprecision and relation to the hazard rate ». Thesis, Durham University, 2009. http://etheses.dur.ac.uk/14/.

Abstract:
Many Markov chains with a single absorbing state have a unique limiting conditional distribution (LCD) to which they converge, conditioned on non-absorption, regardless of the initial distribution. If this limiting conditional distribution is used as the initial distribution over the non-absorbing states, then the probability distribution of the process at time n, conditioned on non-absorption, is the same for all values of n>0. Such an initial distribution is known as the quasi-stationary distribution (QSD). Thus the LCD and QSD are equal. These distributions can be found in both the discrete-time and continuous-time cases. In this thesis we consider finite Markov chains which have one absorbing state, and for which all other states form a single communicating class. In addition, every state is aperiodic. These conditions ensure the existence of a unique LCD. We first consider continuous-time Markov chains in the context of survival analysis, and in particular the hazard rate, a function which measures the risk of instantaneous failure of a system at time t conditioned on the system not having failed before t. It is well known that the QSD leads to a constant hazard rate, and that the hazard rate generated by any other initial distribution tends to that constant rate. Claims have been made by Aalen and by Aalen and Gjessing that it may be possible to predict the shape of hazard rates generated by phase-type distributions (first-passage-time distributions generated by atomic initial distributions) by comparing these initial distributions with the QSD. In Chapter 2 we consider these claims, and demonstrate through several examples that the behaviour considered by those conjectures is more complex than previously believed. In Chapters 3 and 4 we consider discrete-time Markov chains in the context of imprecise probability. In many situations it may be unrealistic to assume that the transition matrix of a Markov chain can be determined exactly; it may be more plausible to determine upper and lower bounds on each element, or even closed sets of probability distributions to which the rows of the matrix may belong. Such methods have been discussed by Kozine and Utkin and by Skulj, and in each of these papers results were given regarding the long-term behaviour of such processes; none of them considered Markov chains with an absorbing state. In Chapter 3 we demonstrate that, under the assumption that the transition matrix cannot change from time step to time step, there exists an imprecise generalisation of both the LCD and the QSD, and that these two generalisations are equal. In Chapter 4, we prove that this result holds even when we no longer assume that the transition matrix cannot change from time step to time step. In each chapter, examples are presented demonstrating the convergence of such processes, and Chapter 4 includes a comparison between the two methods.
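
For the finite chains considered here (one absorbing state, the remaining states a single aperiodic communicating class), the QSD can be computed directly as the normalized left Perron eigenvector of the transition matrix restricted to the transient states. A minimal sketch with a toy chain:

```python
import numpy as np

def quasi_stationary_distribution(P, absorbing):
    """QSD of a discrete-time finite Markov chain with one absorbing state.

    P: (n, n) row-stochastic transition matrix; absorbing: index of the
    absorbing state. Restricting P to the transient states gives a
    sub-stochastic matrix Q; the QSD is the left Perron eigenvector of Q,
    normalized to sum to 1.
    """
    keep = [i for i in range(P.shape[0]) if i != absorbing]
    Q = P[np.ix_(keep, keep)]
    eigvals, eigvecs = np.linalg.eig(Q.T)   # left eigenvectors of Q
    i = np.argmax(eigvals.real)             # Perron root (real and simple here)
    v = np.abs(eigvecs[:, i].real)
    return v / v.sum()

# Toy chain: states 0 and 1 transient, state 2 absorbing.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.0, 0.0, 1.0]])
qsd = quasi_stationary_distribution(P, absorbing=2)
print(qsd)  # started from qsd, the law conditioned on non-absorption stays qsd
```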
11

Frutero, Moreno Catalina. « La flagrancia : Imprecisiones de su aplicabilidad en la legislación procesal mexicana ». Master's thesis, Universidad Autónoma del Estado de México, 2017. http://hdl.handle.net/20.500.11799/99038.

Abstract:
When we speak of liability in the criminal legal system, we can understand it as the situation in which a person has failed in a duty legally imposed by a juridical norm. The obligation borne by that subject is considered, in the first instance, a pre-existing obligation, that is, a legal obligation, since it is a rule of conduct that the legislator has laid down and that, in turn, requires one to act or to refrain from acting.
12

Rogova, Ermir. « Treatment of imprecision in data repositories with the aid of KNOLAP ». Thesis, University of Westminster, 2010. https://westminsterresearch.westminster.ac.uk/item/907q7/treatment-of-imprecision-in-data-repositories-with-the-aid-of-knolap.

Abstract:
Traditional data repositories, introduced for the needs of business processing, typically focus on the storage and querying of crisp domains of data. As a result, current commercial data repositories have no facilities for either storing or querying imprecise/approximate data. No significant attempt has been made at a generic and application-independent representation of value imprecision, mainly as a property of axes of analysis and as part of a dynamic environment in which potential users may wish to define their "own" axes of analysis for querying either precise or imprecise facts. In such cases, measured values and facts are characterised by descriptive values drawn from a number of dimensions, whereas the values of a dimension are organised in hierarchical levels. A solution named H-IFS is presented that allows the representation of flexible hierarchies as part of the dimension structures. An extended multidimensional model named IF-Cube is put forward, which allows the representation of imprecision in facts and dimensions and the answering of queries based on imprecise hierarchical preferences. Based on the H-IFS and IF-Cube concepts, a post-relational OLAP environment is delivered, whose implementation is DBMS-independent and whose performance depends solely on the underlying DBMS engine.
13

PIRUS, DENISE Gardan Yvon. « IMPRECISIONS NUMERIQUES : METHODE D'ESTIMATION ET DE CONTROLE DE LA PRECISION EN C.A.O / ». [S.l.] : [s.n.], 1997. ftp://ftp.scd.univ-metz.fr/pub/Theses/1997/Pirus.Denise.SMZ9703.pdf.

14

Appel, Jacob M. « Toward a model rule: Statutory imprecision and surrogate decision-making for pregnant women ». Thesis, Icahn School of Medicine at Mount Sinai, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=1535747.

Abstract:
This paper seeks to investigate how concerns regarding pregnant women have been resolved by state legislatures when drafting surrogacy and advance directive statutes. It also examines two related questions: Have narrow concerns regarding a relatively rare phenomenon had a significant and potentially detrimental impact on overall state policy regarding end-of-life decision making? And what lessons can be drawn from these experiences for understanding future policy battles at the nexus of bioethics and public health?

15

Tocts, Ashley M. S. « The Role of Adaptive Imprecision in Evolvability: A Survey of the Literature and Wild Populations ». Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10749890.

Abstract:
Natural selection, the driving force behind evolution, acts on individual phenotypes. Phenotypes are the result of an individual’s genotype, but the development from genotype to phenotype is not always accurate and precise. Developmental instability (DI: random perturbations in the microenvironment during development) can result in a phenotype that misses its genetic target. In the current study I assert that developmental instability may itself be an evolvable trait. Here I present evidence for DI’s heritability, selectability, and phenotypic variation in the form of empirical data and evidence from the literature from the years 2006 through 2016. Phenotypic variation contributed by DI was estimated using fluctuating asymmetry and was found to contribute up to 60% of the phenotypic variation in certain trait types. I suggest that selection against developmental instability in some traits may result in higher evolvabilities (i.e., rates of evolution) for those traits or for entire taxonomic groups.

16

Roque, Roca Naffis Rubén. « Estudio lingüístico de la imprecisión léxica y de la redundancia en los diarios formales de Lima ». Master's thesis, Universidad Nacional Mayor de San Marcos, 2016. https://hdl.handle.net/20.500.12672/5288.

Abstract:
Characterizes and linguistically explains the cases of lexical imprecision and redundancy in the formal newspapers of Lima. To this end, it selects a corpus of three formal newspapers from the capital: El Comercio, Perú21 and La República. It then carries out the analysis and linguistic explanation of the most frequent cases, and determines the appropriate expressions to replace the statements containing redundancy and lexical imprecision. Finally, the appendix presents the data in their original journalistic context.
17

Friedman, Oxenstein Jackelyn. « Imprecisiones de índole nutricional en el contenido de informaciones relativas a la nutrición publicadas en los diarios de mayor lectoría de Lima Metropolitana ». Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2013. http://hdl.handle.net/10757/314946.

Abstract:
Introduction: Media outlets that cover health topics have become intermediaries between health organizations and citizens interested in health and well-being.(1) The way scientific and nutritional topics are communicated can have important effects on the public's understanding, attitudes and behaviors and, therefore, on their well-being.(2) Although the general public shows growing attention to health and nutrition topics disseminated by the media, it also acknowledges feeling poorly informed about them.(3) Objective: To determine the presence of nutrition-related inaccuracies in the content of nutrition information published in the eight most widely read newspapers of Metropolitan Lima. Materials and methods: Non-experimental, cross-sectional, descriptive study with a qualitative research method. Between September 17, 2012 and October 17, 2012, 31 issues of each of the selected newspapers were collected: Trome, Ojo, El Comercio, Perú 21, Depor, Correo, El Popular and Ajá (248 newspapers in total). Results: 144 nutrition-related pieces were found in these issues, of which 109 (75.7%) contained nutrition-related inaccuracies. The popular newspapers (Trome, Ojo, El Popular and Ajá) were responsible for 87 of the 109 inaccurate pieces, while the remaining 22 appeared in the serious newspapers (El Comercio, Perú 21 and Correo). The sports newspaper (Depor) published no nutrition-related information during the study period. Six types of inaccuracies were detected most frequently: (1) not specifying the recommended amount of a food, (2) not specifying the appropriate way to consume a food, (3) not specifying the sources of the information, (4) lack of relevant nutritional information, (5) presence of confusing information, and (6) presence of erroneous information. Conclusions: The content analysis of the nutrition-related information in the eight most widely read newspapers of Metropolitan Lima showed that most of it (75.7%) contains nutrition-related inaccuracies. It is therefore evident that the journalistic criteria of validation, rigor and prudence are not being met when nutrition-related information is produced. The results also reflect the urgency and importance of joint work between nutritionists and the journalists responsible for disseminating news related to nutrition and healthy eating, which would allow the population to receive adequate information and use it to modify attitudes and behaviors in favor of their health, given the impact the media have on their users' eating habits.
18

Madej, Roberta M. « The Impact of Imprecision in HCV Viral Load Test Results on Clinicians’ Therapeutic Management Decisions and on the Economic Value of the Test ». VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3259.

Abstract:
Clinical laboratory test results are integral to patient management. Important aspects of laboratory tests' contributions are the use of the test information and their role in facilitating efficient and effective use of healthcare resources. Methods of measuring those contributions were examined using quantitative HCV RNA test results (HCV VL) in therapeutic management decisions as a model. Test precision is important in those decisions; therefore, clinical use was evaluated by studying the impact that knowledge of inherent assay imprecision had on clinicians' decisions. A survey describing a simulated patient at a decision point for HCV triple-combination therapy management was sent to 1491 hepatology clinicians. Participants saw HCV RNA results at five different levels and were asked to choose whether to continue therapy, discontinue therapy, or repeat the test. Test results were presented both with and without the 95% confidence intervals (CIs). Three of the VLs had CIs that overlapped the therapeutic decision level. Participants saw both sets of results in random order. Demographics and practice preferences were also surveyed. One hundred thirty-eight responses were received. Adherence to clinical guidelines was demonstrated in self-reported behaviors and in most decisions. However, participants chose to repeat the test up to 37% of the time. Knowledge of assay imprecision did not have a statistically significant effect on clinicians' decisions. To determine economic value, an analytic decision-tree model was developed. Transition probabilities, costs, and quality-of-life values were derived from published literature. Survey respondents' decisions were used as model inputs. Across all HCV VL levels, the calculated test value was approximately $2600, with up to $17,000 in treatment-related cost savings per patient at higher HCV VLs. The test value prevailed regardless of the presence or absence of CIs, and despite repeat testing. The calculated value in cost savings per patient was up to 100 times the investment in HCV VL testing. Laboratory tests are investments in efficient use of healthcare resources, and proper interpretation and use of their information is integral to that value. This type of analysis can inform institutional decisions and higher-level policy discussions.
19

Delgado-Fernandez, Lourdes. « Los límites de la fotografía : la imprecisión de las ruedas fotográficas para el reconocimiento de sospechosos en Estados Unidos ». Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/586192.

Abstract:
In the United States, as demonstrated by the many exonerations obtained in recent years through DNA testing, numerous innocent people are convicted on the basis of mistaken eyewitness identifications. Photography plays a decisive role in this, since photo lineups are today the most common identification method. The main objectives of this thesis are: first, to demonstrate that photography is not an infallible, or even effective, tool as identification evidence. Second, to understand why the US police and judicial systems, aware of the flaws of this procedure, allow the use of photo lineups. And, finally, to offer arguments from the psychology of facial recognition and person perception so that eyewitness psychologists consider the need to undertake new comparative studies between photo and video lineups.
20

Bedin, Luis Gustavo. « Laboratórios via sistema tradicional e espectroscopia de reflectância : avaliação da qualidade analítica dos atributos do solo ». Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/11/11140/tde-09112016-112536/.

Abstract:
Soil analysis is an essential tool for liming recommendation, fertilization and soil management. Considering the increasing demand for food and the need for a sustainable increase in agricultural productivity, it is essential to keep improving the quality of soil analyses while reducing their cost and turnaround time. In this sense, remote sensing techniques, at laboratory, field, aerial and orbital levels, have advantages, especially for the assessment of large areas. The quality of laboratory measurements is critical for soil management recommendations, which makes it important to question the degree of analytical variability between different laboratories and measurements via reflectance spectroscopy. This study aimed to evaluate the uncertainties related to traditional soil analysis and how they can affect spectral prediction models (350-2,500 nm), in order to understand the advantages and limitations of both methodologies and allow proper decision-making for soil management. Soil samples under extensive sugar cane cultivation were collected from 29 municipalities in the state of São Paulo. For soil sampling, 48 soil profiles were opened to a depth of approximately 1.5 m, and 10 kg of soil was collected from each profile at the depths 0-0.2 and 0.8-1.0 m, resulting in 96 primary samples. The chemical analyses considered the following attributes: potential of hydrogen (pH), organic matter (OM), phosphorus (P), exchangeable potassium (K+), exchangeable calcium (Ca2+), exchangeable magnesium (Mg2+), exchangeable aluminum (Al3+), potential acidity (H + Al), total exchangeable bases (SB), cation exchange capacity (CEC), CEC saturation by bases (V%) and saturation by Al3+ (m%). Regarding particle size, the sand, silt and clay fractions were analyzed. Four spectroradiometers (350-2,500 nm) were used to obtain the reflectance spectra. The variation in liming recommendations across laboratories was also evaluated, and laboratories were rated on imprecision and inaccuracy indices. The soil attributes with the highest errors in the traditional analyses, based on the average of all laboratories, were, in descending order, m%, Al3+, Mg2+ and P. These errors significantly influenced the calibration of the sensor-based prediction models. Furthermore, the analytical uncertainties can often influence liming recommendations: for this recommendation, one of the laboratories presented results with errors greater than 1 t ha-1. The prediction models calibrated with data from the laboratory with the fewest errors presented R2 values greater than 0.7 and RPD greater than 1.8 for OM, Al3+, CEC, H + Al, sand, silt and clay. The methodology allowed the level of acceptable uncertainty in laboratory measurements to be quantified and showed how laboratory analytical errors influence sensor predictions. Reflectance spectroscopy proves to be an efficient complement to traditional methods of soil analysis.
21

Lefort, Sébastien. « "How much is 'about'?" modélisation computationnelle de l'interprétation cognitive des expressions numériques approximatives ». Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066421/document.

Abstract:
Approximate Numerical Expressions (ANEs) are imprecise linguistic expressions involving numerical values, such as "about 100". We first focus on ANE interpretation, in both its human and computational aspects. After defining original arithmetical and cognitive dimensions that characterize ANEs, we conducted an empirical study to collect the intervals of values denoted by ANEs, showing that the proposed dimensions are involved in ANE interpretation. In a second step, we propose two interpretation models, based on the same principle of a compromise between the cognitive salience of the interval endpoints and their distance to the ANE reference value, formalized by Pareto frontiers. The first model estimates the denoted interval; the second generates a fuzzy interval representing the associated imprecision. The experimental validation of the models, based on real data, shows that they outperform existing models. We also show the relevance of the fuzzy model by implementing it in the framework of flexible database queries. We then show, by means of an empirical study, that the semantic context has little effect on the collected intervals. Finally, we focus on additions and products of ANEs, for instance to assess the area of a room whose walls are "about 10" and "about 20 meters" long. We conducted an empirical study whose results indicate that the imprecision associated with the operands is not taken into account during the calculations.
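
The compromise principle described here, trading the cognitive salience of candidate endpoints against their distance to the reference value via a Pareto front, can be sketched as follows. The salience function and search window below are crude illustrative stand-ins for the dimensions developed in the thesis, not its actual model:

```python
def salience(n):
    """Crude roundness proxy for cognitive salience (illustrative stand-in)."""
    s = sum(1.0 for base in (10, 100, 1000) if n % base == 0)
    return s + (0.5 if n % 5 == 0 else 0.0)

def pareto_front(candidates):
    """Keep (endpoint, salience, distance) tuples that no other candidate
    dominates, i.e. is at least as salient AND at least as close, and strictly
    better on one of the two criteria."""
    return sorted(c for c in candidates
                  if not any(o[1] >= c[1] and o[2] <= c[2]
                             and (o[1] > c[1] or o[2] < c[2])
                             for o in candidates))

def endpoint_candidates(ref, window=0.5):
    """Pareto-optimal lower and upper endpoint candidates for 'about ref'."""
    lower = [(n, salience(n), ref - n) for n in range(int(ref * (1 - window)), ref)]
    upper = [(n, salience(n), n - ref) for n in range(ref + 1, int(ref * (1 + window)) + 1)]
    return pareto_front(lower), pareto_front(upper)

low, up = endpoint_candidates(100)
print([n for n, _, _ in low])  # [90, 95, 99]: round-but-far vs close-but-plain
print([n for n, _, _ in up])   # [101, 105, 110]
```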
22

Plaß, Julia [Verfasser], and Thomas [Akademischer Betreuer] Augustin. « Statistical modelling of categorical data under ontic and epistemic imprecision : contributions to power set based analyses, cautious likelihood inference and (non-)testability of coarsening mechanisms / Julia Plaß ; Betreuer : Thomas Augustin ». München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2018. http://d-nb.info/116087624X/34.

23

Placido, Rui. « Estimating measurement uncertainty in the medical laboratory ». Thesis, Cranfield University, 2016. http://dspace.lib.cranfield.ac.uk/handle/1826/11258.

Abstract:
Medical laboratory accreditation is covered by ISO 15189:2012 - Medical Laboratories - Requirements for Quality and Competence. In Portugal, accreditation processes are held under the auspices of the Portuguese Accreditation Institute (IPAC), which applies the Portuguese edition (NP EN ISO 15189:2014). Accordingly, medical laboratory accreditation processes now require an estimate of the measurement uncertainty (MU) associated with results. The Guide to the Expression of Uncertainty in Measurement (GUM) describes the calculation of MU but does not contemplate the specific aspects of medical laboratory testing. Several models have been advocated, yet without a final consensus. Given the lack of studies on MU in Portugal, especially on its application in the medical laboratory, the objective of this thesis is to arrive at a model that fulfils IPAC's accreditation regulations with regard to this specific requirement. The study was based on the implementation of two formulae (MU-A and MU-B), using the Quality Management System (QMS) data of an ISO 15189 accredited laboratory. Covering the laboratory's two Cobas® 6000–c501 (Roche®) analysers (C1 and C2), the work focused on three analytes: creatinine, glucose and total cholesterol. The MU-B model formula, combining the standard uncertainties of the method's imprecision, of the calibrator's assigned value and of the pre-analytical variation, was considered the one best suited to the laboratory's objectives and to the study's purposes, representing well the dispersion of values reasonably attributable to the final result of the measurand. Expanded uncertainties were: creatinine - C1 = 9.60%, C2 = 5.80%; glucose - C1 = 8.32%, C2 = 8.34%; cholesterol - C1 = 4.00%, C2 = 3.54%. ...[cont.].
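
The MU-B combination just described lends itself to a one-line implementation: treat the method's imprecision, the calibrator's assigned-value uncertainty and the pre-analytical variation as independent standard uncertainties, combine them in quadrature per the GUM, and expand with a coverage factor. A sketch with purely illustrative numbers (not the thesis's data):

```python
import math

def expanded_uncertainty(u_imprecision, u_calibrator, u_preanalytical, k=2.0):
    """GUM-style combination of independent relative standard uncertainties (%).

    u_c = sqrt(u_imp**2 + u_cal**2 + u_pre**2); the expanded uncertainty is
    U = k * u_c, with coverage factor k = 2 for roughly 95% coverage.
    """
    u_c = math.sqrt(u_imprecision**2 + u_calibrator**2 + u_preanalytical**2)
    return k * u_c

# Illustrative inputs only (e.g. CV% from internal QC, the calibrator
# certificate, and an estimate of pre-analytical variation).
U = expanded_uncertainty(u_imprecision=2.1, u_calibrator=1.5, u_preanalytical=3.0)
print(f"Expanded uncertainty U = {U:.2f}% (k = 2)")
```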
24

Flores, Díaz Felipe Alberto, and Poblete Nicolas Joaquín Ramírez. « De la incertidumbre a la precaución ; el impacto de la imprecisión en el cálculo de los daños ambientales y su tratamiento en el marco normativo de los EEUU ». Tesis, Universidad de Chile, 2015. http://www.repositorio.uchile.cl/handle/2250/130083.

Abstract:
Thesis (Licentiate in Legal and Social Sciences)
This thesis is intended as a first approach to the treatment of one of the foundational principles of environmental law, the precautionary principle, in a legal system traditionally considered reluctant to implement it: that of the United States. It seeks to assess the extent and weight of this supposed refusal to enshrine and apply the principle as a general guideline in environmental disputes, and to understand the consequences of that position at the judicial and legislative levels; all under the particular features of a legal system following the common law tradition, as opposed to the continental law tradition of our own system. At the center of this research, and of the precautionary principle itself, is the uncertainty attached to the potential risks that may arise from human economic activity and could affect the environment and sustainable development; we aim to present a systematic and coherent picture of how this essential problem has been addressed in the US legal system.
25

Ramírez, Rodríguez Laritza Tatiana. « Análisis de la relación de la imprecisión, la impropiedad, la redundancia léxica y los coloquialismos en la coherencia de los textos argumentativos de los estudiantes universitarios de segundo ciclo ». Master's thesis, Universidad Nacional Mayor de San Marcos, 2020. https://hdl.handle.net/20.500.12672/16688.

Abstract:
Analyzes how lexical imprecision, impropriety, lexical redundancy and colloquialisms produce semantic consequences for the coherence of texts written by students at a private university in Lima. The research took a quantitative approach: the number of errors was first counted and then statistically linked to textual coherence using SPSS. The study design is non-experimental and explanatory, since the variables are not manipulated. This thesis gathers the results of research on the relationship between four lexical errors (imprecision, impropriety, lexical redundancy and colloquialism) and textual coherence, measured quantitatively. The results can be summarized as a positive relationship between the presence or absence of imprecision, impropriety, lexical redundancy and colloquialisms and the coherence of the texts in the sample; that is, as the number of lexical errors rose, the number of incoherent texts increased.
26

Baverel, Paul. « Development and Evaluation of Nonparametric Mixed Effects Models ». Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-144583.

Abstract:
A nonparametric population approach is now accessible to a broader network of modelers, given its recent implementation in the popular NONMEM application, previously limited in scope to standard parametric approaches for the analysis of pharmacokinetic and pharmacodynamic data. The aim of this thesis was to assess the relative merits and downsides of nonparametric models in a nonlinear mixed effects framework, in comparison with a set of parametric models developed in NONMEM based on real datasets and applied to simple experimental settings, and to develop new diagnostic tools adapted to nonparametric models. Nonparametric models as implemented in NONMEM VI showed better overall simulation properties and predictive performance than standard parametric models, with significantly less bias and imprecision in the outcomes of numerical predictive checks (NPC) from 25 real data designs. This evaluation was extended by a simulation study comparing the relative predictive performance of nonparametric and parametric models across three different validation procedures assessed by NPC. The usefulness of a nonparametric estimation step in diagnosing distributional assumptions about parameters was then demonstrated through the development and application of two bootstrapping techniques aiming to estimate the imprecision of nonparametric parameter distributions. Finally, a novel covariate modeling approach intended for nonparametric models was developed, with good statistical properties for the identification of predictive covariates. In conclusion, by relaxing the classical normality assumption on the distribution of model parameters, and given the set of diagnostic tools developed, the nonparametric approach in NONMEM constitutes an attractive alternative to the routinely used parametric approach and an improvement for efficient data analysis.
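
The bootstrap idea used here to attach imprecision estimates to nonparametric parameter distributions can be illustrated generically: resample the data with replacement, recompute the quantity of interest each time, and take percentile bands. The sketch below is a plain nonparametric bootstrap on hypothetical clearance values, not the NONMEM-specific procedures developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_band(samples, statistic, n_boot=2000, level=0.95):
    """Percentile-bootstrap confidence band for any statistic of the data."""
    stats = np.array([statistic(rng.choice(samples, size=len(samples), replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.percentile(stats, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Hypothetical individual clearance estimates (support points of a
# nonparametric distribution, equally weighted for simplicity).
cl = rng.lognormal(mean=1.0, sigma=0.3, size=60)
print(bootstrap_band(cl, np.median))  # imprecision of the median clearance
```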
27

El, Hefnawy Menatalla Maher Abdelgelil. « Essays in Empirical Asset Pricing ». Doctoral thesis, Universitat Ramon Llull, 2020. http://hdl.handle.net/10803/669236.

Abstract:
This dissertation aims at empirically uncovering new aspects of the cross-section of equity returns and providing theory-backed and empirical explanations of the main findings. The dissertation documents novel pricing predictors and factors related to the uncertainty and imprecision levels of the information content embedded in different risk measures. The first chapter investigates whether the time-series volatility of book-to-market (BM), called value uncertainty (UNC), is priced in the cross-section of equity returns. A size-adjusted value-weighted factor with a long (short) position in high-UNC (low-UNC) stocks generates an annualized alpha of 6-8%. This value uncertainty premium is driven by the outperformance of high-UNC firms and is not explained by established risk factors or firm characteristics, such as price and earnings momentum, investment, profitability, or BM itself. At the aggregate level, UNC is correlated with macroeconomic fundamentals and predicts future market returns and market volatility. The chapter also provides a rational asset-pricing explanation of the uncovered UNC premium. The second chapter extends the first and examines the predictive power of the uncertainty of profitability (UP) on the cross-section of equity returns. A portfolio strategy that goes long in the high-UP decile portfolio and short in the low-UP decile portfolio generates an annual excess raw (risk-adjusted) return of 8% (10%). High-UP stocks would have higher returns during times of higher market-wide profitability, lower market volatility, and higher expected inflation, justifying the documented premium. Moreover, firms with high uncertainty surrounding their asset growth (UAG) would outperform those with low asset-growth uncertainty by 7% (12%) in terms of excess raw (risk-adjusted) return. These results shed light on the importance of the volatility of risk factors in investment decisions. The third chapter examines the impact that imprecision in management earnings guidance (IMP) has on equity returns. Empirical evidence reveals that high IMP (a wider interval in the forecasted earnings) is associated with lower subsequent stock returns. Two complementary explanations are offered for the low returns. First, in a market that exhibits short-selling constraints and divergence of opinion regarding earnings estimates, high IMP discourages pessimistic investors while optimists believe the high end of the range and take long positions based on these beliefs, leading to overpricing of the stocks and hence to lower subsequent returns. Second, high IMP may reflect genuine uncertainty regarding future earnings, which appeals to growth and lottery investors. Findings are robust at the portfolio and stock level of analysis, to the measurement of imprecision, and to different asset pricing models.
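The factor construction described in this abstract follows a standard long-short sorting recipe. As a rough illustration of that recipe (not the dissertation's actual code), the following sketch computes a rolling-volatility UNC measure from book-to-market, sorts stocks into monthly deciles, and returns the high-minus-low spread; the column names, the equal weighting, and the 60-month window are assumptions, whereas the dissertation uses value weights with a size adjustment.

```python
import pandas as pd

def unc_long_short(panel: pd.DataFrame) -> pd.Series:
    """panel: one row per (stock, month), with 'bm' and 'ret' columns (assumed schema)."""
    panel = panel.sort_values(["stock", "month"]).copy()
    # Value uncertainty: rolling time-series volatility of each stock's book-to-market.
    panel["unc"] = (panel.groupby("stock")["bm"]
                         .transform(lambda s: s.rolling(60, min_periods=24).std()))
    panel = panel.dropna(subset=["unc"])
    # Sort stocks into UNC deciles every month (0 = low UNC, highest = high UNC).
    panel["decile"] = (panel.groupby("month")["unc"]
                            .transform(lambda s: pd.qcut(s, 10, labels=False, duplicates="drop")))
    # Equal-weighted long-short spread: high-UNC decile minus low-UNC decile.
    monthly = panel.groupby(["month", "decile"])["ret"].mean().unstack()
    return monthly.iloc[:, -1] - monthly.iloc[:, 0]
```

The annualized alpha reported above would then come from regressing this spread on the established factor returns, a step omitted here for brevity.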
28

Girres, Jean-François. « Modèle d'estimation de l'imprécision des mesures géométriques de données géographiques ». Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1080/document.

Full text
Abstract:
Many GIS applications rely on length and area measurements computed from the geometry of the objects in a geographic database (route planning or population density maps, for example). However, no information concerning the imprecision of these measurements is currently communicated to the final user. Indeed, most indicators of geometric quality focus on positioning errors rather than on measurement errors, which are nevertheless very frequent. In this context, this thesis develops methods for estimating the imprecision of geometric measurements of length and area, in order to inform users in a decision-support setting. To achieve this objective, we propose a model that estimates the impacts of representation rules (cartographic projection, neglect of terrain relief, polygonal approximation of curves) and production processes (digitizing error, cartographic generalisation) on geometric measurements of length and area, according to the characteristics and spatial context of the evaluated objects. Methods for acquiring knowledge about the evaluated data are also proposed, to help the user parameterize the model. Combining the impacts into a global estimate of measurement imprecision remains a complex problem, and we propose first approaches to bound this cumulative error. The proposed model is implemented in the EstIM prototype (Estimation de l'Imprécision des Mesures).
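One of the impacts the model covers, the polygonal approximation of curves, is easy to illustrate numerically. The following sketch (an illustration of the general phenomenon, not code from the thesis) compares the true length of a quarter-circle arc with the length of polyline approximations of increasing density; the systematic under-estimation is the kind of measurement error the EstIM model aims to quantify.

```python
import math

def polyline_length(points):
    """Length of a polyline given as a list of (x, y) vertices."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

radius, angle = 100.0, math.pi / 2          # quarter circle; true length = radius * angle
true_length = radius * angle
for n in (3, 5, 9, 17):                     # number of vertices in the approximation
    pts = [(radius * math.cos(i * angle / (n - 1)),
            radius * math.sin(i * angle / (n - 1))) for i in range(n)]
    approx = polyline_length(pts)
    # Relative error is always negative: the polyline cuts the arc short.
    print(n, round(approx, 3), round((approx - true_length) / true_length, 6))
```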
29

Tomás, Sánchez José Enrique de. « Tres décadas de Evaluación del Impacto Ambiental en España. Revisión, necesidad y propuestas para un cambio de paradigma ». Doctoral thesis, Universidad de Alicante, 2014. http://hdl.handle.net/10045/48910.

Full text
Abstract:
Since the introduction of Environmental Impact Assessment (EIA) in Spain, more than three decades ago, neither the procedures nor the concept itself seem to have evolved at all, at least not for the better. Several factors have contributed to EIA having become, today, little more than an administrative requirement, "one more thing" to ask of developers seeking to implement their projects. The true importance of a consistent EIA, oriented towards the protection and conservation of the environment, has been relegated to languishing as a principle and to acting as an obstacle, as a requirement, to development. The causes must be sought among several reasons: the apathy of the Administration and its own interest in the environment not having the standing that society demands (good evidence of which is that in none of the State administrations does it enjoy an identity of its own; it is instead subordinated to other, politically superior domains such as urban planning or industry, to which it is subsidiary in all cases), or the fact that its competences are distributed among those other "higher-ranking" bodies; the regulatory confusion, different (sometimes very different) across the Autonomous Communities; the (completely artificial and mistaken) subordination of environmental matters to technical ones; the Administration's lack of human resources; the lack of training, both among Administration staff and among the environmental professionals working on EIA; and the very low average quality of the Environmental Impact Studies (EIS) submitted to the Administration. This last point, the only one we believe we can help to remedy, is the one on which we have focused our work, which is divided into three parts: 1. Field work: a total of 127 EIS were evaluated which, in the absence of greater collaboration from the Administration, had to be obtained mainly from the internet; of these, 77 were considered suitable for evaluation. 2. Alternative evaluation processes: given the habitual lack of a clear and consistent procedure for public participation and for choosing the best viable alternative, we present some mathematically consistent decision-support methods. 3. Environmental impact assessment methodology: at present, the most widely used impact assessment methodology is that of so-called "crisp" (precise) numbers. We argue that this procedure lacks mathematical consistency and propose the use of methods based on fuzzy logic; we design, build and test a fuzzy inference system we call SIDEIA, and propose its use as a means of incorporating the unavoidable subjectivity, imprecision and uncertainty underlying much of the environmental data needed to carry out impact assessment.
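To make the contrast with "crisp" numbers concrete, here is a deliberately tiny fuzzy-inference sketch in the spirit of (but far simpler than) SIDEIA; the variables, rules, membership functions and scales are invented for illustration only.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def impact(intensity, extent):
    """Mamdani-style min/max inference on a 0-10 impact scale (illustrative)."""
    # Rule 1: IF intensity is high AND extent is wide THEN impact is severe.
    r1 = min(tri(intensity, 5, 10, 15), tri(extent, 5, 10, 15))
    # Rule 2: IF intensity is low OR extent is narrow THEN impact is mild.
    r2 = max(tri(intensity, -5, 0, 5), tri(extent, -5, 0, 5))
    # Centroid defuzzification over a coarse grid of output values.
    grid = [i / 10 for i in range(0, 101)]
    weight = [max(min(r1, tri(z, 5, 10, 15)), min(r2, tri(z, -5, 0, 5))) for z in grid]
    total = sum(weight)
    return sum(z * w for z, w in zip(grid, weight)) / total if total else None

print(impact(8.0, 6.0))  # a graded impact rating rather than a single crisp score
```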
30

Graba, Farès. « Méthode non-additive intervalliste de super-résolution d'images, dans un contexte semi-aveugle ». Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS198/document.

Full text
Abstract:
Super-resolution is an image processing technique that reconstructs a high-resolution image from one or several low-resolution images. It appeared in the 1980s as an attempt to artificially increase image resolution and thus to overcome, algorithmically, the physical limits of an imager. Like many reconstruction problems in image processing, super-resolution is known to be an ill-posed problem whose numerical resolution is ill-conditioned. This ill-conditioning makes the quality of the reconstructed high-resolution image very sensitive to the choice of the image acquisition model, particularly to the model of the imager's Point Spread Function (PSF). In our survey of super-resolution methods, we show that none of the methods proposed in the literature properly models the fact that the imager's PSF is, at best, imprecisely known. At best, the deviation between model and reality is treated as a random variable, while it is not: the bias is systematic. We propose to model imprecise knowledge of the imager's PSF by a convex set of PSFs. Using such a model calls the classical inversion methods into question. We propose to adapt one of the most popular super-resolution methods, known as iterative back-projection, to this imprecise representation. The super-resolved image reconstructed by the proposed method is interval-valued, i.e. the value associated with each pixel is a real interval. This reconstruction turns out to be robust to the PSF model and to some other errors. It also turns out that the width of the obtained intervals quantifies the reconstruction error.
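The central idea, replacing a single PSF with a set of candidate PSFs so that the reconstruction becomes interval-valued, can be sketched as follows. This toy version (our illustration, not the thesis implementation) applies one back-projection-style correction per PSF in a small finite family and takes the pixelwise min/max; it ignores the resampling step of true super-resolution, and the kernels and step size are invented.

```python
import numpy as np
from scipy.ndimage import convolve

def interval_correction(estimate, observed, psf_family, step=1.0):
    """One correction step per candidate PSF; pixelwise min/max gives the interval image."""
    lo = np.full_like(estimate, np.inf)
    hi = np.full_like(estimate, -np.inf)
    for psf in psf_family:
        residual = observed - convolve(estimate, psf, mode="nearest")
        corrected = estimate + step * convolve(residual, psf[::-1, ::-1], mode="nearest")
        lo = np.minimum(lo, corrected)
        hi = np.maximum(hi, corrected)
    return lo, hi  # interval width quantifies reconstruction uncertainty

# Toy usage: a family of normalized 3x3 kernels with slightly different spreads.
k = lambda c: np.array([[1, c, 1], [c, c * c, c], [1, c, 1]], float)
family = [k(c) / k(c).sum() for c in (2.0, 2.5, 3.0)]
img = np.random.rand(32, 32)
lo, hi = interval_correction(img.copy(), img, family)
print((hi - lo).mean())  # average interval width
```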
31

Azevedo, Ricardo Rocha de. « Imprecisão na estimação orçamentária dos municípios brasileiros ». Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/96/96133/tde-17032014-110156/.

Full text
Abstract:
The research examined the degree of budget inaccuracy of Brazilian municipalities and suggested factors associated with this inaccuracy. The importance of analyzing budget accuracy is recognized by international bodies such as the World Bank and the OECD, which have developed mechanisms to monitor the quality of public budgets. The public budget is the instrument for estimating and allocating resources to actions prioritized by public administrators to deliver the government platform proposed during the campaign. Thus, the budget signals to citizens the public policies proposed in the campaign, as well as the specific actions that will be implemented in the future. In addition, the budget provides important information about the municipality's level of debt and proportion of investments. Imprecision in estimating revenues and expenses in the budget distorts the planned allocation, endangering the execution of the plan, and also reduces the government's ability to plan its own actions. The lack of incentives to seek accuracy, given weak enforcement by external control bodies and by mechanisms of social control, can lead to errors and to low attention to the budgetary process in municipalities. The previous literature has focused on studying transparency, popular participation, and revenue forecasting techniques, but has paid little attention to the resource allocation process. The results show that (i) legislative control has some association with lower budget inaccuracy in municipalities where the mayor does not hold a majority in the city council; (ii) external control has no relationship with inaccuracy.
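A natural baseline measure of the inaccuracy studied here is the absolute percentage gap between budgeted and executed amounts. The following sketch computes such a score per municipality; the field names, the toy data, and the averaging choice are assumptions, not the thesis's exact measure.

```python
import pandas as pd

def budget_inaccuracy(df: pd.DataFrame) -> pd.Series:
    """df: one row per (municipality, year) with 'budgeted' and 'executed' columns."""
    # Absolute percentage error between executed and budgeted amounts.
    err = (df["executed"] - df["budgeted"]).abs() / df["budgeted"]
    return err.groupby(df["municipality"]).mean()  # average error per municipality

demo = pd.DataFrame({
    "municipality": ["A", "A", "B", "B"],
    "year": [2012, 2013, 2012, 2013],
    "budgeted": [100.0, 110.0, 50.0, 55.0],
    "executed": [90.0, 120.0, 50.5, 70.0],
})
print(budget_inaccuracy(demo))
```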
32

Michelucci, Dominique. « Les représentations par les frontières : quelques constructions ; difficultés rencontrées ». Saint-Etienne, EMSE, 1987. http://tel.archives-ouvertes.fr/docs/00/83/03/69/PDF/1987_Michelucci_Dominique.pdf.

Full text
Abstract:
Image synthesis and CAD use various solid modeling schemes. Boundary representations ("Boundary Representations") are one of them. Their construction runs into several difficulties: numerical imprecision, whose harmful consequences may have been underestimated; possible inconsistencies (how can one be sure that a boundary representation actually describes a solid, in the physical sense of the term?) caused by numerical imprecision and/or the redundancy of boundary representations; and finally the proliferation of special cases. This thesis details these difficulties and some new solutions.
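The numerical-imprecision difficulty is easy to reproduce. In the sketch below (a standard illustration, not code from the thesis), the intersection point of two lines computed in floating point fails the very incidence test that a coherent boundary representation relies on, while the same computation in exact rational arithmetic does not.

```python
from fractions import Fraction

def line_through(p, q):
    """Line a*x + b*y = c through points p and q."""
    a = q[1] - p[1]
    b = p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

def intersect(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    d = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d

pts = [(0.1, 0.2), (0.8, 0.9), (0.15, 0.95), (0.75, 0.05)]
for cast in (float, Fraction):                 # same computation, two arithmetics
    p1, p2, p3, p4 = [tuple(map(cast, p)) for p in pts]
    l1, l2 = line_through(p1, p2), line_through(p3, p4)
    x, y = intersect(l1, l2)
    a, b, c = l1
    # "Does the computed vertex lie on the edge?" residual:
    # float: typically a tiny nonzero value; Fraction: exactly 0.
    print(cast.__name__, a * x + b * y - c)
```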
33

Assaghir, Zainab. « Analyse formelle de concepts et fusion d'informations : application à l'estimation et au contrôle d'incertitude des indicateurs agri-environnementaux ». Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2010. http://tel.archives-ouvertes.fr/tel-00587784.

Full text
Abstract:
Information fusion consists in summarizing several pieces of information coming from different sources into a single piece of information that is exploitable and useful to the user. The fusion problem is delicate, especially when the delivered pieces of information are inconsistent and heterogeneous. Fusion results are often neither exploitable nor usable for decision-making when they are imprecise, which is generally due to the information being inconsistent. Several fusion methods have been proposed to combine imperfect information; they apply the fusion operator to the set of all sources and take the result as it is. In this work, we propose a fusion method based on Formal Concept Analysis, in particular its extension to numerical data: pattern structures. Once a fusion operator is chosen, this method associates each subset of sources with its fusion result, and a concept lattice is built. This lattice provides an interesting classification of the sources and their fusion results, and moreover keeps track of the origin of the information. When the global fusion result is imprecise, the method allows the user to identify the maximal subsets of sources that support a good decision. The method thus provides a structured view of the global fusion applied to the set of all sources, together with partial fusion results labeled by subsets of sources. In this work, we consider numerical information represented in the framework of possibility theory, and we use three kinds of operators to build the concept lattice. An application in agriculture, where the expert's question is to estimate the values of pesticide characteristics coming from several sources in order to compute environmental indicators, is detailed to evaluate the proposed fusion method.
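The subset-wise fusion idea can be sketched with intervals as simple possibility-style information items and intersection as the conjunctive fusion operator. The sketch below (our illustration, not the thesis code) enumerates subsets of sources and reports each consistent subset with its fused interval, which exposes the maximal consistent subsets; the source names and intervals are invented.

```python
from itertools import combinations

# Each source gives an interval for one pesticide characteristic (invented values).
sources = {"lab": (2.0, 5.0), "field": (4.0, 9.0), "manual": (8.0, 12.0)}

def fuse(names):
    """Conjunctive fusion = interval intersection; None means the sources conflict."""
    lows, highs = zip(*(sources[n] for n in names))
    lo, hi = max(lows), min(highs)
    return (lo, hi) if lo <= hi else None

consistent = []
for r in range(len(sources), 0, -1):           # largest subsets first
    for subset in combinations(sources, r):
        result = fuse(subset)
        if result is not None:
            consistent.append((subset, result))

# Here {lab, field} and {field, manual} are consistent, but all three together conflict.
print(consistent)
```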
34

Law, William Sauway. « Evaluating imprecision in engineering design ». Thesis, 1996. https://thesis.library.caltech.edu/3132/1/Law_ws_1996.pdf.

Full text
Abstract:
Imprecision is uncertainty that arises because of vague or incomplete information. Preliminary design information is characteristically imprecise: specifications and requirements are subject to change, and the design description is vague and incomplete. Yet many powerful evaluation tools, including finite element models, expect precisely specified data. Thus it is common for engineers to evaluate promising designs one by one. Alternatively, optimization may be used to search for the single "best" design. These approaches focus on individual, precisely specified points in the design space and provide limited information about the full range of acceptable designs. An alternative approach would be to evaluate sets of designs. The method of imprecision uses the mathematics of fuzzy sets to represent imprecision as preferences among designs:
• Functional requirements model the customer's direct preference on performance variables based on performance considerations: the quantified aspects of design performance represented by performance variables.
• Design preferences model the customer's anticipated preference on design variables based on design considerations: the unquantified aspects of design performance not represented by performance variables.
Design preferences provide a formal structure for representing "soft" issues such as aesthetics and manufacturability and quantifying their consequences. This thesis describes continuing work in bringing the method of imprecision closer to implementation as a decision-making methodology for engineering design. The two principal contributions of this work are a clearer interpretation of the elements that comprise the method and a more efficient computational implementation. The proposed method for modeling design decisions in the presence of imprecision is defined in detail. The decision-maker is modeled as a hierarchy of preference aggregation operations. Axioms for rational design decision-making are used to define aggregation operations that are suitable for design. An electric vehicle design example illustrates the method. In particular, the process of determining preferences and a preference aggregation hierarchy is shown to be both feasible and informative. Efficient computational methods for performing preference calculations are introduced. These methods use experiment design to explore the design space and optimization assisted by linear approximation to map preferences. A user-specified fractional precision allows the number of function evaluations to be traded off against the quality of the answer obtained. The computational methods developed are verified on design problems from aircraft engine development and automobile body design. Procedures for specifying preferences and group decision-making are described. These procedures provide not only a pragmatic interpretation of the method, but also an informal solution to the problem of bargaining: prerequisites for bringing the method to design problems in the real world.
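As a rough illustration of preference aggregation in this setting (the preference functions, weights, and variables below are invented, and the two aggregation operators are only common textbook choices, not necessarily the thesis's axiomatically derived ones), consider:

```python
def design_pref(mass_kg):
    """Designer's preference on a design variable (invented: lighter is better)."""
    return max(0.0, min(1.0, (30.0 - mass_kg) / 10.0))

def perf_pref(range_km):
    """Customer's preference on a performance variable (invented: longer range is better)."""
    return max(0.0, min(1.0, (range_km - 200.0) / 100.0))

def aggregate(prefs, weights, compensating=False):
    """Combine preferences in [0, 1] into one overall preference."""
    if not compensating:
        return min(prefs)          # non-compensating: a weak link drives the rating
    s = sum(weights)
    out = 1.0
    for p, w in zip(prefs, weights):
        out *= p ** (w / s)        # weighted geometric mean: compensating trade-off
    return out

p = [design_pref(26.0), perf_pref(260.0)]
print(aggregate(p, [1.0, 2.0]), aggregate(p, [1.0, 2.0], compensating=True))
```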
35

Wang, Xiaoou. « Set mapping in the method of imprecision ». Thesis, 2003. https://thesis.library.caltech.edu/3884/1/main.pdf.

Full text
Abstract:
The Method of Imprecision, or MoI, is a semi-automated set-based approach that uses the mathematics of fuzzy sets to aid the designer in making decisions with imprecise information in the preliminary design stage. The MoI uses preference to represent imprecision in engineering design. Preferences are specified both in the design variable space (DVS) and in the performance variable space (PVS). To obtain the overall preference needed to evaluate designs, the mapping between the DVS and the PVS must be explored. Many engineering design tools can only produce precise results from precise specifications, usually at high cost. In the preliminary stage, the specifications are imprecise and resources are limited, so it is neither cost-effective nor necessary to use these tools directly to study the mapping between the DVS and the PVS. An interpolation model is introduced to the MoI to construct metamodels of the actual mapping function between the DVS and the PVS. Due to the nature of engineering design, multistage metamodels are needed. Experimental design is used to choose design points for the first metamodel. To find an efficient way to choose design points when a priori information is available, many sampling criteria are discussed and tested on two specific examples. The differences between sampling criteria are small when the number of added design points is small, while more design points substantially improve the accuracy of the metamodel. The metamodels can be used to induce preferences in the DVS or the PVS according to the extension principle. The Level Interval Algorithm (LIA) is a discrete approximate implementation of the extension principle. The resulting preference computed by the LIA is presented as an alpha-cut, the set of designs or performances with at least a certain level of preference. The LIA has some limitations, especially for multidimensional DVS and PVS. A new extension of the LIA is proposed to compute alpha-cuts with more accuracy and fewer limitations, giving designers more control over the trade-off between the cost and the accuracy of the computation. The results of the Method of Imprecision should be the set of alternative designs in the DVS at a certain preference level and the set of achievable performances in the PVS. Information about preferences must be transferred back and forth between the DVS and the PVS; the mapping from the PVS to the DVS is usually unavailable, yet it is needed to induce preferences in the DVS from the PVS. A new method is constructed to compute the alpha-cuts in both spaces from preferences specified in the DVS and the PVS. Finally, a new measure is proposed to find the most cost-effective sampling region of new design points for a metamodel. The full implementation of the Method of Imprecision is listed in detail and applied to an example of the structural design of a passenger vehicle, with comparisons between the new results and previous results.
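The extension-principle computation that the LIA approximates can be sketched directly for one design variable: sample the DVS densely, take the alpha-cut of the preference function, and push it through a performance function to obtain the induced alpha-cut in the PVS. Both functions below are invented for illustration; the LIA itself works with far fewer, carefully chosen evaluations.

```python
import numpy as np

def preference(x):
    """Preference on one design variable: triangular, peak at x = 2 (invented)."""
    return np.clip(1.0 - abs(x - 2.0), 0.0, 1.0)

def performance(x):
    """Stand-in for a metamodel of the real analysis (invented)."""
    return x ** 2 - x

alpha = 0.5
xs = np.linspace(0.0, 4.0, 2001)    # dense sampling of the DVS
cut = xs[preference(xs) >= alpha]   # alpha-cut in the DVS: here [1.5, 2.5]
ys = performance(cut)
print(cut.min(), cut.max())         # DVS alpha-cut bounds
print(ys.min(), ys.max())           # induced alpha-cut bounds in the PVS
```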