Dissertations / Theses on the topic 'Inconsistency management'


Consult the top 21 dissertations / theses for your research on the topic 'Inconsistency management.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Jahnke, Jens H. "Management of uncertainty and inconsistency in database reengineering processes." [S.l. : s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=961979909.

2

Lin, Qiuming. "Viewpoints consistency management using belief merging operators." Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041222.125858/index.html.

3

Dziewulski, Paweł. "Essays on time-inconsistency and revealed preference." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:e412f41a-07ef-4fdc-84cf-9862a53c7fbd.

Abstract:
This thesis concerns three important issues related to the problem of time-inconsistency in decision-making and revealed preference analysis. The first chapter focuses on the welfare properties of equilibria in exchange economies with time-dependent preferences. We reintroduce the notion of time-consistent overall Pareto efficiency proposed by Herings and Rohde (2006) and show that, whenever the agents are sophisticated, any equilibrium allocation is efficient in this sense. Thereby, we present a version of the First Fundamental Welfare Theorem for this class of economies. Moreover, we present a social welfare function whose maximisers coincide with the efficient allocations and prove that every equilibrium can be represented by a solution to the social welfare optimisation problem. In the second chapter we concentrate on the observable implications of various models of time-preference. We consider a framework in which subjects are asked to choose between pairs consisting of a monetary payment and a time-delay at which the payment is delivered. Given a finite set of observations, we are interested in the conditions under which the choices of an individual agent can be rationalised by a discounted utility function. We develop an axiomatic characterisation of time-preference with various forms of discounting, including weakly present-biased, quasi-hyperbolic, and exponential, and determine the testable restrictions for each specification. Moreover, we discuss possible identification issues that may arise in this class of tests. Finally, in the third chapter, we discuss the testable restrictions for production technologies that exhibit complementarities. Suppose that we observe a finite number of choices of input factors made by a single firm, as well as the prices at which they were acquired. Under what conditions imposed on the set of observations is it possible to justify the decisions of the firm by profit-maximisation with production complementarities? In this chapter, we develop an axiomatic characterisation of such behaviour and provide an easy-to-apply test for this hypothesis that can be employed in empirical analysis.
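As background for the discounting families compared in the second chapter, the standard textbook forms are the following (a reference sketch in our own notation; the thesis's axiomatisation is more general, e.g. weak present bias relaxes the fixed-beta assumption):

```latex
% Discounted utility of a payment x delivered at delay t: U(x,t) = D(t) u(x)
U(x,t) = D(t)\,u(x), \qquad
D_{\text{exp}}(t) = \delta^{t}, \qquad
D_{\text{qh}}(t) =
\begin{cases}
1 & t = 0\\
\beta\,\delta^{t} & t \ge 1
\end{cases}
\qquad 0 < \beta, \delta \le 1.
```

Exponential discounting is the special case beta = 1, while beta < 1 yields the present bias that makes choices time-inconsistent.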
4

Corea, Carl. "Handling Inconsistency in Business Rule Bases." Doctoral thesis, Koblenz, 2021. Reviewers (Gutachter): Patrick Delfmann, Matthias Thimm, and Jan Mendling. http://d-nb.info/1225743869/34.

5

Tahrat, Sabiha. "Data inconsistency detection and repair over temporal knowledge bases." Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5209.

Abstract:
We investigate the feasibility of automated reasoning over temporal DL-Lite (TDL-Lite) knowledge bases (KBs). We translate TDL-Lite KBs into a fragment of first-order logic and into LTL, and apply off-the-shelf LTL and FO-based reasoners to check satisfiability. We conduct various experiments to analyse the runtime performance of different reasoners on toy scenarios and on randomly generated TDL-Lite KBs, as well as the size of the LTL translation. To improve reasoning performance when dealing with large ABoxes, our work also proposes an approach for abstracting temporal assertions in KBs. We run several experiments with this approach to assess the effectiveness of the technique by measuring the gain in terms of the size of the translation and the number of ABox assertions and individuals. We also measure the new runtime of some solvers on such abstracted KBs. Lastly, in an effort to make the usage of TDL-Lite KBs a reality, we present a fully-fledged tool with a graphical interface to design them. Our interface is based on conceptual modeling principles, and it is integrated with our translation tool and a temporal reasoner. In this thesis, we also address the problem of handling inconsistent data in Temporal Description Logic (TDL) knowledge bases. Considering the data part of the knowledge base as the source of inconsistency over time, we propose an ABox repair approach. This is the first work handling repair in TDL knowledge bases. Our goal is twofold: (1) detect temporal inconsistencies and (2) propose a temporal repair of the data. For inconsistency detection, we propose a reduction from TDL to DL, which allows us to provide a tight NP-complete upper bound for TDL concept satisfiability and to use highly optimized DL reasoners that can produce a precise explanation (the set of inconsistent data assertions). Thereafter, from the obtained explanation, we propose a method for automatically computing the best repair in the temporal setting, based on the allowed rigid predicates and the time order of the assertions.
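The two repair criteria named at the end of the abstract, rigid predicates and time order, can be made concrete in a toy sketch. Everything below (the disjointness axiom, the rigid concept, the assertion format and the greedy policy) is our invented illustration, not the thesis's system:

```python
# Toy sketch of temporal ABox repair: a rigid concept holds at every time
# point, and when two assertions clash, the temporally later one is dropped.

DISJOINT = {("Employee", "Retired"), ("Retired", "Employee")}  # assumed TBox axiom
RIGID = {"Retired"}                                            # assumed rigid concept

def clash(a, b):
    """Assertions (individual, concept, time) violate disjointness at a common
    time point; a rigid concept holds at all times, so it clashes regardless."""
    (i1, c1, t1), (i2, c2, t2) = a, b
    if i1 != i2 or (c1, c2) not in DISJOINT:
        return False
    return t1 == t2 or c1 in RIGID or c2 in RIGID

def repair(abox):
    """Greedily drop the later assertion of every clashing pair."""
    abox = set(abox)
    dirty = True
    while dirty:
        dirty = False
        for a in list(abox):
            for b in list(abox):
                if a != b and clash(a, b):
                    abox.discard(max((a, b), key=lambda x: x[2]))
                    dirty = True
    return abox

print(repair({("ann", "Employee", 3), ("ann", "Retired", 1)}))
# -> {('ann', 'Retired', 1)}: rigid 'Retired' contradicts 'Employee' at t=3
```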
6

Herzig, Sebastian J. I. "A Bayesian learning approach to inconsistency identification in model-based systems engineering." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53576.

Abstract:
Designing and developing complex engineering systems is a collaborative effort. In Model-Based Systems Engineering (MBSE), this collaboration is supported through the use of formal, computer-interpretable models, allowing stakeholders to address concerns using well-defined modeling languages. However, because concerns cannot be separated completely, implicit relationships and dependencies among the various models describing a system are unavoidable. Given that models are typically co-evolved and only weakly integrated, inconsistencies in the agglomeration of the information and knowledge encoded in the various models are frequently observed. The challenge is to identify such inconsistencies in an automated fashion. In this research, a probabilistic (Bayesian) approach to abductive reasoning about the existence of specific types of inconsistencies and, in the process, semantic overlaps (relationships and dependencies) in sets of heterogeneous models is presented. A prior belief about the manifestation of a particular type of inconsistency is updated with evidence, which is collected by extracting specific features from the models by means of pattern matching. Inference results are then utilized to improve future predictions by means of automated learning. The effectiveness and efficiency of the approach are evaluated through a theoretical complexity analysis of the underlying algorithms, and through application to a case study. Insights gained from the experiments conducted, as well as the results from a comparison to the state-of-the-art, have demonstrated that the proposed method is a significant improvement over the status quo of inconsistency identification in MBSE.
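The prior-plus-evidence update at the core of this approach is ordinary Bayesian conditioning. A minimal sketch follows, with illustrative probabilities rather than values from the dissertation:

```python
# A prior belief that a given inconsistency type is present is revised with
# pattern-matching evidence. All numbers here are made up for illustration.

def posterior(prior, p_evidence_if_present, p_evidence_if_absent):
    """P(inconsistency | evidence) by Bayes' rule for a binary hypothesis."""
    num = p_evidence_if_present * prior
    return num / (num + p_evidence_if_absent * (1.0 - prior))

belief = 0.10                       # prior over this pair of models
for match in (True, True, False):   # outcomes of three pattern-matching probes
    if match:                       # P(match | inconsistent)=0.8, P(match | consistent)=0.3
        belief = posterior(belief, 0.8, 0.3)
    else:                           # complementary probabilities for a miss
        belief = posterior(belief, 0.2, 0.7)
print(f"posterior belief in inconsistency: {belief:.2f}")   # ~0.18
```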
7

Mekuria, Dagmawi Neway. "Smart Home Reasoning Systems: From a Systematic Analysis Towards a Hybrid Implementation for the Management of Uncertainty and Inconsistency." Doctoral thesis, Università Politecnica delle Marche, 2020. http://hdl.handle.net/11566/274608.

Abstract:
A smart home is a residence equipped with technologies that facilitate monitoring of residents, promote independence and increase the quality of life. In general, smart homes control the operations of the home environment and automatically adapt it to its inhabitants' needs. The smart home reasoning system (SHRS) is in charge of determining the automatic control and adaptation operations of the home system. Recently, there has been extensive research concerning different aspects of SHRSs. However, there is a clear lack of systematic investigation targeted at these systems. To close the gap, in the first part of this thesis we explore the SHRS domain. We applied the systematic literature review (SLR) method, conducting automatic and manual searches on six electronic databases and an in-depth analysis of 135 articles from the literature. From the SLR, this thesis identifies that about 43% of the smart homes presented in the literature are designed to provide general home automation services. It also presents twelve major requirements and features of the SHRS. In addition, the SLR finds that 55.5% of the research contributions in the SHRS domain are theoretical, and that 51.5% of them are based on symbolic artificial intelligence techniques. Further, it characterizes the usage and application trends of different reasoning techniques in the smart home domain, and evaluates the major assumptions, strengths, and limitations of the systems proposed in the literature. Additionally, it discusses the challenges of reasoning in smart home environments. Finally, it underlines the importance of utilizing hybrid reasoning approaches and the need to handle uncertainty and inconsistency issues in SHRSs, as well as overlapping, simultaneous and conflicting activities and goals of multiple inhabitants in the smart home environment. The SLR identifies reasoning under uncertainty as one of the major challenges for SHRSs. Uncertainty is inevitable in smart home environments, as sensors may read inaccurate data or variables may remain unobserved for privacy reasons. Furthermore, the dynamic nature of the home environment and vague human communications may result in ambiguous, incomplete and inconsistent contextual information, which ultimately leads the smart home system into uncertainty. With this in mind, the second part of this thesis tackles some of these challenges, in particular uncertainty due to vague human communication and missing information in ambient intelligence environments. For this, we propose a probabilistic multi-agent system architecture for reasoning under uncertainty in smart home environments. The proposed architecture is based on multi-agent system (MAS) technologies and probabilistic logic programming techniques. We show how the probabilistic reasoning technique enables the agents to reason under uncertainty. Furthermore, we discuss how intelligent agents enhance their decision-making process by exchanging information about missing data or unobservable variables using agent interaction protocols. Besides, when an agent lacks the computational resources necessary to accomplish its reasoning tasks, we illustrate how it can take advantage of the interaction protocols and delegate the tasks to other agents in the system.

In general, we demonstrate that the combination of MAS technologies and probabilistic logic programming can help in building a reasoning system that performs well under vague inhabitant commands and missing information in a partially observable environment. In the final part of the thesis, we tackle inconsistency issues in SHRSs by identifying five major sources of inconsistency in rule-based SHRSs. Specifically, we define, formalize and demonstrate how conflicting, duplicate, overlapping, self-looping and circular rules in SHRSs can be detected using satisfiability modulo theories. The proposed method was validated empirically using rules collected from a real-world SHRS as a model. The experimental results provide compelling evidence for the reliability and effectiveness of the proposed solution. The method presented in this part of the thesis can have multiple applications. First, it can be used to build a static (off-line) verification tool for rule-based reasoning systems. Second, it can be integrated as a rule-validation component of a reasoning system. Besides, with some adaptation, the method can be used directly to verify the consistency properties of reasoning systems in other domains.
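The SMT-based rule check lends itself to a compact illustration. In the sketch below, the rule contents and the choice of the Z3 solver are our assumptions, not code from the thesis; two rules are flagged as conflicting when some state triggers both while their actions cannot hold together:

```python
from z3 import And, Bool, Not, Real, Solver, sat

temp = Real("temp")
heater_on = Bool("heater_on")

trigger1, action1 = temp < 20, heater_on        # rule 1: cold -> heat on
trigger2, action2 = temp < 22, Not(heater_on)   # rule 2: mild -> heat off

both_fire = Solver()
both_fire.add(And(trigger1, trigger2))          # is there a state firing both rules?

compatible = Solver()
compatible.add(And(action1, action2))           # can both effects coexist?

if both_fire.check() == sat and compatible.check() != sat:
    print("conflict: state", both_fire.model(), "fires contradictory actions")
```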
8

Dam, Khanh Hoa. "Supporting Software Evolution in Agent Systems." RMIT University, Computer Science and Information Technology, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090319.143847.

Abstract:
Software maintenance and evolution is arguably a lengthy and expensive phase in the life cycle of a software system. A critical issue at this phase is change propagation: given a set of primary changes that have been made to software, what additional secondary changes are needed to maintain consistency between software artefacts? Although many approaches have been proposed, automated change propagation is still a significant technical challenge in software maintenance and evolution. Our objective is to provide tool support for assisting designers in propagating changes during the process of maintaining and evolving models. We propose a novel, agent-oriented, approach that works by repairing violations of desired consistency rules in a design model. Such consistency constraints are specified using the Object Constraint Language (OCL) and the Unified Modelling Language (UML) metamodel, which form the key inputs to our change propagation framework. The underlying change propagation mechanism of our framework is based on the well-known Belief-Desire-Intention (BDI) agent architecture. Our approach represents change options for repairing inconsistencies using event-triggered plans, as is done in BDI agent platforms. This naturally reflects the cascading nature of change propagation, where each change (primary or secondary) can require further changes to be made. We also propose a new method for generating repair plans from OCL consistency constraints. Furthermore, a given inconsistency will typically have a number of repair plans that could be used to restore consistency, and we propose a mechanism for semi-automatically selecting between alternative repair plans. This mechanism, which is based on a notion of cost, takes into account cascades (where fixing the violation of a constraint breaks another constraint), and synergies between constraints (where fixing the violation of a constraint also fixes another violated constraint). Finally, we report on an evaluation of the approach, covering both effectiveness and efficiency.
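The cost model for choosing among repair plans (penalising cascades, rewarding synergies) can be illustrated outside the BDI and OCL setting. Everything in this sketch, from the toy design model to the cost constants, is made up:

```python
# A plan's effective cost adds a penalty for constraints it breaks (cascades)
# and a discount for constraints it also happens to fix (synergies).

def effective_cost(plan, constraints, model):
    repaired = plan["apply"](dict(model))
    cost = plan["base_cost"]
    for c in constraints:
        before, after = c(model), c(repaired)
        if before and not after:
            cost += 5     # cascade: plan broke a previously satisfied rule
        if not before and after:
            cost -= 2     # synergy: plan also fixed another violated rule
    return cost

# Toy design model: a class whose name style and attributes are constrained.
model = {"class_name": "order", "has_id_attr": False}
constraints = [
    lambda m: m["class_name"][0].isupper(),   # class names are capitalised
    lambda m: m["has_id_attr"],               # every class needs an id attribute
]
plans = [
    {"name": "rename class", "base_cost": 1,
     "apply": lambda m: {**m, "class_name": m["class_name"].capitalize()}},
    {"name": "add id and rename", "base_cost": 2,
     "apply": lambda m: {**m, "class_name": m["class_name"].capitalize(),
                         "has_id_attr": True}},
]
best = min(plans, key=lambda p: effective_cost(p, constraints, model))
print("chosen repair plan:", best["name"])   # -> add id and rename
```

Here the costlier plan wins because its synergy (fixing a second violated constraint) more than offsets its higher base cost, which is exactly the trade-off the thesis's selection mechanism weighs.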
9

Williams, Patrick Charles. "Political Leadership and Management of Civic Services in a Downturn Economy." ScholarWorks, 2015. https://scholarworks.waldenu.edu/dissertations/1392.

Abstract:
Municipal leaders in the United States face difficult decisions when prioritizing nonmandated civic projects for funding, especially when operating budgets are restricted. This phenomenological study investigated municipal leaders' decision-making processes in a state in the southern United States, using a conceptual framework based on rational choice theory, bounded rationality, and group decision-making theory. It specifically explored personal and organizational decision-making processes related to the prioritization and funding of nonmandated civic projects via in-depth interviews with a convenience sample of 15 municipal leaders. Thematic analysis identified expert opinions, the time and cost to complete a project, the perceived value relative to expense, and the availability of additional funding sources as themes important to understanding participants' decision-making processes. Organizational factors that were important in these decisions included the need for clearly defined responsibilities and consistency in funding decisions. No clearly defined organizational processes were in place in any of the participants' municipalities, and the participants noted that areas such as infrastructure improvements, traffic congestion, community involvement, and formal processes in their municipalities were in need of improvement. Positive social change can flow from greater governmental transparency through municipal decision makers' adoption of systematic decision-making systems and processes. Positive social change can also result from greater inclusiveness through increased public outreach efforts. Results add to the research base by contributing to a better theoretical understanding of organizational decision-making processes in the municipal context.
10

Kang, Heechan. "Essays on methodologies in contingent valuation and the sustainable management of common pool resources." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1141240444.

11

Bartho, Andreas. "Creating and Maintaining Consistent Documents with Elucidative Development." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-208060.

Abstract:
Software systems usually consist of multiple artefacts, such as requirements, class diagrams, or source code. Documents, such as specifications and documentation, can also be viewed as artefacts. In practice, however, writing and updating documents is often neglected because it is expensive and brings no immediate benefit. Consequently, documents are often outdated and communicate wrong information about the software. The price is paid later when a software system must be maintained and much implicit knowledge that existed at the time of the original development has been lost. A simple way to keep documents up to date is generation. However, not all documents can be fully generated. Usually, at least some content must be written by a human author. This handwritten content is lost if the documents must be regenerated. In this thesis, Elucidative Development is introduced. It is an approach to create documents by partial generation. Partial generation means that some parts of the document are generated whereas others are handwritten. Elucidative Development retains manually written content when the document is regenerated. An integral part of Elucidative Development is a guidance system, which informs the author about changes in the generated content and helps him update the handwritten content.
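Partial generation with protected handwritten regions can be illustrated briefly. In this sketch the marker syntax and the generator are invented stand-ins for the thesis's tooling:

```python
# Generated text is refreshed while blocks between handwritten markers survive.
import re

MARKER = re.compile(r"<<hand:(\w+)>>(.*?)<<end>>", re.DOTALL)

def regenerate(old_document, generate):
    """Re-run the generator, then restore previously handwritten blocks."""
    handwritten = dict(MARKER.findall(old_document))
    return MARKER.sub(
        lambda m: "<<hand:%s>>%s<<end>>"
                  % (m.group(1), handwritten.get(m.group(1), m.group(2))),
        generate())

generate = lambda: "API overview: 3 public classes.\n<<hand:intro>><<end>>\n"
old = "API overview: 2 public classes.\n<<hand:intro>>These classes form the core.<<end>>\n"
print(regenerate(old, generate))
# the generated line is updated; the handwritten intro block is preserved
```

A real guidance system would additionally diff the generated parts and point the author at handwritten blocks that may now be stale.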
12

Webb, Michael John. "Estimating Uncertainty Attributable to Inconsistent Pairwise Comparisons in the Analytic Hierarchy Process (AHP)." Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10751947.

Abstract:

This praxis explores a new approach to the problem of estimating the uncertainty attributable to inconsistent pairwise comparison judgments in the Analytic Hierarchy Process (AHP), a prominent decision-making methodology used in numerous fields, including systems engineering and engineering management. Based on insights from measurement theory and established error propagation equations, the work develops techniques to estimate the uncertainty of aggregated priorities for decision alternatives based on measures of inconsistency for component pairwise comparison matrices. This research develops two formulations for estimating the error: the first, more computationally intensive and accurate, uses detailed calculations of parameter errors to estimate the aggregated uncertainty, while the second, significantly simpler, uses an estimate of mean relative error (MRE) for each pairwise comparison matrix to estimate the aggregated error. This paper describes the derivation of both formulations for the linear weighted sum method of priority aggregation in AHP and uses Monte Carlo simulation to test their estimation accuracies for diverse problem structures and parameter values. The work focuses on the two most commonly used methods of deriving priority weights in AHP: the eigenvector method (EVM) and the geometric mean method (GMM). However, the approach of estimating the propagation of measurement errors can be readily applied to other hierarchical decision support methodologies that use pairwise comparison matrices. The developed techniques provide analysts the ability to easily assess decision model uncertainties attributable to comparative judgment inconsistencies without recourse to more complex optimization routines or simulation experiments described previously in the professional literature.
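The two weight-derivation methods the praxis studies, plus Saaty's consistency ratio as the standard inconsistency measure, fit in a few lines. The 3x3 judgment matrix below is invented for illustration; this is not the praxis's error-propagation formulation:

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
n = A.shape[0]

# Eigenvector method (EVM): principal right eigenvector, normalised to sum to 1.
vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
w_evm = np.abs(vecs[:, k].real)
w_evm /= w_evm.sum()

# Geometric mean method (GMM): row-wise geometric means, normalised.
w_gmm = A.prod(axis=1) ** (1.0 / n)
w_gmm /= w_gmm.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1), RI from Saaty's table.
RI = {3: 0.58, 4: 0.90, 5: 1.12}
CR = ((vals.real.max() - n) / (n - 1)) / RI[n]
print(w_evm.round(3), w_gmm.round(3), f"CR={CR:.3f}")  # CR < 0.1 counts as acceptable
```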

13

Rantsoudis, Christos. "Bases de connaissance et actions de mise à jour préférées : à la recherche de consistance au travers des programmes de la logique dynamique." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30286.

Abstract:
In the database literature it has been proposed to resort to active integrity constraints in order to restore database integrity. Such active integrity constraints consist of a classical constraint together with a set of preferred update actions that can be triggered when the constraint is violated. In the first part of this thesis, we review the main repairing routes that have been proposed in the literature and capture them by means of Dynamic Logic programs. The main tool we employ for our investigations is the recently introduced logic DL-PA, a variant of PDL. We then go on to explore a new, dynamic kind of database repairing, whose computational complexity and general properties are compared to the previously established approaches. In the second part of the thesis we leave the propositional setting and adapt the aforementioned ideas to higher-level languages. More specifically, we venture into Description Logics and investigate extensions of TBox axioms by update actions that denote the preferred ways an ABox should be repaired in case of inconsistency with the axioms of the TBox. The extension of the TBox axioms with these update actions constitutes a new, active TBox. We tackle the problem of repairing an ABox with respect to such an active TBox from both a syntactic and a semantic perspective. Given an initial ABox, the syntactic approach allows us to construct a set of new ABoxes, out of which we then identify the most suitable repairs. For the semantic approach, on the other hand, we once again resort to a dynamic logic framework and view update actions, active inclusion axioms and repairs as programs. Given an active TBox aT, the framework allows us to check (1) whether a set of update actions is able to repair an ABox according to the active axioms of aT by interpreting the update actions locally, and (2) whether an ABox A' is the repair of a given ABox A under the active axioms of aT within a bounded number of computations, by interpreting the update actions globally. After discussing the strong points of each direction, we conclude by combining the syntactic and semantic investigations into a cohesive approach.
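Active integrity constraints have a compact operational reading. The following propositional sketch (an invented toy database and rule set, not the DL-PA machinery of the thesis) pairs a classical constraint with preferred update actions that fire until no violation remains:

```python
# Each entry: (violated?, preferred update actions: '+' adds an atom, '-' removes one).
active_constraints = [
    # every employee must have an id; preferred fix: add one
    (lambda s: "employee(ann)" in s and "has_id(ann)" not in s,
     [("+", "has_id(ann)")]),
    # nobody is both employee and retired; preferred fix: drop 'retired'
    (lambda s: "employee(ann)" in s and "retired(ann)" in s,
     [("-", "retired(ann)")]),
]

def repair(state):
    """Fire preferred update actions until no constraint is violated
    (a naive fixpoint loop; termination is assumed for this toy rule set)."""
    state = set(state)
    dirty = True
    while dirty:
        dirty = False
        for violated, actions in active_constraints:
            if violated(state):
                for op, atom in actions:
                    state.add(atom) if op == "+" else state.discard(atom)
                dirty = True
    return state

print(repair({"employee(ann)", "retired(ann)"}))
# -> {'employee(ann)', 'has_id(ann)'} (set order may vary)
```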
14

Pham, Phuong Thao. "Architecture à base de situations pour le traitement des quiproquos dans l'exécution adaptative d'applications interactives." Thesis, La Rochelle, 2013. http://www.theses.fr/2013LAROS415/document.

Abstract:
Our work focuses on defining an architectural model for interactivity-based computer applications. The research is placed in the context of mediator systems, in which interactions are treated by the system itself, and of scenarized applications, whose execution is considered as a scenario; the aim is to manage the execution of the interactive application as well as possible. Observation and adaptation are key points of our approach, in which the designer develops his interactive application according to presuppositions about users (behaviour, skills, and so on). To maintain consistency of the execution with the user's behaviour in the current activity, the adaptation mechanism has to take the perceived and interpreted logic of the user into account. This allows the system to adjust its execution logic to the user's state, behaviour, reactions and capacities. Hence, the starting point of adaptive execution is to define a set of properties characterising the user's state and his environment, the observation of which subsequently permits decisions about how the scenario should continue. However, this decision can be influenced or hampered by the distance between the observed state and the real state of the user, as well as by the distance between the observed state and the state expected by the system. The principal obstacles to adaptation and interaction are ambiguity, inconsistency, and misunderstanding. They can occur whenever the participating actors interact, share global data, and manage the local knowledge contained in their local visions at the same time. A misunderstanding arises when actors interact using inconsistent data in their local visions, which can derail the interaction. Ambiguity, which may cause wrong perceptions, is one of the principal origins of misunderstanding. These obstacles lead to serious consequences for the system and the application, such as scenario deviation, misunderstanding propagation, interaction interruption, and loss of user motivation. They decrease both the quality of adaptation and the pertinence of interaction. Hence, the principal question of this thesis is: how can we handle misunderstandings in the interactions between the actors during system execution, in order to improve adaptability in interactive applications? The principle of our solution is to propose a model for designing and organizing interactions, together with a model for consistency-handling mechanisms, which application designers can employ as a support for installing their own detection or correction algorithms. These models have to be generic and reusable so that they can be applied in different types of application. The consistency management has to be transparent to users and preserve important properties of interactive systems. To attain this objective, our work follows three major points: propose a situation-based methodological model for interactive application design, confining a sequence of interactions into a situation with constraints on context and resource utilisation; on top of this structuration into situations, propose a robust system architecture with additional specific components that ensure the detection and management of misunderstandings in interaction; and integrate adaptive treatment mechanisms into the system's dynamic execution through the proposed situation-based architectural model, inspired by and adapted from fault-tolerance techniques in the dependability domain.
15

Cass, Aaron G. "Software design guidance by process-scoped inconsistency management." 2005. https://scholarworks.umass.edu/dissertations/AAI3163653.

Abstract:
Software design is the complex activity of producing a model of a system that gives assurance both that the system can be built and that the built system will satisfy the requirements placed on the system. The model must therefore, at its completion, be both internally consistent and consistent with the requirements model. In this work, we investigate technologies for helping the (novice) designer to produce a high-quality design more expeditiously by helping with the management of inconsistency. We propose and evaluate an approach for providing inconsistency feedback to designers. This work combines process programming and inconsistency management. It employs a process program as a mechanism for scoping the application of, and responses to violations of, consistency rules. The approach promises to give novices precise and timely context-specific feedback. To evaluate the approach, we have undertaken a factored experiment based on the hypothesis that our approach will help novice designers produce designs quickly and with high quality. The experimental results support the hypothesis that process guidance has positive effects on design speed and design quality.
16

Jahnke, Jens H. "Management of uncertainty and inconsistency in database reengineering processes." 1999. http://d-nb.info/961979909/34.

17

Fu, Limin. "Uncovering the two faces: drivers, contexts, and outcomes of corporate social inconsistency." Thesis, 2017. http://hdl.handle.net/2440/106497.

Abstract:
This thesis examines firms’ internal inconsistencies with regard to corporate social responsibility (CSR), or more specifically the within-firm variability in corporate environmental, social, and governance (ESG) practices. The data for this thesis were collected from multiple databases. The empirical results were drawn from panel data on 863 firms for the period 2008 to 2012. This thesis follows a PhD by publication format and comprises three interrelated papers that are contained in Chapters 2, 3, and 4, respectively. The first paper (Chapter 2) is a theoretical exploration of why firms are consistent or inconsistent in their social practices. This study conceptualizes within-firm corporate social inconsistency (CSI) essentially as tradeoffs among stakeholders. Drawing predominantly on instrumental stakeholder theory and resource dependence theory, this paper proposes a conceptual framework to explain why firms are consistent or inconsistent in their ESG practices. The central argument of this paper is that the balance of stakeholder pressures and organizational resource endowments jointly explains CSI, as well as other closely related strategic postures, such as legal compliance, consistent CSR, and consistent corporate social irresponsibility (CSiR). The second paper (Chapter 3) is an empirical investigation of research and development (R&D) as a specific type of resource that might affect firms’ consistency or inconsistency in ESG practices. In addition, this study examines the contextual contingency impact of market openness on the association between R&D and CSI. Drawing on evolutionary economics, this study proposes that R&D is positively related to CSI because the complementarity between R&D and CSI can create important synergies between a firm’s market and nonmarket strategies. This paper also hypothesizes that high market openness positively moderates the relationship between R&D and CSI because high selection pressure from open markets reinforces the strategic-instrumental necessity of bundling R&D and CSI. The third paper (Chapter 4) examines the effects of CSI on corporate risk. Drawing on instrumental stakeholder theory and the resource-based theory (RBT) of the firm, this study hypothesizes a U-shaped relationship based on the latent benefits of CSI and its exponential costs. The results suggest that CSI is inversely related to corporate risk at low and moderate levels. Beyond that point, excessive CSI enhances risk. However, the risk-enhancing characteristics of CSI can largely be avoided by pairing CSI with innovation. The findings of the three papers lead to the overarching conclusion that CSI is essentially a resource management strategy in firms’ strategic tradeoffs. It can enhance the effectiveness and efficiency of resources as a response to external environments. R&D and innovation merit special attention in this resource management process because of their synergy with ESG practices. CSI in moderation can be a beneficial nonmarket strategy that reduces corporate risk. However, excessive CSI is also shown to be disadvantageous. The findings of the thesis make an important theoretical contribution to the literatures on CSR and strategic management. Practical implications can also be drawn from the thesis for managers, investors, and policy makers.
Thesis (Ph.D.) (Research by Publication) -- University of Adelaide, Business School, 2017
18

Liu, Tao-Chung (劉道忠). "The Inconsistency between Investors' Risk Attitude and Their Investment Portfolios - An Empirical Study on the Case Bank's Customers from Wealth Management." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/11984591479067298897.

Abstract:
Master's thesis, National Chung Hsing University, Executive MBA (in-service) Program, ROC academic year 103 (2014-15).

The interest-rate spread has been narrowing, which is the biggest difference between the past and present financial environments. Since the United States implemented quantitative easing (QE), other countries, including Japan, China, Australia, New Zealand, the United Kingdom, and the European Union, have cut their interest rates and launched stimulus programs to prevent recession and deflation. Against this background, neither banks nor the general public can simply rely on interest income to accumulate wealth. As a result, wealth management has become a new trend. Previous studies of investment behavior emphasized the analysis of customers' risk attributes and product suitability. According to the exception-handling mechanism that the Financial Supervisory Commission (FSC) established in the "Wealth Management Procedure", a customer should sign a statement if he insists on investing in a financial product or portfolio whose risk ranking is higher than his own; in practice, banks have the right to refuse such a subscription. A 2007 study found that the risk attributes of about a quarter of investors did not match those of the products they bought. After the bankruptcy of Lehman Brothers in 2008, with more and more complaint cases about structured notes emerging, the authority revised the "Wealth Management Procedure": to ensure that the confirmation of financial-product suitability is carried out, banks must refuse a subscription when the product's risk ranking is higher than the customer's. Supposedly, further research on this issue is no longer necessary, since the authority has revised the regulation; based on actual management experience, however, the researcher finds that such a study is still needed. This research therefore examines the reasons why customers' risk attributes remain inconsistent with the risk rankings of the financial products they hold. The study uses logistic regression to analyze the questionnaire results, together with a case study of a specific bank's relationship managers and their customers. The results show that relationship managers' characteristics, such as education background and whether they graduated from finance-related departments, as well as customers' characteristics, including education background, monthly income, family yearly income, and total assets, affect whether customers' risk attributes match the risk rankings of the products they buy. The study aims to provide bank management with early alerts about high-risk relationship managers and wealth-management customers, to reduce mis-selling, thereby enhancing customer satisfaction and creating a win-win-win situation for customers, relationship managers, and banks, and to offer advice to the authority, banks, relationship managers, customers, and future researchers.
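For readers who want to see the analysis pattern, a logistic regression of a mismatch indicator on manager and customer attributes looks roughly like this. The feature names and data are invented placeholders, not the thesis's questionnaire items:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: manager_finance_degree, manager_edu_years, customer_edu_years,
#          customer_monthly_income, customer_total_assets (arbitrary units)
X = np.array([[1, 16, 12,  5,  1],
              [0, 14, 16, 12,  8],
              [1, 18, 16, 20, 15],
              [0, 12, 12,  4,  2],
              [0, 16, 14,  9,  5],
              [1, 16, 18, 30, 20]])
y = np.array([0, 1, 0, 1, 1, 0])   # 1 = product risk exceeded customer's profile

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_.round(2))
print("P(mismatch), first profile:", model.predict_proba(X[:1])[0, 1].round(2))
```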
19

"A Pairwise Comparison Matrix Framework for Large-Scale Decision Making." Doctoral diss., 2013. http://hdl.handle.net/2286/R.I.17720.

Abstract:
A Pairwise Comparison Matrix (PCM) is used to compute relative priorities of criteria or alternatives and is an integral component of widely applied decision-making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, a PCM suffers from several issues limiting its application to large-scale decision problems: (1) the curse of dimensionality, that is, a large number of pairwise comparisons need to be elicited from a decision maker (DM); (2) inconsistent and (3) imprecise preferences may be obtained due to the limited cognitive power of DMs. This dissertation proposes a PCM Framework for Large-Scale Decisions to address these limitations in three phases, as follows. The first phase proposes a binary integer program (BIP) to intelligently decompose a PCM into several mutually exclusive subsets using interdependence scores. As a result, the number of pairwise comparisons is reduced and the consistency of the PCM is improved. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets. This is done to derive the global weights of the elements from the original PCM. The proposed BIP is applied to both the AHP and ANP methodologies. However, it is noted that the optimal number of subsets is provided subjectively by the DM and hence is subject to biases and judgement errors. The second phase proposes a trade-off PCM decomposition methodology to decompose a PCM into a number of optimally identified subsets. A BIP is proposed to balance (1) the time savings from reducing pairwise comparisons and the level of PCM inconsistency against (2) the accuracy of the weights. The proposed methodology is applied to the AHP to demonstrate its advantages and is compared to established methodologies. In the third phase, a beta distribution is proposed to generalize a wide variety of imprecise pairwise comparison distributions via a method-of-moments methodology. A Non-Linear Programming model is then developed that calculates PCM element weights which maximize the preferences of the DM and minimize inconsistency simultaneously. Comparison experiments are conducted using datasets collected from the literature to validate the proposed methodology.
Ph.D. dissertation, Industrial Engineering, 2013.
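The method-of-moments step for the beta distribution is a closed-form calculation. A hedged sketch follows; the mean/variance values, and the rescaling of Saaty-scale judgments into (0, 1), are our assumptions:

```python
def beta_from_moments(mean, var):
    """Solve E[X] = a/(a+b) and Var[X] = ab/((a+b)^2 (a+b+1)) for a and b."""
    common = mean * (1.0 - mean) / var - 1.0   # equals a + b
    assert common > 0, "variance too large for a beta distribution"
    return mean * common, (1.0 - mean) * common

# e.g. a DM's repeated judgments about one comparison, rescaled into (0, 1)
a, b = beta_from_moments(mean=0.6, var=0.04)
print(f"alpha={a:.2f}, beta={b:.2f}")          # alpha=3.00, beta=2.00
```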
20

Bartho, Andreas. "Creating and Maintaining Consistent Documents with Elucidative Development." Doctoral thesis, 2014. https://tud.qucosa.de/id/qucosa%3A29696.

Abstract:
Software systems usually consist of multiple artefacts, such as requirements, class diagrams, or source code. Documents, such as specifications and documentation, can also be viewed as artefacts. In practice, however, writing and updating documents is often neglected because it is expensive and brings no immediate benefit. Consequently, documents are often outdated and communicate wrong information about the software. The price is paid later when a software system must be maintained and much implicit knowledge that existed at the time of the original development has been lost. A simple way to keep documents up to date is generation. However, not all documents can be fully generated. Usually, at least some content must be written by a human author. This handwritten content is lost if the documents must be regenerated. In this thesis, Elucidative Development is introduced. It is an approach to create documents by partial generation. Partial generation means that some parts of the document are generated whereas others are handwritten. Elucidative Development retains manually written content when the document is regenerated. An integral part of Elucidative Development is a guidance system, which informs the author about changes in the generated content and helps him update the handwritten content.

Contents: 1 Introduction; 2 Problem Analysis and Solution Outline; 3 Background; 4 Elucidative Development; 5 Model-Driven Elucidative Development; 6 Extensions of Elucidative Development; 7 Tool Support for an Elucidative Development Environment; 8 Related Work; 9 Evaluation; 10 Conclusion.
21

Matshiga, Zulu Elijah. "Possible tax evasion due to the ineffective and inconsistent implementation of internal controls within the supply-chain management processes." Diss., 2018. http://hdl.handle.net/10500/24872.

Abstract:
This study investigated and examined the effectiveness and implementation of the existing internal controls designed specifically for exempted micro-enterprises (EMEs) contracting with the South African Social Security Agency (SASSA), in order to minimise the risk of possible tax evasion within the supply-chain management (SCM) processes. The research was completed by conducting a document review and face-to-face interviews with SASSA's SCM practitioners, risk manager, fraud and corruption manager, internal-control manager and internal auditor in order to identify risks of possible tax evasion within the SCM processes. It was concluded that there is a risk of possible tax evasion within the SCM processes due to the ineffectiveness and inconsistent implementation of internal controls designed for EMEs contracting with SASSA. This risk could be minimised by incorporating possible anti-tax-evasion procedures in the risk-assessment process, and ultimately in SASSA's broader fraud and corruption strategies. Such procedures should then help minimise funds being lost to the fiscus due to tax evasion in the SCM processes.
Taxation
M. Phil. (Accounting Sciences)