Dissertations / Theses on the topic 'Consistency constraints'

To see the other types of publications on this topic, follow the link: Consistency constraints.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 47 dissertations / theses for your research on the topic 'Consistency constraints.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Nightingale, Peter. "Consistency and the quantified constraint satisfaction problem." St Andrews, 2007. http://hdl.handle.net/10023/759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mäs, Stephan. "On the Consistency of Spatial Semantic Integrity Constraints." Neubiberg : Universitätsbibliothek der Universität der Bundeswehr, 2010. http://d-nb.info/1000831663/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wieweg, William. "Towards Arc Consistency in PLAS." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232081.

Full text
Abstract:
The Planning And Scheduling (PLAS) module of ICE (Intelligent Control Environment) is responsible for planning and scheduling a large fleet of vehicles. This process involves the creation of tasks to be executed by the vehicles. Using this information, PLAS decides which vehicles should execute which tasks; these decisions are modelled as constraint satisfaction problems. Solving the constraint satisfaction problems is slow. To improve efficiency, a number of different techniques exist. One of these is arc consistency, which entails taking a constraint satisfaction problem and evaluating its variables pairwise by applying the constraints among them. Using arc consistency, we can discern the candidate solutions to constraint satisfaction problems faster than by pure search. In addition, arc consistency allows us to detect and act early on inconsistencies in constraint satisfaction problems. The work in this master's thesis includes the implementation of a constraint solver for symbolic constraints, containing the arc consistency algorithm AC3. Furthermore, it encompasses the implementation of a constraint satisfaction problem generator, based on the Erdős-Rényi graph model and inspired by the quasigroup completion problem with holes, that allows the evaluation of the constraint solver on large problems. Using the constraint satisfaction problem generator, a set of experiments was performed to evaluate the constraint solver. Furthermore, a set of complementary scenarios using manually created constraint satisfaction problems was run to augment the experiments. The results show that the performance scales up well.
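The abstract above centres on the AC-3 arc consistency algorithm. Below is a minimal Python sketch of AC-3 for binary constraints over finite domains; the domain/constraint representation and the toy problem are illustrative assumptions, not the thesis's PLAS implementation.

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Remove values of x that have no support in y under constraint (x, y)."""
    allowed = constraints[(x, y)]
    removed = False
    for a in set(domains[x]):
        if not any((a, b) in allowed for b in domains[y]):
            domains[x].discard(a)
            removed = True
    return removed

def ac3(domains, constraints):
    """Enforce arc consistency; return False if some domain is wiped out."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False
            # Re-examine arcs pointing at x, except the one just processed.
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

# Toy problem: x < y with domains {1..3}.
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
lt = {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}
constraints = {("x", "y"): lt, ("y", "x"): {(b, a) for (a, b) in lt}}
print(ac3(domains, constraints), domains)   # x loses 3, y loses 1
```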
APA, Harvard, Vancouver, ISO, and other styles
4

Asplund, Mikael. "Restoring Consistency after Network Partitions." Licentiate thesis, Linköping : Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mück, Alexander. "The standard model in 5D: theoretical consistency and experimental constraints." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=974408107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mallur, Vikram. "A Model for Managing Data Integrity." Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20233.

Full text
Abstract:
Consistent, accurate and timely data are essential to the functioning of a modern organization. Managing the integrity of an organization’s data assets in a systematic manner is a challenging task in the face of continuous update, transformation and processing to support business operations. Classic approaches to constraint-based integrity focus on logical consistency within a database and reject any transaction that violates consistency, but leave unresolved how to fix or manage violations. More ad hoc approaches focus on the accuracy of the data and attempt to clean data assets after the fact, using queries to flag records with potential violations and using manual efforts to repair. Neither approach satisfactorily addresses the problem from an organizational point of view. In this thesis, we provide a conceptual model of constraint-based integrity management (CBIM) that flexibly combines both approaches in a systematic manner to provide improved integrity management. We perform a gap analysis that examines the criteria that are desirable for efficient management of data integrity. Our approach involves creating a Data Integrity Zone and an On Deck Zone in the database for separating the clean data from data that violates integrity constraints. We provide tool support for specifying constraints in a tabular form and generating triggers that flag violations of dependencies. We validate this by performing case studies on two systems used to manage healthcare data: PAL-IS and iMED-Learn. Our case studies show that using views to implement the zones does not cause any significant increase in the running time of a process.
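To make the zoning idea concrete, the sketch below routes records into a clean zone or an 'On Deck' zone according to declared constraints. It is only a hypothetical Python illustration: the record fields, constraint names and the `route` helper are invented here and are not the CBIM tool, PAL-IS or iMED-Learn.

```python
# Hypothetical sketch: partition records into a Data Integrity Zone (clean)
# and an On Deck Zone (constraint violations), in the spirit of CBIM.
records = [
    {"id": 1, "age": 34, "admission": "2011-02-27", "discharge": "2011-03-02"},
    {"id": 2, "age": -5, "admission": "2011-03-04", "discharge": "2011-03-01"},
]

# Tabular constraint specification: name -> predicate over a record.
constraints = {
    "age_non_negative": lambda r: r["age"] >= 0,
    "admission_before_discharge": lambda r: r["admission"] <= r["discharge"],
}

def route(records, constraints):
    integrity_zone, on_deck_zone = [], []
    for r in records:
        violated = [name for name, check in constraints.items() if not check(r)]
        if violated:
            on_deck_zone.append({**r, "violations": violated})  # flag, do not reject
        else:
            integrity_zone.append(r)
    return integrity_zone, on_deck_zone

clean, flagged = route(records, constraints)
print(len(clean), "clean;", flagged[0]["violations"])
```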
APA, Harvard, Vancouver, ISO, and other styles
7

Mäs, Stephan [Verfasser], Wolfgang [Akademischer Betreuer] Reinhardt, and Max [Akademischer Betreuer] Egenhofer. "On the Consistency of Spatial Semantic Integrity Constraints / Stephan Mäs. Wolfgang Reinhardt. Max Egenhofer. Universität der Bundeswehr München, Fakultät für Bauingenieur- und Vermessungswesen." Neubiberg : Universitätsbibliothek der Universität der Bundeswehr, 2010. http://d-nb.info/1000831663/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chapovalova, Valentina. "Consistency of Constraint Reifications by Reformulation." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-229577.

Full text
Abstract:
Many combinatorial problems can be formulated via constraints, i.e., relations between variables' values that must be satisfied for an assignment to count as a solution to the problem. Many of these combinatorial relations can be applied to an arbitrary number of arguments; such constraints are called global. Most global constraints (at least 82% in the Global Constraint Catalogue [1]) can be reformulated as a conjunction of total functions together with a constraint which can be directly reified. The reifications are useful in modelling several kinds of combinatorial problems, e.g., when there is a known number of satisfied constraints, but the exact set of satisfied constraints is a priori unknown. In this thesis, we apply different methods in order to determine the consistency level (domain or bounds consistency) of the known reifications, in order to estimate whether the reifications would prune effectively if they were implemented.
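A reified constraint is a constraint paired with a Boolean that records whether it holds, which is what makes models with 'a known number of satisfied constraints' expressible. The brute-force Python sketch below illustrates this on an invented toy problem; it is not code from the thesis.

```python
from itertools import product

# Reification by brute force: b_i <-> (constraint_i holds), and we require
# exactly K of the reified Booleans to be true.
constraints = [
    lambda x, y: x + y == 4,
    lambda x, y: x == y,
    lambda x, y: x < y,
]
K = 2  # known number of satisfied constraints

solutions = []
for x, y in product(range(5), repeat=2):
    b = [int(c(x, y)) for c in constraints]   # reified truth values
    if sum(b) == K:
        solutions.append((x, y, b))

print(solutions[:3])
```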
APA, Harvard, Vancouver, ISO, and other styles
9

Francisco, Rodriguez Maria Andreina. "Consistency of Constraint Networks Induced by Automaton-Based Constraint Specifications." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-156441.

Full text
Abstract:
In this work we discuss the consistency of constraints for which the set of solutions can be recognised by a deterministic finite automaton. Such an automaton induces a decomposition of the constraint into a conjunction of constraints. Since the level of filtering for the conjunction of constraints is not known, at any point during search there might be only one possible solution but, since all impossible values might not have yet been removed, we could be wasting time looking at impossible combinations of values. The so far most general result is that if the constraint hypergraph of such a decomposition is Berge-acyclic, then the decomposition provides hyper-arc consistency, which means that the decomposition achieves the best possible filtering. We focus our work on constraint networks that have alpha-acyclic, centred-cyclic or sliding-cyclic hypergraph representations. For each of these kinds of constraints networks we show systematically the necessary conditions to achieve hyper-arc consistency.
APA, Harvard, Vancouver, ISO, and other styles
10

Nightingale, Peter William. "Consistency and the Quantified Constraint Satisfaction Problem." Thesis, University of St Andrews, 2007. http://hdl.handle.net/10023/759.

Full text
Abstract:
Constraint satisfaction is a very well studied and fundamental artificial intelligence technique. Various forms of knowledge can be represented with constraints, and reasoning techniques from disparate fields can be encapsulated within constraint reasoning algorithms. However, problems involving uncertainty, or which have an adversarial nature (for example, games), are difficult to express and solve in the classical constraint satisfaction problem. This thesis is concerned with an extension to the classical problem: the Quantified Constraint Satisfaction Problem (QCSP). QCSP has recently attracted interest. In QCSP, quantifiers are allowed, facilitating the expression of uncertainty. I examine whether QCSP is a useful formalism. This divides into two questions: whether QCSP can be solved efficiently; and whether realistic problems can be represented in QCSP. In attempting to answer these questions, the main contributions of this thesis are the following: - the definition of two new notions of consistency; - four new constraint propagation algorithms (with eight variants in total), along with empirical evaluations; - two novel schemes to implement the pure value rule, which is able to simplify QCSP instances; - a new optimization algorithm for QCSP; - the integration of these algorithms and techniques into a solver named Queso; - and the modelling of the Connect 4 game, and of faulty job shop scheduling, in QCSP. These are set in context by a thorough review of the QCSP literature.
APA, Harvard, Vancouver, ISO, and other styles
11

Gutiérrez, Faxas Patricia. "Distributed Constraint Optimization Related with Soft Arc Consistency." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/98395.

Full text
Abstract:
Distributed Constraint Optimization Problems (DCOPs) can be used for modeling many multi-agent coordination problems. DCOPs involve a finite number of agents, variables and cost functions. The goal is to find a complete variable assignment with minimum global cost. This is achieved among several agents handling the variables and exchanging information about their cost evaluation until an optimal solution is found. Recently, researchers have proposed several distributed algorithms to optimally solve DCOPs. In the centralized case, techniques have been developed to speed up constraint optimization solving. In particular, search can be improved by enforcing soft arc consistency, which identifies inconsistent values that can be removed from the problem. Some soft consistency levels proposed are AC, FDAC and EDAC. The goal of this thesis is to include soft arc consistency techniques in DCOP resolution. We show that this combination causes substantial improvements in performance. Soft arc consistencies are conceptually equal in the centralized and distributed cases. However, maintaining soft arc consistencies in the distributed case requires a different approach. While in the centralized case all problem elements are available to the single agent performing the search, in the distributed case agents only know some parts of the problem and must exchange information to achieve the desired consistency level. In this process, the operations that modify the problem structures should be done in such a way that partial information of the global problem remains coherent on every agent. In this thesis we present three main contributions to optimal DCOP solving. First, we have studied and experimented with the complete solving algorithm BnB-ADOPT. As a result of this work, we have improved it to a large extent. We show that some BnB-ADOPT messages are redundant and can be removed without compromising optimality and termination. Also, when dealing with cost functions of arity higher than two, some issues appear in this algorithm. We propose a simple way to overcome them, obtaining a new version for the n-ary case. In addition, we present the new algorithm ADOPT(k), which generalizes the algorithms ADOPT and BnB-ADOPT. ADOPT(k) can perform a search strategy like ADOPT, like BnB-ADOPT, or like a hybrid of both, depending on the parameter k. Second, we have introduced soft arc consistency techniques in DCOPs, taking BnB-ADOPT+ as our base solving algorithm. During the search process, we enforce the soft arc consistency levels AC and FDAC, under the limitation that only unconditional deletions are propagated, obtaining important benefits in communication and computation. We enforce FDAC considering multiple orderings of the variables, obtaining savings in communication. Also, we propose DAC by token passing, a new way to propagate deletions during distributed search. Experimentally, this strategy turned out to be competitive when compared to FDAC. Third, we explore the inclusion of soft global constraints in DCOPs. We believe that soft global constraints enhance DCOP expressivity. We propose three different ways to include soft global constraints in DCOPs and extend the solving algorithm BnB-ADOPT+ to support them. In addition, we explore the impact of soft arc consistency maintenance in problems with soft global constraints. Experimentally, we measure the efficiency of the proposed algorithms in several benchmarks commonly used in the DCOP community.
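The soft arc consistency levels mentioned above (AC, FDAC, EDAC) all rely on projecting costs from binary cost functions onto unary ones. The sketch below shows that projection step in a centralized, toy form; the cost tables are invented, and this is not the distributed BnB-ADOPT+ machinery of the thesis.

```python
# Project binary costs onto unary costs (one soft-AC step on variable x w.r.t. y).
# c_bin[(a, b)] is the cost of (x=a, y=b); c_x[a] is the unary cost of x=a.
dom_x, dom_y = [0, 1], [0, 1]
c_x = {0: 0, 1: 1}
c_bin = {(0, 0): 2, (0, 1): 3, (1, 0): 0, (1, 1): 4}

def project(dom_x, dom_y, c_x, c_bin):
    for a in dom_x:
        alpha = min(c_bin[(a, b)] for b in dom_y)   # smallest cost supported by y
        for b in dom_y:
            c_bin[(a, b)] -= alpha                  # shift it out of the binary function
        c_x[a] += alpha                             # ... and into the unary one
    return c_x, c_bin

print(project(dom_x, dom_y, c_x, c_bin))
# After projection every value of x has a zero-cost support in y,
# and the unary costs give a sharper lower bound for pruning.
```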
APA, Harvard, Vancouver, ISO, and other styles
12

Du, Li. "The viewpoint consistency constraint in model-based vision." Thesis, University of Reading, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

García, Odón Amaia. "Presupposition projection and entailment relations." Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/94496.

Full text
Abstract:
In this dissertation, I deal with the problem of presupposition projection. I mostly focus on compound sentences composed of two clauses and conditional sentences in which the second clause carries a presupposition. The central claim is that the presupposition carried by the second clause projects by default, with the exception of cases in which the presupposition entails the first clause (or, in disjunctive sentences, the negation of the first clause). In the latter cases, the presupposition should not project, since it is logically stronger than the first clause (or its negation). Thus, in conjunctions, if the presupposition projected, the speaker’s assertion of the first clause would be uninformative. As for conditionals and disjunctions, if the presupposition projected, the speaker would show inconsistency in his/her beliefs by showing uncertainty about the truth value of the first clause (or its negation). I argue that, in conditionals, this uncertainty is conversationally implicated whereas, in disjunctions, it results from the context’s compatibility with the first disjunct. I maintain that, in cases where projection is blocked, the presupposition is conditionalized to the first clause (or its negation). I demonstrate that the conditionalization is motivated in a straightforward way by the pragmatic constraints on projection just described and that, contrary to what is defended by the so-called ‘satisfaction theory’, presupposition conditionalization is a phenomenon independent from local satisfaction.
En esta tesis, trato el problema de la proyección de presuposiciones. Me centro mayoritariamente en oraciones compuestas de dos cláusulas y en oraciones condicionales cuya segunda cláusula contiene una presuposición. El argumento central es que la presuposición contenida en la segunda cláusula proyecta por defecto, con la excepción de casos en los que la presuposición entraña la primera cláusula (o, en las oraciones disyuntivas, la negación de la primera cláusula). En estos últimos casos, la presuposición no debería proyectar, puesto que es lógicamente más fuerte que la primera cláusula (o su negación). Por tanto, en las oraciones conjuntivas, si la presuposición proyectase, la aseveración de la primera cláusula por parte del hablante no sería informativa. En cuanto a las oraciones condicionales y disyuntivas, si la presuposición projectase, el hablante mostraría inconsistencia en sus creencias al mostrar incertidumbre acerca del valor de verdad de la primera cláusula (o su negación). Sostengo que, en oraciones condicionales, esta incertidumbre es implicada conversacionalmente mientras que, en las oraciones disyuntivas, resulta de la compatibilidad contextual de la primera cláusula. Mantengo que, en casos en los que la proyección es bloqueada, la presuposición es condicionalizada a la primera cláusula (o su negación). Demuestro que la condicionalización es motivada de manera directa por las restricciones de tipo pragmático descritas arriba y que, contrariamente a la idea defendida por la así llamada ‘teoría de la satisfacción’, la condicionalización de la presuposición es un fenómeno independiente de la satisfacción local de la misma.
APA, Harvard, Vancouver, ISO, and other styles
14

Wahler, Michael. "Using patterns to develop consistent design constraints." Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Battle, Steven A. "A multiple representation approach to constraint satisfaction." Thesis, University of the West of England, Bristol, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321835.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Tran, Sy Nguyen. "Consistency techniques for test data generation." Université catholique de Louvain, 2005. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-05272005-173308/.

Full text
Abstract:
This thesis presents a new approach for automated test data generation of imperative programs containing integer, boolean and/or float variables. A test program (with procedure calls) is represented by an Interprocedural Control Flow Graph (ICFG). The classical testing criteria (statement, branch, and path coverage), widely used in unit testing, are extended to the ICFG. Path coverage is the core of our approach. Given a specified path of the ICFG, a path constraint is derived and solved to obtain a test case. The constraint solving is carried out based on a consistency notion. For statement (and branch) coverage, paths reaching a specified node or branch are dynamically constructed. The search for suitable paths is guided by the interprocedural control dependences of the program. The search is also pruned by our consistency filter. Finally, test data are generated by the application of the proposed path coverage algorithm. A prototype system implements our approach for C programs. Experimental results, including complex numerical programs, demonstrate the feasibility of the method and the efficiency of the system, as well as its versatility and flexibility to different classes of problems (integer and/or float variables; arrays, procedures, path coverage, statement coverage).
APA, Harvard, Vancouver, ISO, and other styles
17

Levine, Jonathan P. (Jonathan Philip). "A faster, more general network consistency algorithm for constraint satisfaction problems." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=60043.

Full text
Abstract:
A general-purpose constraint satisfaction algorithm has been developed as part of the FLITE system for flight simulator tuning. It offers an improved time complexity of O(a^n), as compared with O(n^2(a+1)^n) in (1). There are two steps to solving a constraint network. In the first step, all values which can never appear as part of any solution are removed from the domain of their corresponding variable. Smart starting points are used to jump to a branch of the tree which is more likely to hold the path being sought. Then backtracking is used to find all sets of consistent variable/value labelings which describe the solutions. Dynamic variable swapping is used to rearrange the order in which variables are bound, so as to reduce the size of the search tree. The algorithm works efficiently with sparse and fully connected constraint networks.
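As a rough illustration of backtracking with dynamic variable reordering of the kind described above, the Python sketch below picks the unbound variable with the smallest remaining domain at each step. The heuristic, the constraint encoding and the toy problem are assumptions made for this sketch, not the FLITE algorithm itself.

```python
def backtrack(domains, constraints, assignment=None):
    """Enumerate all solutions; bind the variable with the smallest domain first."""
    assignment = assignment or {}
    unbound = [v for v in domains if v not in assignment]
    if not unbound:
        yield dict(assignment)
        return
    var = min(unbound, key=lambda v: len(domains[v]))   # dynamic variable ordering
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            yield from backtrack(domains, constraints, assignment)
        del assignment[var]

# Constraints are written to succeed while any involved variable is still unbound.
def differ(u, v):
    return lambda a: u not in a or v not in a or a[u] != a[v]

domains = {"x": [1, 2], "y": [1, 2, 3], "z": [2]}
constraints = [differ("x", "y"), differ("y", "z"), differ("x", "z")]
print(list(backtrack(domains, constraints)))
```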
APA, Harvard, Vancouver, ISO, and other styles
18

Morgan, Elizabeth Alyson. "The foraging ecology of European shags (Phalacrocorax aristotelis) : flexibility, consistency and constraint." Thesis, University of Leeds, 2017. http://etheses.whiterose.ac.uk/18443/.

Full text
Abstract:
Consistency and flexibility in foraging behaviour play vital roles in organisms’ responses to variable and changing environments. There is a need to understand the causes and consequences of this variation, and to establish how different intrinsic and extrinsic factors alter behaviour at individual, population and species levels. Here I examine individual and population-level variation in the three-dimensional foraging behaviour of a short-ranging benthic-feeding marine predator, the European shag Phalacrocorax aristotelis, focusing on birds breeding in the Farne Islands, UK. Across three years, I found that birds breeding on neighbouring islands were spatially segregated at sea but that this segregation was much stronger in years with higher productivity. I also found that birds displayed individual foraging site fidelity (IFSF), both within and across years, and that females with higher IFSF bred earlier and were in better condition than birds with low IFSF, although this effect was not seen in males. In addition to annual and spatial variation, the characteristics of birds’ foraging trips were also affected by time of day, state of tide and wind speed and direction, with females tending to respond more strongly than males. At a larger spatial scale, the foraging ranges of birds at different colonies around the UK showed a positive relationship with distance to the nearest coastline. These findings highlight the importance of considering variation in foraging behaviour at an appropriate scale and could help improve predictions of individual and population-level responses to future environmental changes.
APA, Harvard, Vancouver, ISO, and other styles
19

Woodward, Robert J. "Les cohérences fortes : où, quand, et combien." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS145/document.

Full text
Abstract:
Determining whether or not a Constraint Satisfaction Problem (CSP) has a solution is NP-complete. CSPs are solved by inference (i.e., enforcing consistency), conditioning (i.e., doing search), or, more commonly, by interleaving the two mechanisms. The most common consistency property enforced during search is Generalized Arc Consistency (GAC). In recent years, new algorithms that enforce consistency properties stronger than GAC have been proposed and shown to be necessary to solve difficult problem instances. We frame the question of balancing the cost and the pruning effectiveness of consistency algorithms as the question of determining where, when, and how much of a higher-level consistency to enforce during search. To answer the 'where' question, we exploit the topological structure of a problem instance and target high-level consistency where cycle structures appear. To answer the 'when' question, we propose a simple, reactive, and effective strategy that monitors the performance of backtrack search and triggers a higher-level consistency as search thrashes. Lastly, for the question of 'how much,' we monitor the amount of updates caused by propagation and interrupt the process before it reaches a fixpoint. Empirical evaluations on benchmark problems demonstrate the effectiveness of our strategies.
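The 'when' strategy is essentially a monitor wrapped around backtrack search. The toy Python sketch below triggers one stronger (and costlier) filtering pass once a backtrack counter crosses a threshold; the threshold, the stand-in filter and the example problem are all invented for illustration and are not the thesis's algorithms.

```python
def consistent(assignment, constraints):
    return all(
        x not in assignment or y not in assignment or (assignment[x], assignment[y]) in allowed
        for (x, y), allowed in constraints.items()
    )

def strong_filter(domains, constraints):
    """Remove values with no support; a stand-in for a higher-level consistency."""
    for (x, y), allowed in constraints.items():
        domains[x] = [a for a in domains[x] if any((a, b) in allowed for b in domains[y])]
        domains[y] = [b for b in domains[y] if any((a, b) in allowed for a in domains[x])]

def search(domains, constraints, assignment, stats, threshold=2):
    if stats["backtracks"] >= threshold and not stats["strengthened"]:
        strong_filter(domains, constraints)           # 'when': triggered by thrashing
        stats["strengthened"] = True
    unbound = [v for v in domains if v not in assignment]
    if not unbound:
        return dict(assignment)
    var = unbound[0]
    for value in list(domains[var]):
        assignment[var] = value
        if consistent(assignment, constraints):
            result = search(domains, constraints, assignment, stats, threshold)
            if result is not None:
                return result
        del assignment[var]
        stats["backtracks"] += 1
    return None

lt = {(a, b) for a in range(1, 5) for b in range(1, 5) if a < b}
domains = {"x": [1, 2, 3, 4], "y": [1, 2, 3, 4], "z": [1, 2, 3, 4]}
stats = {"backtracks": 0, "strengthened": False}
print(search(domains, {("x", "y"): lt, ("y", "z"): lt}, {}, stats), stats)
```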
APA, Harvard, Vancouver, ISO, and other styles
20

Gennari, Rosella. "Mapping Inferences: Constraint Propagation and Diamond Satisfaction." Diss., Universiteit van Amsterdam, 2002. http://hdl.handle.net/10919/71553.

Full text
Abstract:
The main theme shared by the two main parts of this thesis is EFFICIENT AUTOMATED REASONING. Part I is focussed on a general theory underpinning a number of efficient approximate algorithms for Constraint Satisfaction Problems (CSPs), the constraint propagation algorithms. In Chapter 3, we propose a Structured Generic Algorithm schema (SGI) for these algorithms. This iterates functions according to a certain strategy, i.e., by searching for a common fixpoint of the functions. A simple theory for SGI is developed by studying properties of functions and of the ways these influence the basic strategy. One of the primary objectives of our theorisation is thus the following: using SGI or some of its variations for DESCRIBING and ANALYSING HOW the "pruning" and "propagation" process is carried through by constraint propagation algorithms. Hence, in Chapter 4, different domains of functions (e.g., domain orderings) are related to different classes of constraint propagation algorithms (e.g., arc consistency algorithms); thus each class of constraint propagation algorithms is associated with a "type" of function domains, and so separated from the others. Then we analyse each such class: we distinguish functions on the same domains by their different ways of performing pruning (point or set based), and consequently differentiate between algorithms of the same class (e.g., AC-1 and AC-3 versus AC-4 or AC-5). Besides, we also show how properties of functions (e.g., commutativity or stationarity) are related to different strategies of propagation in constraint algorithms of the same class (see, for instance, AC-1 versus AC-3). In Chapter 5 we apply the SGI schema to the case of soft CSPs (a generalisation of CSPs with a sort of preferences), thereby clarifying some of the similarities and differences between the "classical" and soft constraint-propagation algorithms. Finally, in Chapter 6, we summarise and characterise all the functions used for constraint propagation; in fact, the other goal of our theorisation is abstracting WHICH functions, iterated as in SGI or its variations, perform the task of "pruning" or "propagation" of inconsistencies in constraint propagation algorithms. We focus on relations and relational structures in Part II of the thesis. More specifically, modal languages allow us to talk about various relational structures and their properties. Once the latter are formulated in a modal language, they can be passed to automated theorem provers and tested for satisfiability with respect to certain modal logics. Our task, in this part, can be described as follows: determining the satisfiability of modal formulas in an efficient manner. In Chapter 8, we focus on one way of doing this: we refine the standard translation into the layered translation, and use existing theorem provers for first-order logic on the output of this refined translation. We provide ample experimental evidence of the improvements in performance that were obtained by means of the refinement. The refinement of the standard translation is based on the tree model property. This property is also used in the basic algorithm schema in Chapter 9; the original schema is due to [seb97]. The proposed algorithm proceeds layer by layer in the modal formula and in its candidate models, applying constraint propagation and satisfaction algorithms for finite CSPs at each layer.
With Chapter 9, we wish to draw the attention of constraint programmers to modal logics, and of modal logicians to CSPs. Modal logics themselves express interesting problems in terms of relations and unary predicates, like temporal reasoning tasks. On the other hand, constraint algorithms manipulate relations in the form of constraints, and unary predicates in the form of domains or unary constraints, see Chapter 6. Thus the question of how efficiently those algorithms can be applied to modal reasoning problems seems quite natural and challenging.
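At its core, the SGI schema iterates a set of functions until they reach a common fixpoint. The Python sketch below shows that skeleton on an invented pair of domain-pruning functions; it is a generic illustration, not the thesis's formal schema.

```python
def common_fixpoint(state, functions):
    """Iterate functions (in any order) until none of them changes the state."""
    pending = list(functions)
    while pending:
        f = pending.pop()
        new_state = f(state)
        if new_state != state:
            state = new_state
            pending = list(functions)   # a change may re-enable other functions
    return state

# Invented example: domains shrink monotonically under two 'propagation' functions.
def f1(domains):  # keep values of x that are below some value of y
    x, y = domains
    return (frozenset(a for a in x if any(a < b for b in y)), y)

def f2(domains):  # keep values of y that are above some value of x
    x, y = domains
    return (x, frozenset(b for b in y if any(a < b for a in x)))

start = (frozenset({1, 2, 3}), frozenset({1, 2, 3}))
print(common_fixpoint(start, [f1, f2]))   # x -> {1, 2}, y -> {2, 3}
```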
APA, Harvard, Vancouver, ISO, and other styles
21

Kumar, Dinesh. "Boundary-constrained inverse consistent image registration and its applications." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1006.

Full text
Abstract:
This dissertation presents a new inverse consistent image registration (ICIR) method called boundary-constrained inverse consistent image registration (BICIR). ICIR algorithms jointly estimate the forward and reverse transformations between two images while minimizing the inverse consistency error (ICE). The ICE at a point is defined as the distance between the starting and ending location of a point mapped through the forward transformation and then the reverse transformation. The novelty of the BICIR method is that a region of interest (ROI) in one image is registered with its corresponding ROI. This is accomplished by first registering the boundaries of the ROIs and then matching the interiors of the ROIs using intensity registration. The advantages of this approach include providing better registration at the boundary of the ROI, eliminating registration errors caused by registering regions outside the ROI, and theoretically minimizing computation time since only the ROIs are registered. The first step of the BICIR algorithm is to inverse consistently register the boundaries of the ROIs. The resulting forward and reverse boundary transformations are extended to the entire ROI domains using the Element Free Galerkin Method (EFGM). The transformations produced by the EFGM are then made inverse consistent by iteratively minimizing the ICE. These transformations are used as initial conditions for inverse-consistent intensity-based registration of the ROI interiors. Weighted extended B-splines (WEB-splines) are used to parameterize the transformations. WEB-splines are used instead of B-splines since WEB-splines can be defined over an arbitrarily shaped ROI. Results are presented showing that the BICIR method provides better registration of 2D and 3D anatomical images than the small-deformation, inverse-consistent, linear-elastic (SICLE) image registration algorithm which registers entire images. Specifically, the BICIR method produced registration results with lower similarity cost, reduced boundary matching error, increased ROI relative overlap, and lower inverse consistency error than the SICLE algorithm.
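As a concrete reading of the inverse consistency error defined above, the NumPy sketch below composes a forward and a reverse transformation at a grid of points and measures how far each point lands from where it started. The two toy deformations are invented; this is not the BICIR or SICLE registration model.

```python
import numpy as np

def forward(points):
    """Toy forward transformation: a smooth shift of x that depends on y."""
    shifted = points.copy()
    shifted[:, 0] += 0.1 * np.sin(points[:, 1])
    return shifted

def reverse(points):
    """Imperfect inverse of 'forward' (deliberately not exact)."""
    shifted = points.copy()
    shifted[:, 0] -= 0.1 * np.sin(points[:, 1] + 0.05)
    return shifted

def inverse_consistency_error(points, fwd, rev):
    """ICE at each point: distance between the point and rev(fwd(point))."""
    round_trip = rev(fwd(points))
    return np.linalg.norm(round_trip - points, axis=1)

pts = np.stack(np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5)), axis=-1).reshape(-1, 2)
ice = inverse_consistency_error(pts, forward, reverse)
print(f"mean ICE = {ice.mean():.4f}, max ICE = {ice.max():.4f}")
```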
APA, Harvard, Vancouver, ISO, and other styles
22

Geraldo, Issa Cherif. "On the consistency of some constrained maximum likelihood estimator used in crash data modelling." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10184/document.

Full text
Abstract:
Most of the statistical methods used in data modeling require the search for locally optimal solutions but also the estimation of standard errors linked to these solutions. These methods consist in maximizing, by successive approximations, the likelihood function or an approximation of it. Generally, one uses numerical methods adapted from the Newton-Raphson method or Fisher's scoring. Because they require matrix inversions, these methods can be complex to implement numerically in large dimensions or when the matrices involved are not invertible. To overcome these difficulties, iterative procedures requiring no matrix inversion, such as MM (Minorization-Maximization) algorithms, have been proposed and are considered to be efficient for problems in large dimensions and for some multivariate discrete distributions. Among the new approaches proposed for data modeling in road safety is an algorithm called the iterative cyclic algorithm (CA). This thesis has two main objectives: (a) the first is to study the convergence properties of the cyclic algorithm from both numerical and stochastic viewpoints, and (b) the second is to generalize the CA to more general models integrating discrete multivariate distributions and to compare the performance of the generalized CA to that of its competitors.
APA, Harvard, Vancouver, ISO, and other styles
23

Konopacky, Q. M., C. Marois, B. A. Macintosh, R. Galicher, T. S. Barman, S. A. Metchev, and B. Zuckerman. "ASTROMETRIC MONITORING OF THE HR 8799 PLANETS: ORBIT CONSTRAINTS FROM SELF-CONSISTENT MEASUREMENTS." IOP PUBLISHING LTD, 2016. http://hdl.handle.net/10150/621227.

Full text
Abstract:
We present new astrometric measurements from our ongoing monitoring campaign of the HR 8799 directly imaged planetary system. These new data points were obtained with NIRC2 on the W.M. Keck II 10 m telescope between 2009 and 2014. In addition, we present updated astrometry from previously published observations in 2007 and 2008. All data were reduced using the SOSIE algorithm, which accounts for systematic biases present in previously published observations. This allows us to construct a self-consistent data set derived entirely from NIRC2 data alone. From this data set, we detect acceleration for two of the planets (HR 8799b and e) at >3 sigma. We also assess possible orbital parameters for each of the four planets independently. We find no statistically significant difference in the allowed inclinations of the planets. Fitting the astrometry while forcing coplanarity also returns χ² values consistent to within 1 sigma of the best-fit values, suggesting that if inclination offsets of less than or similar to 20 degrees are present, they are not detectable with current data. Our orbital fits also favor low eccentricities, consistent with predictions from dynamical modeling. We also find period distributions consistent to within 1 sigma with a 1:2:4:8 resonance between all planets. This analysis demonstrates the importance of minimizing astrometric systematics when fitting for solutions to highly undersampled orbits.
APA, Harvard, Vancouver, ISO, and other styles
24

Ruiz, Fuertes María Idoia. "On the Consistency, Characterization, Adaptability and Integrity of Database Replication Systems." Doctoral thesis, Universitat Politècnica de València, 2011. http://hdl.handle.net/10251/11800.

Full text
Abstract:
From the appearance of the first distributed databases to today's modern replication systems, the research community has proposed multiple protocols to manage the distribution and replication of data, together with concurrency control algorithms to handle the transactions running on all nodes of the system. Many protocols are therefore available, each with different characteristics and performance and guaranteeing different consistency levels. To know which replication protocol is the most suitable, two aspects must be considered: the required level of consistency and isolation (i.e., the correctness criterion), and the properties of the system (i.e., the scenario), which determines the achievable performance. Regarding correctness criteria, one-copy serializability is widely accepted as the highest level of correctness. However, its definition allows different interpretations regarding replica consistency. This thesis establishes a correspondence between memory consistency models, as defined in the field of distributed shared memory, and the possible levels of replica consistency, thus defining new correctness criteria that correspond to the different interpretations identified for one-copy serializability. Once the correctness criterion has been selected, the performance achievable by a system depends to a large extent on the scenario, that is, on the combination of the system environment and the applications running on it. For the administrator to be able to select an appropriate replication protocol, the available protocols must be fully and deeply understood. A good description of each candidate is essential, but a common framework is imperative in order to compare the different options and estimate their performance in a given scenario. The results presented in this thesis fulfil the stated objectives and constitute a contribution to the state of the art of database replication at the time the respective work was started. These results are also relevant because they open the door to possible future contributions.
Ruiz Fuertes, MI. (2011). On the Consistency, Characterization, Adaptability and Integrity of Database Replication Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11800
APA, Harvard, Vancouver, ISO, and other styles
25

Densing, Martin. "Hydro-electric power plant dispatch-planning : multi-stage stochastic programming with time-consistent constraints on risk." Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Lawrence, Shawn A. (Shawn Adam) 1975. "Kinematically consistent, elastic block model for the eastern Mediterranean constrained by GPS measurements." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/54506.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 2003.
Includes bibliographical references (p. 43-59).
I use a Global Positioning System (GPS) velocity field to constrain block models of the eastern Mediterranean and surrounding regions that account for the angular velocities of constituent blocks and elastic strain accumulation on block-bounding faults in the interseismic period. Kinematically consistent fault slip rates and locking depths are estimated by this method. Eleven blocks are considered, including the major plates, based largely on previous geodetic, seismic, and geologic studies: Eurasia (EU), Nubia (NU), Arabia (AR), Anatolia (AN), Caucasus (CA), South Aegea (AE), Central Greece (GR), North Aegea (NE), Southeast Aegea (SE), Macedonia (MA), and Adria (AD). Two models are presented, one in which the best-fitting locking depth for the entire region (-15 km) is used on all boundaries (Model A), and one in which shallower locking depths are used on the Marmara Fault, the Hellenic and Cyprus Arcs, and in the Greater Caucasus (Model B), based on a consideration of locally best-fitting locking depths. An additional block, Black Sea (BS), is postulated in a third model. The models are in fair to good agreement with the results of previous studies of plate motion, fault slip rates, seismic moment rates and paleomagnetic rotations. Notably, some block pairs in the Aegean region have Euler poles on, or near to, their common boundaries, in qualitative agreement with so-called pinned block models, e.g., for the transfer of slip from the right-lateral North Anatolian Fault system to a set of left-lateral and normal faults in central and northern Greece (McKenzie and Jackson, 1983; Taymaz et al., 1991a; Goldsworthy et al., 2002).
In addition, roughly three-quarters of the deformation in the Hellenic Arc and Greater Caucasus appears to be aseismic, in approximate agreement with previous studies (Jackson and McKenzie, 1988; Jackson, 1992). Increased data coverage will better constrain block motions, the locations of boundaries and the applicability of this method.
by Shawn A. Lawrence.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
27

Zaichenkov, Pavel. "A method for consistent non-local configuration of component interfaces." Thesis, University of Hertfordshire, 2017. http://hdl.handle.net/2299/19053.

Full text
Abstract:
Service-oriented computing is a popular technology that facilitates the development of large-scale distributed systems. However, the modular composition and flexible coordination of such applications still remains challenging for the following reasons: 1) the services are provided as loosely coupled black boxes that only expose their interfaces to the environment; 2) interacting services are not usually known in advance: web services are dynamically chosen to fulfil certain roles and are often replaced by services with a similar functionality; 3) the nature of the service-based application is decentralised. Loose coupling of web services is often lost when it comes to the construction of an application from services. The reason is that the object-oriented paradigm, which is widely used in the implementation of web services, does not provide a mechanism for service interface self-tuning. As a result, it negatively impacts upon the interoperability of web services. In this dissertation we present a formal method for automatic service configuration in the presence of subtyping, polymorphism, and flow inheritance. This is a challenging problem. On the one hand, the interface description language must be flexible enough to maintain service compatibility in various contexts without any modification to the service itself. On the other hand, the composition of interfaces in a distributed environment must be provably consistent. Our method is based on constraint satisfaction and Boolean satisfiability. First, we define a language for specifying service interfaces in a generic form, which is compatible with a variety of contexts. The language provides support for parametric polymorphism, Boolean variables, which are used to control dependencies between any elements of interface collections, and flow inheritance using extensible records and variants. We implemented the method as a constraint satisfaction solver. In addition to this, we present a protocol for interface configuration. It specifies a sequence of steps that leads to the generation of context-specific service libraries from generic services. Furthermore, we developed a toolchain that performs a complete interface configuration for services written in C++. We integrated support for flexible interface objects (i.e. objects that can be transferred in the application along with their structural description). Although the protocol relies solely on interfaces and does not take behaviour concerns into account, it is capable of finding discrepancies between input and output interfaces for simple stateful services, which only perform message synchronisation. Two running examples (a three buyers use-case and an image processing application) are used along the way to illustrate our approach. Our results seem to be useful for service providers that run their services in the cloud. The reason is twofold. Firstly, interfaces and the code behind them can be generic as long as they are sufficiently configurable. No communication between service designers is necessary in order to ensure consistency in the design. Instead, the interface correspondence in the application is ensured by the constraint satisfaction algorithm, which we have already designed. Secondly, the configuration and compilation of every service are separated from the rest of the application. This prevents source code leaks in proprietary software which is running in the cloud.
APA, Harvard, Vancouver, ISO, and other styles
28

Sadeghi, Rezvan. "Consistency of global and local scheduling decisions in semiconductor manufacturing." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEM023.

Full text
Abstract:
The operational level in semiconductor manufacturing can be divided into a global level and a local level. The global level refers to the scheduling decisions and production control for the whole manufacturing facility (fab), while the local level deals with those issues in each work area. The global level provides objectives or constraints for the local level. In this thesis, we propose a general framework which aims at supporting and controlling the decisions taken at the local level to deal with consistency problems between global and local scheduling decisions. The framework is composed of two layers. The bottom layer includes local policies used in each work center. The top layer consists of global objectives, global information and a global strategy, which is the core of this framework. The proposed global strategy aims at controlling local policies as well as production processes. The idea is to periodically run the global strategy while production is performed, to guide the production process towards achieving global objectives and thus ensure consistency between decisions taken at the global and local levels. We propose two types of global strategy: (1) an evaluation-based strategy, which aims at improving the production process with no guarantee of determining an optimal solution, and (2) an optimization-based strategy, based on a Linear Programming model. In order to evaluate the performance of the proposed framework, we develop a data-driven generic simulation model for semiconductor manufacturing facilities. The simulation model is a combination of Agent-Based and Discrete Event modelling methods developed with the software AnyLogic. Since the standard solver IBM ILOG CPLEX is used to solve the linear programming model, we describe its integration with AnyLogic. A set of experiments on industrial instances is presented and discussed. In addition, this thesis deals with the management of time constraints. In a semiconductor manufacturing facility, time constraints are associated with two process steps to ensure the yield and quality of lots. A time constraint corresponds to a maximum time that a lot may spend between the two steps. If a time constraint is not satisfied by a lot, this lot will be scrapped or reprocessed. Therefore, because manufacturing equipment is expensive and cycle times must be minimized, efficiently controlling the start of lots into time-constrained sections is important. We propose an approach which estimates the probability of satisfying a time constraint before starting a lot in the first step of the time constraint. This approach was implemented and validated on industrial data.
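The final decision described above, whether to start a lot into a time-constrained section, can be framed as estimating P(elapsed time ≤ limit). The Monte Carlo sketch below illustrates that framing with invented queue- and process-time distributions; it is not the estimation approach developed in the thesis.

```python
import random

def prob_within_limit(limit_hours, n_samples=100_000, seed=0):
    """Estimate the probability that queue + process time between the two
    time-constrained steps stays under the limit (toy distributions)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_samples):
        queue = rng.expovariate(1 / 3.0)       # mean 3 h waiting for the tool
        process = rng.gauss(2.0, 0.3)          # roughly 2 h processing at the next step
        if queue + process <= limit_hours:
            ok += 1
    return ok / n_samples

p = prob_within_limit(limit_hours=8.0)
print(f"estimated P(time constraint satisfied) = {p:.3f}")
# A dispatcher would only start the lot if p exceeds a chosen risk threshold.
```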
APA, Harvard, Vancouver, ISO, and other styles
29

Janečková, Jitka. "Použití programování s omezujícími podmínkami při řešení diskrétních úloh." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-81927.

Full text
Abstract:
Application of constraint programming (CP) is one of the possible ways of solving discrete problems. It can be used both to search for a feasible solution and to optimize. CP offers a whole range of approaches for finding a solution or for accelerating the search: search algorithms, consistency techniques, and propagation algorithms, which are essentially a combination of the two preceding methods. For optimization, the branch-and-bound approach is used most often; it differs in some aspects from the method of the same name used in mathematical programming (MP). The comparison of CP and MP is interesting in many other respects. With CP the formulation of problems is more flexible, which often allows for simpler and smaller models. On the other hand, its disadvantage is its limited scope: a constraint satisfaction (optimisation) problem, as the constraint programming problem is called, cannot contain any continuous (non-discrete) variables. CP is suitable especially for problems with a lot of constraints and only a few variables, ideally only two. The thesis first introduces the basic terms of constraint programming, then describes algorithms and techniques used for solving discrete problems, and compares CP with mathematical programming.
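For orientation, here is a minimal, generic sketch (our own illustration, not code from the thesis) of the arc-consistency propagation that underlies the consistency techniques mentioned above; the AC-3 scheme repeatedly removes values that have no support under a binary constraint:

    from collections import deque

    def revise(domains, x, y, constraint):
        """Remove values of x without a supporting value in y; report whether anything changed."""
        removed = False
        for vx in list(domains[x]):
            if not any(constraint(vx, vy) for vy in domains[y]):
                domains[x].remove(vx)
                removed = True
        return removed

    def ac3(domains, constraints):
        """constraints maps a directed arc (x, y) to a predicate c(vx, vy)."""
        queue = deque(constraints)
        while queue:
            x, y = queue.popleft()
            if revise(domains, x, y, constraints[(x, y)]):
                if not domains[x]:
                    return False  # wipe-out: the problem is inconsistent
                queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
        return True

    doms = {"a": {1, 2, 3}, "b": {1, 2, 3}}
    cons = {("a", "b"): lambda va, vb: va < vb,
            ("b", "a"): lambda vb, va: va < vb}
    print(ac3(doms, cons), doms)  # True {'a': {1, 2}, 'b': {2, 3}}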
APA, Harvard, Vancouver, ISO, and other styles
30

Khansalar, Ehsan. "The consistent estimation of future cash flow and future earnings : a predictive model with accounting double entry constraint." Thesis, University of Sussex, 2011. http://sro.sussex.ac.uk/id/eprint/7402/.

Full text
Abstract:
In empirical financial accounting research, there continues to be a debate as to what the best predictors of future earnings and future cash flows might be. Past accruals, earnings and cash flows are the most common predictors, but there is no consensus over their relative contributions, and little attention to the underlying accounting identities that link the components of these three prominent variables. The aim of this thesis is to investigate this controversy further, and to apply an innovative method which yields consistent estimations of future earnings and cash flows, with higher precision and greater efficiency than is the case in published results to date. The estimation imposes constraints based on financial statement articulation, using a system of structural regressions and a framework of simultaneous linear equations, which allows for the most basic property of accounting - double entry book-keeping - to be incorporated as a set of constraints within the model. In predicting future cash flows, the results imply that the constrained model which observes the double entry condition is superior to the models that are not constrained in this way, producing (a) rational signs consistent with expectations, not only in the entire sample but also in each industry, (b) evidence that double entry holds, based on the Wald test that the estimated marginal responses sum to zero, and (c) confirmation of model improvement by way of a higher likelihood and greater precision attached to predictor variables. Furthermore, by then using an appropriately specified model that observes the double entry constraint in order to predict earnings, the thesis reports statistically significant results, across all industries, that cash flows are superior to accruals in explaining future earnings, indicating also that accruals with a lower level of reliability tend to be more relevant in this respect.
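The following is only a schematic, hedged illustration (our own; the data, the single sum-to-zero constraint and all names are invented stand-ins for the double-entry condition described above, not the thesis's system of structural regressions) of how a linear equality constraint can be imposed on a least-squares fit through its KKT system:

    import numpy as np

    def constrained_ols(X, y, A, c):
        """Minimise ||y - X b||^2 subject to A b = c (solved via the KKT equations)."""
        n_coef, n_con = X.shape[1], A.shape[0]
        kkt = np.block([[2.0 * X.T @ X, A.T],
                        [A, np.zeros((n_con, n_con))]])
        rhs = np.concatenate([2.0 * X.T @ y, c])
        return np.linalg.solve(kkt, rhs)[:n_coef]  # drop the Lagrange multipliers

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([0.5, -0.2, -0.3]) + rng.normal(scale=0.1, size=200)
    b = constrained_ols(X, y, A=np.ones((1, 3)), c=np.zeros(1))
    print(b, b.sum())  # the fitted coefficients sum to (numerically) zero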
APA, Harvard, Vancouver, ISO, and other styles
31

Hosseinyalamdary, Saivash Hosseinyalamdary. "Traffic Scene Perception using Multiple Sensors for Vehicular Safety Purposes." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1462803166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Wahbi, Mohamed. "Algorithms and Ordering Heuristics for Distributed Constraint Satisfaction Problems." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2012. http://tel.archives-ouvertes.fr/tel-00718537.

Full text
Abstract:
Distributed Constraint Satisfaction Problems (DisCSPs) make it possible to formalize various problems arising in distributed artificial intelligence. These problems consist in finding a consistent combination of the actions of several agents. In this thesis we made several contributions within the DisCSP framework. First, we proposed Nogood-Based Asynchronous Forward-Checking (AFC-ng). In AFC-ng, agents use nogoods to justify each removal of a value from the domain of each variable. Besides the use of nogoods, several simultaneous backtracks coming from different agents to different destinations are allowed. Second, we exploit the intrinsic characteristics of the constraint network to run several AFC-ng search processes asynchronously along each branch of the pseudo-tree obtained from the constraint graph, in the Asynchronous Forward-Checking Tree (AFC-tree) algorithm. Next, we propose two new synchronous search algorithms based on the same mechanism as AFC-ng; however, instead of maintaining forward checking on the not-yet-instantiated agents, we propose to maintain arc consistency. We then propose Agile Asynchronous Backtracking (Agile-ABT), an asynchronous reordering algorithm that is free of the usual restrictions of asynchronous backtracking algorithms. We also proposed a new correct method for comparing orders in ABT_DO-Retro: this method determines the most relevant order by comparing agent indices as soon as the counters of a given position in the timestamp are equal. Finally, we present a new, completely restructured version of the DisChoco platform for solving distributed constraint satisfaction and optimization problems.
APA, Harvard, Vancouver, ISO, and other styles
33

Bin, Hammam Ghassan Mohammed. "Whole-Body Motion Retargeting for Humanoids." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1408367811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Wang, Fei [Verfasser], Wolfgang [Akademischer Betreuer] Reinhardt, and Anders [Akademischer Betreuer] Östman. "Handling Data Consistency through Spatial Data Integrity Rules in Constraint Decision Tables / Fei Wang. Universität der Bundeswehr München, Fakultät für Bauingenieur- und Vermessungswesen. Gutachter: Wolfgang Reinhardt ; Anders Östman. Betreuer: Wolfgang Reinhardt." Neubiberg : Universitätsbibliothek der Universität der Bundeswehr München, 2008. http://d-nb.info/1062723678/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

"Solving finite domain constraint hierarchies by local consistency and tree search." 2002. http://library.cuhk.edu.hk/record=b5891083.

Full text
Abstract:
by Hui Kau Cheung Henry.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2002.
Includes bibliographical references (leaves 107-112).
Abstracts in English and Chinese.
Table of contents (page numbers omitted):
Abstract
Acknowledgments
Chapter 1: Introduction
  1.1 Motivation
  1.2 Organizations of the Thesis
Chapter 2: Background
  2.1 Constraint Satisfaction Problems
    2.1.1 Local Consistency Algorithm
    2.1.2 Backtracking Solver
    2.1.3 The Branch-and-Bound Algorithm
  2.2 Over-constrained Problems
    2.2.1 Weighted Constraint Satisfaction Problems
    2.2.2 Possibilistic Constraint Satisfaction Problems
    2.2.3 Fuzzy Constraint Satisfaction Problems
    2.2.4 Partial Constraint Satisfaction Problems
    2.2.5 Semiring-Based Constraint Satisfaction Problems
    2.2.6 Valued Constraint Satisfaction Problems
  2.3 The Theory of Constraint Hierarchies
  2.4 Related Work
    2.4.1 An Incremental Hierarchical Constraint Solver
    2.4.2 Transforming Constraint Hierarchies into Ordinary Constraint Systems
    2.4.3 The SCSP Framework
    2.4.4 The DeltaStar Algorithm
    2.4.5 A Plug-In Architecture of Constraint Hierarchy Solvers
Chapter 3: Local Consistency in Constraint Hierarchies
  3.1 A Reformulation of Constraint Hierarchies
    3.1.1 Error Indicators
    3.1.2 A Reformulation of Comparators
    3.1.3 A Reformulation of the Solution Set
  3.2 Local Consistency in Classical CSPs
  3.3 Local Consistency in SCSPs
  3.4 Local Consistency in CHs
    3.4.1 The Operations of Error Indicators
    3.4.2 Constraint Hierarchy k-Consistency
    3.4.3 A Comparison between CHAC and PAC
    3.4.4 The CHAC Algorithm
    3.4.5 Time and Space Complexities of the CHAC Algorithm
    3.4.6 Correctness of the CHAC Algorithm
Chapter 4: A Consistency-Based Finite Domain Constraint Hierarchy Solver
  4.1 The Branch-and-Bound CHAC Solver
  4.2 Correctness of the Branch-and-Bound CHAC Solver
  4.3 An Example Execution Trace
  4.4 Experiments and Results
    4.4.1 Experimental Setup
    4.4.2 The First Experiment
    4.4.3 The Second Experiment
Chapter 5: Concluding Remarks
  5.1 Summary and Contributions
  5.2 Future Work
Bibliography
APA, Harvard, Vancouver, ISO, and other styles
36

Lin, Hsin-nan, and 林信男. "The Research of Maintaining Consistency on Process Timing Constraints." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/00911137600739596735.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University (國立中山大學)
Graduate Institute of Information Management
88 (ROC calendar)
Advances in information technology have forced many enterprises to reconsider the way their business processes are conducted. Among the various information technologies, workflow management systems (WFMSs) are widely recognized as an effective tool to greatly improve the efficiency of business processes and customer satisfaction. Today, a great number of commercial WFMSs are available on the market; however, none of them is fully successful, due to the lack of some important features. One of the features needed by many business processes is the specification and enforcement of time constraints. In this thesis, we propose a time constraint model that helps workflow designers define and verify time constraints. Different constraints may be verified at different times, e.g., definition time, invocation time, or execution time. A workflow instance, once detected to violate some time constraint, can be terminated immediately to avoid wasting precious resources and to provide a prompt response to users. A variety of algorithms for verifying time constraints are proposed and analyzed.
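A toy sketch of one such execution-time check (ours, not one of the thesis's algorithms; the function, durations and deadline are hypothetical): an instance is terminated early when even the minimum remaining work can no longer meet its deadline.

    def can_still_meet_deadline(elapsed, remaining_min_durations, deadline):
        """True if the instance can still finish on time in the best case."""
        return elapsed + sum(remaining_min_durations) <= deadline

    if not can_still_meet_deadline(elapsed=12, remaining_min_durations=[3, 4, 2], deadline=20):
        print("terminate workflow instance early")  # 12 + 9 > 20, so the constraint can no longer be met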
APA, Harvard, Vancouver, ISO, and other styles
37

Mück, Alexander. "The standard model in 5D : theoretical consistency and experimental constraints." Doctoral thesis, 2004. https://nbn-resolving.org/urn:nbn:de:bvb:20-opus-10591.

Full text
Abstract:
The four-dimensional Minkowski space is known to be a good description for space-time down to the length scales probed by the latest high-energy experiments. Nevertheless, there is the viable and exciting possibility that additional space-time structure will be observable in the next generation of collider experiments. Hence, we discuss different extensions of the standard model of particle physics with an extra dimension at the TeV scale. We assume that some of the gauge and Higgs bosons propagate in one additional spatial dimension, while matter fields are confined to a four-dimensional subspace, the usual Minkowski space. After compactification on an S^1/Z_2 orbifold, an effective four-dimensional theory is obtained in which towers of Kaluza-Klein (KK) modes, in addition to the standard model fields, reflect the higher-dimensional structure of space-time. The models are elaborated from the 5D Lagrangian to the Feynman rules of the KK modes. Special attention is paid to an appropriate generalization of the R_ξ gauge and the interplay between spontaneous symmetry breaking and compactification. Confronting the observables in 5D standard model extensions with combined precision measurements at the Z-boson pole and the latest data from LEP2, we constrain the possible size R of the extra dimension experimentally. A multi-parameter fit of all relevant input parameters leads to bounds for the compactification scale M=1/R in the range 4-6 TeV at the 2 sigma confidence level and shows how the mass of the Higgs boson is correlated with the size of an extra dimension. Considering a future linear e+e- collider, we outline the discovery potential for an extra dimension using the proposed TESLA specifications as an example. As a consistency check for the various models, we analyze Ward identities and the gauge boson equivalence theorem in W-pair production and find that gauge symmetry is preserved by a complex interplay of the Kaluza-Klein modes. In this context, we point out the close analogy between the traditional Higgs mechanism and mass generation for gauge bosons via compactification. Beyond tree level, the higher-dimensional models studied extensively in the literature and in the first part of this thesis have to be extended. We modify the models by the inclusion of brane kinetic terms, which are required as counterterms. Again, we derive the corresponding 4D theory for the KK towers, paying special attention to gauge fixing and spontaneous symmetry breaking. Finally, the phenomenological implications of the new brane kinetic terms are investigated in detail.
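For orientation (a textbook relation, not a formula quoted from the thesis): compactifying a 5D field of mass $m_0$ on an $S^1/Z_2$ orbifold of radius $R$ yields a Kaluza-Klein tower with masses $m_n^2 = m_0^2 + n^2/R^2$, $n = 0, 1, 2, \dots$, so the compactification scale $M = 1/R$ sets the spacing of the modes that the precision data constrain.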
APA, Harvard, Vancouver, ISO, and other styles
38

Mück, Alexander [Verfasser]. "The standard model in 5D : theoretical consistency and experimental constraints / vorgelegt von Alexander Mück." 2004. http://d-nb.info/974408107/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Szpak, Zygmunt Ladyslaw. "Constrained parameter estimation in multiple view geometry." Thesis, 2013. http://hdl.handle.net/2440/82702.

Full text
Abstract:
Multiple view geometry is a branch of computer vision devoted entirely to the study of the relationship between images generated from a fixed three-dimensional scene. Thanks to the body of knowledge generated in this domain, some of the most exciting developments in navigation have recently been realised. Google's release of Street View maps is the most remarkable example. Currently there is a growing demand for new insight and knowledge originating from multiple view geometry, as two of the most popular technology companies, Google and Apple, embark on a mission to generate three-dimensional maps. The research conducted in this thesis makes a direct contribution to two specific problems that arise frequently in the context of multiple view geometry: homography estimation and ellipse fitting. A homography is used to establish a relationship between two images of a scene whenever the scene consists of a flat surface. If the scene consists of several flat surfaces, such as walls of buildings in urban environments, then multiple homographies are required to adequately represent the relationship between a pair of images. But when multiple homographies are required, computer vision practitioners typically estimate homographies separately. This thesis demonstrates that multiple homographies must not be estimated separately, because additional inter-homography constraints need to be satisfied in order for a collection of homographies to accurately reflect the three-dimensional geometry of the scene. This thesis offers a comprehensive account of a variety of subtleties that arise in the estimation of multiple homographies, and presents detailed novel algorithms for fulfilling the estimation task. A central contribution is the development of a new framework for jointly estimating multiple homographies. The new framework leads to considerably more accurate homography estimates than previous approaches. The second major contribution of this thesis relates to another frequently encountered task in multiple view geometry: ellipse fitting. Recently many new cost functions promising unbiasedness, consistency or hyper-accuracy have been reported to improve the state of the art in fitting ellipses to data. Unfortunately, the new cost functions have not been substantiated with thorough experimental comparisons. This thesis offers an extensive evaluation of both new and old ellipse fitting methods with the aid of comprehensive simulations. The findings suggest that there is not much difference between the newer and more established estimators. There is, however, a significant difference between the sole estimator that guarantees an ellipse fit, and other estimators which are prone to occasionally producing hyperbolas. The estimator that guarantees an ellipse fit is significantly less accurate. To remedy this undesirable discovery, a new ellipse estimator is proposed that shares a similar statistical accuracy to the unbiased, consistent or hyper-accurate estimators, but unlike all of these, still guarantees an ellipse fit.
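For readers unfamiliar with the object being estimated, the sketch below (a generic illustration of ours, unrelated to the joint estimation framework proposed in the thesis; the matrix and points are invented) shows how a 3x3 homography H maps image points between two views of a planar surface:

    import numpy as np

    def apply_homography(H, pts):
        """pts: (n, 2) array of image points; returns their images under H."""
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coordinates
        mapped = pts_h @ H.T
        return mapped[:, :2] / mapped[:, 2:3]             # divide out the projective scale

    H = np.array([[1.0, 0.1,   5.0],
                  [0.0, 1.2,  -3.0],
                  [0.0, 0.001, 1.0]])
    print(apply_homography(H, np.array([[10.0, 20.0], [0.0, 0.0]])))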
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2013
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Jing. "Functional Principal Component Analysis for Discretely Observed Functional Data and Sparse Fisher’s Discriminant Analysis with Thresholded Linear Constraints." 2016. http://scholarworks.gsu.edu/math_diss/35.

Full text
Abstract:
We propose a new method to perform functional principal component analysis (FPCA) for discretely observed functional data by solving successive optimization problems. The new framework can be applied to both regularly and irregularly observed data, and to both dense and sparse data. Our method does not require estimates of the individual sample functions or the covariance functions. Hence, it can be used to analyze functional data with multidimensional arguments (e.g. random surfaces). Furthermore, it can be applied to many processes and models with complicated or nonsmooth covariance functions. In our method, smoothness of eigenfunctions is controlled by directly imposing roughness penalties on eigenfunctions, which makes it more efficient and flexible to tune the smoothness. Efficient algorithms for solving the successive optimization problems are proposed. We provide the existence and characterization of the solutions to the successive optimization problems. The consistency of our method is also proved. Through simulations, we demonstrate that our method performs well in the cases with smooth sample curves, with discontinuous sample curves and nonsmooth covariance, and with sample functions having two-dimensional arguments (random surfaces), respectively. We apply our method to classification problems of retinal pigment epithelial cells in eyes of mice and to longitudinal CD4 count data. In the second part of this dissertation, we propose a sparse Fisher's discriminant analysis method with thresholded linear constraints. Various regularized linear discriminant analysis (LDA) methods have been proposed to address the problems of LDA in high-dimensional settings. Asymptotic optimality has been established for some of these methods when there are only two classes. A difficulty in the asymptotic study of multiclass classification is that in the two-class case the classification boundary is a hyperplane and an explicit formula for the classification error exists, whereas in the multiclass case the boundary is usually complicated and no explicit formula for the error generally exists. Another difficulty in proving the asymptotic consistency and optimality of sparse Fisher's discriminant analysis is that the covariance matrix is involved in the constraints of the optimization problems for higher-order components, and it is not easy to estimate a general high-dimensional covariance matrix. Thus, we propose a sparse Fisher's discriminant analysis method which avoids the estimation of the covariance matrix, and we provide asymptotic consistency results and the corresponding convergence rates for all components. To prove the asymptotic optimality, we provide an asymptotic upper bound for a general linear classification rule in the multiclass case, which is applied to our method to obtain the asymptotic optimality and the corresponding convergence rate. In the special case of two classes, our method achieves the same as or better convergence rates compared to the existing method. The proposed method is applied to multivariate functional data with wavelet transformations.
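As a rough, hedged sketch of the kind of computation involved in the first part (our own simplification on a fixed grid, not the thesis's successive-optimization algorithm; the names, penalty weight and simulated curves are assumptions), a roughness penalty on the leading component can be folded into a generalised eigenproblem:

    import numpy as np
    from scipy.linalg import eigh

    def penalized_first_component(S, alpha):
        """Leading eigenvector of S v = lambda (I + alpha * D'D) v, with D a second-difference matrix."""
        p = S.shape[0]
        D = np.diff(np.eye(p), n=2, axis=0)      # discrete roughness (second differences)
        B = np.eye(p) + alpha * D.T @ D
        _, vecs = eigh(S, B)                     # generalised symmetric eigenproblem
        return vecs[:, -1]                       # eigenvector of the largest eigenvalue

    grid = np.linspace(0.0, 1.0, 50)
    curves = np.array([np.sin(2 * np.pi * grid) + 0.3 * np.random.randn(50) for _ in range(100)])
    S = np.cov(curves, rowvar=False)
    phi = penalized_first_component(S, alpha=1e-2)   # smoothed leading component on the grid
    print(phi.shape)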
APA, Harvard, Vancouver, ISO, and other styles
41

Wen, Chien-yu, and 溫建育. "The Construction and Consistency Validation Method for Current Reality Tree of Theory of Constraints - a Case of Automotive Components Industry." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/6985ea.

Full text
Abstract:
Master's thesis
Minghsin University of Science and Technology (明新科技大學)
Master's Program, Department of Industrial Engineering and Management
101 (ROC calendar)
Many methods can be applied to improve the operating performance of enterprises, but they may be limited to partial improvements and end up as merely symptomatic solutions. Dr. Goldratt's Thinking Process helps enterprises identify the core problems in the organizational system and establish overall solutions, using its five major tools and rigorous causal logic diagrams from an overall perspective, in order to improve the enterprise's competitiveness. So far, the Thinking Process has been widely applied to help organizations analyze problems and make improvements. In this process, whether the Current Reality Tree is constructed completely affects the subsequent proposal of solutions. Currently, there is no complete method for collecting the Undesirable Effects of the system and constructing a Current Reality Tree. As a result, the outcome is restricted to the available information and the subjective cognition of the researchers, which means that other problems in the system may be ignored and leaves a gap between the Current Reality Tree of the system and the actual situation. This study proposes an effective Consistency Validation Method for the process of constructing a Current Reality Tree in the Thinking Process. Covering the importance of the selected undesirable effects, data completeness, problem existence and the rationality of the causal logic, the Consistency Validation Method can fully reflect the problems observed at each level of the organizational system and truly present the opinions of all participants. Lastly, this study takes an auto parts manufacturer in the Hsinchu Industrial Park as the study subject to verify whether the method proposed in this study is feasible and whether its results can reach the consensus of all employees in the case company.
APA, Harvard, Vancouver, ISO, and other styles
42

"Consistency techniques for linear global cost functions in weighted constraint satisfaction." 2012. http://library.cuhk.edu.hk/record=b5549068.

Full text
Abstract:
The solving of Weighted CSP (WCSP) with global cost functions relies on powerful consistency techniques, but enforcing these consistencies on global cost functions is not a trivial task. Lee and Leung suggest that a global cost function can be used practically if we can find its minimum cost and perform projections/extensions on it in polynomial time, and at the same time projections and extensions should not destroy those conditions. However, there are many useful cost functions with no known polynomial time algorithms to compute the minimum costs yet.
We propose a special class of global cost functions which can be modeled as integer linear programs, called polynomially linear projection-safe (PLPS) cost functions. We show that their minimum cost can be computed by integer programming and that this property is unaffected by projections/extensions. By linear relaxation we can avoid the possibly NP-hard computation needed to solve the integer programs, since an approximation of the actual minimum cost can be obtained to serve as a good lower bound when enforcing the relaxed forms of common consistencies.
We show the benefits of using conjunctions of PLPS cost functions empirically in terms of runtime. We introduce integral polynomially linear projection-safe (IPLPS) cost functions as a subclass of PLPS cost functions which allows us to characterize the benefits of conjoining them. Given a standard WCSP consistency α, we give theorems showing that maintaining relaxed α on a conjunction of IPLPS cost functions is stronger than maintaining α on the individual cost functions. A useful application of our method is on those IPLPS global cost functions whose minimum cost computations are tractable and yet those for their conjunctions are not. We show that an important subclass of flow-based projection-safe and polynomially decomposable cost functions falls into this category.
Experiments are conducted to demonstrate the feasibility and efficiency of our framework. We observe orders-of-magnitude improvements in runtime and search space by using conjunctions of PLPS and IPLPS cost functions with relaxed consistencies, compared with the existing approaches.
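For intuition only, a toy example (ours, not from the thesis) of the lower-bounding idea: the linear relaxation of an integer program is cheap to solve and never exceeds the integer optimum, so it can back a relaxed consistency.

    from scipy.optimize import linprog

    # Minimise x + y subject to 2x + 2y >= 3 with x, y in {0, 1} in the integer problem;
    # the relaxation lets x, y range over [0, 1].
    res = linprog(c=[1, 1], A_ub=[[-2, -2]], b_ub=[-3], bounds=[(0, 1), (0, 1)], method="highs")
    print(res.fun)  # 1.5: a valid lower bound on the 0/1 optimum, which is 2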
Shum, Yu Wai.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2012.
Includes bibliographical references (leaves 87-92).
Abstracts also in Chinese.
Table of contents (page numbers omitted):
Chapter 1: Introduction
  1.1 Weighted Constraint Satisfaction Problems
  1.2 Motivation and Goal
  1.3 Outline of the Thesis
Chapter 2: Related Work
  2.1 Soft Constraint Frameworks
  2.2 Integer Linear Programming
  2.3 Global Cost Functions in WCSP
Chapter 3: Background
  3.1 Weighted Constraint Satisfaction Problems
    3.1.1 Branch and Bound Search
    3.1.2 Local Consistencies in WCSP
    3.1.3 Global Cost Functions
  3.2 Integer Linear Programming
Chapter 4: Polynomially Linear Projection-Safe Cost Functions
  4.1 Non-tractable Global Cost Functions in WCSPs
  4.2 Polynomially Linear Projection-Safe Cost Functions
  4.3 Relaxed Consistencies on Polynomially Linear Projection-Safe Cost Functions
  4.4 Conjoining Polynomially Linear Projection-Safe Cost Functions
  4.5 Modeling Global Cost Functions as Polynomially Linear Projection-Safe Cost Functions
    4.5.1 The SOFT SLIDINGSUM^dec Cost Function
    4.5.2 The SOFT EGCC^var Cost Function
    4.5.3 The SOFT DISJUNCTIVE/CUMULATIVE Cost Function
  4.6 Implementation Issues
  4.7 Experimental Results
    4.7.1 Generalized Car Sequencing Problem
    4.7.2 Magic Series Problem
    4.7.3 Weighted Tardiness Scheduling Problem
Chapter 5: Integral Polynomially Linear Projection-Safe Cost Functions
  5.1 Integral Polynomially Linear Projection-Safe Cost Functions
  5.2 Conjoining Global Cost Functions as IPLPS
  5.3 Experimental Results
    5.3.1 Car Sequencing Problem
    5.3.2 Examination Timetabling Problem
    5.3.3 Fair Scheduling
    5.3.4 Comparing the WCSP Approach with the Integer Linear Programming Approach
Chapter 6: Conclusions
  6.1 Contributions
  6.2 Future Work
Bibliography
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Tzu-Fu, and 王咨富. "Cooperative Data Caching in MANET with Data Consistency Constraint." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/70607412581054273361.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology (國立臺灣科技大學)
Department of Information Management
96 (ROC calendar)
Data caching is used to reduce energy consumption and response time in mobile computing environments, but it is a challenge to maintain the consistency of the cached data items with respect to the data items stored in the data server. This is especially true in a MANET, which features multi-hop communication between mobile nodes and mobility of those nodes: in a MANET a mobile node needs to go through many intermediate nodes to reach the data server and refresh its cache, while in a one-hop mobile computing environment a mobile node can reach the data server with one hop of communication, which simplifies the task of maintaining cache consistency. Traditionally, a broadcast approach is used to search for a valid copy of a cached data item in a MANET. Unfortunately, broadcast may consume immense energy and cause the broadcast storm problem. In this thesis, we propose a tree-based data caching scheme in which the data items required by a mobile client are cached along a pre-scheduled path toward the data server. When the mobile client requires a data item, it searches for a valid copy along the pre-scheduled path. The tree-based data caching scheme uses unicast to search for valid copies, which significantly reduces the energy consumption and the response time of a mobile client. Our experimental results show that the tree-based scheme outperforms the existing schemes in terms of the response time, the number of packets required and the total energy consumption in answering a query.
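A toy model of the lookup (our own sketch, not the thesis's protocol; the node structure, names and expiry times are invented): the client unicasts along its pre-scheduled path and stops at the first node holding a still-consistent copy.

    def lookup(path_to_server, item, now):
        """path_to_server: caches (dicts item -> (value, expiry)) ordered from the client towards the server."""
        for hops, cache in enumerate(path_to_server):
            entry = cache.get(item)
            if entry and entry[1] > now:            # cached and still valid
                return entry[0], hops
        return None, len(path_to_server)            # fall through to the data server itself

    client_path = [{"d1": ("v0", 5)}, {}, {"d1": ("v1", 20)}]
    print(lookup(client_path, "d1", now=10))        # ('v1', 2): nearest valid copy, reached by unicast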
APA, Harvard, Vancouver, ISO, and other styles
44

Jheng, Hao-Huei, and 鄭皓徽. "Graphical Consistency Constrained Analysis of AHP Judgment Matrices." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/75904349205289562826.

Full text
Abstract:
Master's thesis
Southern Taiwan University of Science and Technology (南台科技大學)
Graduate Institute of Industrial Management
97 (ROC calendar)
The analytic hierarchy process (AHP) is one of the most commonly used multi-attribute decision analysis techniques, because its implementation steps resemble the analysis steps commonly used by human decision makers. It is even more popular in cases where quantitative and qualitative attributes are combined, for example in strategic decision making and risk management. In this study, we start from the meaning of the judgment matrix, point out that its coefficients are perturbed measurement data of a preference structure, and then show that Saaty's eigenvector method for judgment matrix analysis has the following theoretical weaknesses: (1) the calculation of the eigenvector does not accord with the essential meaning of a judgment matrix; (2) the eigenvector method cannot give a sufficient analysis of the effect of experimental error on the priority vector; (3) its assumption that the perturbation in the judgment matrix is small enough may not hold in practical applications. In addition, this method cannot take into consideration the ordering relations of priority weights found in practice or derived from the matrix. In contrast, methods based on statistical regression not only can take these ordering relations into consideration but also have nice properties in decision theory. Therefore, analyzing the judgment matrix by regression is more appropriate. Moreover, since the judgment matrix is used to represent a preference structure, it can be replaced by a fuzzy relation. Using the cut sets of the associated fuzzy relation, we define the graphical consistency and the graphical consistency constraints of a judgment matrix; we then study the theoretical properties of the logarithmic least squares method and the goal programming method with graphical consistency constraints, and compare these two methods with the most commonly used methods.
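For reference, the unconstrained logarithmic least squares estimate mentioned above has a simple closed form, the normalised geometric means of the rows of the judgment matrix; the sketch below (ours, with an invented 3x3 matrix) shows that baseline, on top of which the thesis adds graphical consistency constraints:

    import numpy as np

    def llsm_weights(A):
        """Priority weights from the logarithmic least squares (geometric mean) method."""
        gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])   # row geometric means
        return gm / gm.sum()

    A = np.array([[1.0, 3.0, 5.0],
                  [1 / 3.0, 1.0, 2.0],
                  [1 / 5.0, 1 / 2.0, 1.0]])
    print(llsm_weights(A))   # priority weights; the first criterion dominates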
APA, Harvard, Vancouver, ISO, and other styles
45

Gaspar, Rui. "Consistent... me? : barriers and constraints on proenvironmental behaviors." Doctoral thesis, 2009. http://hdl.handle.net/10451/966.

Full text
Abstract:
Doctoral thesis, Psychology (Social Psychology), Universidade de Lisboa, Faculdade de Psicologia e de Ciências da Educação, 2010
Research on environmental behavior continues not to answer the question: why don't people behave the way we want them to? Or, put differently, what are the barriers and constraints on proenvironmental behaviors? To address this, a model is proposed, and within it habit is studied as a barrier, being defined as goal-directed automatic behavior that is mentally represented and can be triggered by environmental cues (e.g. Aarts & Dijksterhuis, 2000). Accordingly, it shares some characteristics with other types of mental representations such as stereotypes and attitudes, namely its dynamism, which results from the interaction between the situation and cognitive processes. Based on the theory of goal systems (Kruglanski et al., 2002) and the activation rules for mental constructs (Higgins & Brendl, 1995), habits can also be considered to result from the interaction between two sources of activation: context applicability (in our studies defined as the perceived applicability of the habit to the means available to attain the goal in the decision context) and cognitive accessibility (through goal priming or chronicity). We manipulated these in a number of online shopping simulations involving the purchase of organic vs. non-organic products or habitual vs. non-habitual products. Results demonstrate a perceived applicability effect, with more habitual (studies 2 and 3) or non-organic (study 1) products chosen in a familiar than in a new context, and greater consistency in non-organic choice within three choices in the same list (study 1). Strong-habit participants (high chronic accessibility) consistently choose the habitual product even when the context changes (study 2). Goal priming does not show the expected effect (studies 2 and 3), which may indicate the presence of goal activation suppression effects (study 2). Finally, results indicate that habit can resist the effect of implementation intentions (study 4) or be "broken" by certain context changes (studies 5a and 5b).
Funded by the Fundação para a Ciência e Tecnologia, the III Quadro Comunitário de Apoio, the European Social Fund and national funds from the Ministério da Ciência, Tecnologia e Ensino Superior (SFRH/BD/17719/2004)
APA, Harvard, Vancouver, ISO, and other styles
46

Foreman-Mackey, Daniel. "A Fully Self-Consistent Constraint on the Mass of M31 and the Local Group." Thesis, 2010. http://hdl.handle.net/1974/5998.

Full text
Abstract:
We present the first fully self-consistent, axisymmetric, dynamical model of the Andromeda galaxy (M31). We constrain the physical parameters of the model with datasets on all radial scales: the bulge projected velocity dispersion, rotation curve, surface brightness profile, and the kinematics of globular clusters and satellite galaxies. Combining these highly heterogeneous datasets into a single self-consistent analysis is natural in the framework of Bayesian inference. Using a geometric argument, we also infer the three-dimensional velocity of M31 relative to the Milky Way. From this orbit, we constrain the total mass of the Local Group by the "timing argument". We find that the virial mass of M31 is $M_\mathrm{M31,vir} = 5.0^{+2.2}_{-1.7} \times 10^{12} \, M_\odot$ and the mass of the Local Group is $M_\mathrm{LG} = 8.8^{+8.0}_{-4.2} \times 10^{12} \, M_\odot$. We conclude that the large uncertainties in our results are due primarily to the small sample size at large radii and that either a significantly larger sample or unjustifiably informative priors are necessary to improve the constraint.
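For context, the timing argument referred to above is usually written (a standard textbook form, not the thesis's full treatment) by placing the two galaxies on a radial Kepler orbit, $r = a(1 - \cos\eta)$ and $t = \sqrt{a^3 / (G M_\mathrm{LG})}\,(\eta - \sin\eta)$, so that the present separation, relative velocity and age of the Universe together fix the total mass $M_\mathrm{LG}$.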
Thesis (Master, Physics, Engineering Physics and Astronomy) -- Queen's University, 2010-08-27
APA, Harvard, Vancouver, ISO, and other styles
47

Barroso, Viviane Setti. "A consistent linear two-dimensional mathematical model for thin two-layer plates with partial shear interaction." Master's thesis, 2020. http://hdl.handle.net/10316/92236.

Full text
Abstract:
Integrated Master's dissertation in Civil Engineering presented to the Faculdade de Ciências e Tecnologia
This dissertation presents a consistent derivation, from three-dimensional linear elasticity, of a two-dimensional mathematical model describing the bending and in-plane stretching behaviours, under a general system of quasi-static distributed loads, of thin two-layer plates with partial shear interaction. The following key assumptions are made: (i) each layer, when considered separately, behaves as a Kirchhoff plate; (ii) the interlayer (with non-zero thickness), when considered separately, behaves as a transverse shear-only Mindlin plate; (iii) each layer is bonded to the interlayer in such a way that both sliding and detachment are prevented. The dimensional reduction stage of the derivation, from three spatial dimensions to just two, is accomplished by means of Podio-Guidugli's method of internal constraints. This is followed by a process of assembly or aggregation, in which the continuity of displacements and of certain stress components across each layer/interlayer interface is enforced. A problem with a closed-form analytical solution illustrates the application of the two-dimensional model and its capabilities. In particular, the solution is proven to be continuous across the whole range of zero, partial and full interaction between the layers. The problem is then generalized and a Navier-type solution is obtained. The results are compared with those reported in the literature. Possible applications of the model include the analysis of laminated glass plates under quasi-static short-term loads in service conditions and within a limited temperature range.
APA, Harvard, Vancouver, ISO, and other styles
