Theses on the topic "STATIC CONSTRAINTS"

Consult the 50 best theses for your research on the topic "STATIC CONSTRAINTS".

You can also download the full text of each academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organise your bibliography correctly.

1

Marlowe, Laura C. "A Static Scheduler for critical timing constraints". Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/23406.

Full text
Abstract
Approved for public release; distribution is unlimited
The Computer Aided Prototyping System (CAPS) and the Prototype System Description Language (PSDL) represent a pioneering effort in the field of software development. The implementation of CAPS will enable software engineers to automatically validate design specifications and functional requirements early in the design of a software system through the development and execution of a prototype of the system under construction. Execution of the prototype is controlled by an Execution Support System (ESS) within the framework of CAPS. One of the critical elements of the ESS is the Static Scheduler which extracts critical timing constraints and precedence information about operators from the PSDL source that describes the prototype. The Static Scheduler then uses this information to determine whether a feasible schedule can be built, and if it can, constructs the schedule for operator execution within the prototype.
http://archive.org/details/staticschedulerf00marl
Lieutenant Commander, United States Navy
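The CAPS/PSDL static scheduler itself is not reproduced here. Purely as an illustration of the kind of feasibility check such a scheduler performs, the following Python sketch (operator names, periods and maximum execution times are invented) tests whether a set of periodic operators fits on one processor, using a utilization test and a simple earliest-deadline-first pass over the hyperperiod to lay out a static schedule.

```python
from math import gcd
from functools import reduce

# Hypothetical periodic operators: name -> (period, maximum execution time), in ms.
operators = {"sensor_poll": (40, 10), "filter": (40, 5), "display": (100, 20)}

def hyperperiod(periods):
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

def feasible_schedule(ops):
    """Return a static (start_time, operator) schedule over one hyperperiod using a
    non-preemptive earliest-deadline-first pass, or None if a deadline is missed."""
    if sum(met / period for period, met in ops.values()) > 1.0:
        return None                          # utilization already exceeds one processor
    h = hyperperiod([p for p, _ in ops.values()])
    jobs = [(release + period, release, name, met)   # (deadline, release, name, met)
            for name, (period, met) in ops.items()
            for release in range(0, h, period)]
    jobs.sort()                              # earliest deadline first
    schedule, clock = [], 0
    for deadline, release, name, met in jobs:
        start = max(clock, release)
        if start + met > deadline:
            return None                      # operator cannot meet its timing constraint
        schedule.append((start, name))
        clock = start + met
    return schedule

plan = feasible_schedule(operators)
print("feasible" if plan else "infeasible")
print(plan)
```

This check is deliberately conservative (it may reject schedulable sets); a real static scheduler would also honour the precedence constraints extracted from the PSDL source.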
2

Bhavnagarwala, Azeez Jenúddin. "Voltage scaling constraints for static CMOS logic and memory circuits". Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/15401.

Full text
3

Abbas, Abdullah. "Static analysis of semantic web queries with ShEx schema constraints". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM064/document.

Full text
Abstract
Data structured in the Resource Description Framework (RDF) are increasingly available in large volumes. This leads to a major need and research interest in novel methods for query analysis and compilation for making the most of RDF data extraction. SPARQL is the widely used and well supported standard query language for RDF data. In parallel to query language evolutions, schema languages for expressing constraints on RDF datasets also evolve. Shape Expressions (ShEx) are increasingly used to validate RDF data and to communicate expected graph patterns. Schemas in general are important for static analysis tasks such as query optimisation and containment. Our purpose is to investigate the means and methodologies for SPARQL query static analysis and optimisation in the presence of ShEx schema constraints. Our contribution is mainly divided into two parts. In the first part we consider the problem of SPARQL query containment in the presence of ShEx constraints. We propose a sound and complete procedure for the problem of containment with ShEx, considering several SPARQL fragments. In particular, our procedure handles OPTIONAL query patterns, which turn out to be an important feature to study together with schemas. We provide complexity bounds for the containment problem with respect to the language fragments considered. We also propose an alternative method for SPARQL query containment with ShEx by reduction to First Order Logic satisfiability, which allows a larger SPARQL fragment to be considered than with the first method. This is the first work addressing SPARQL query containment in the presence of ShEx constraints. In the second part of our contribution we propose an analysis method to optimise the evaluation of conjunctive SPARQL queries on RDF graphs by taking advantage of ShEx constraints. The optimisation is based on computing and assigning ranks to query triple patterns, dictating their order of execution. The presence of intermediate joins between the query triple patterns is the reason why ordering is important in increasing efficiency. We define a set of well-formed ShEx schemas that possess interesting characteristics for SPARQL query optimisation. We then develop our optimisation method by exploiting information extracted from a ShEx schema. We finally report on evaluation results showing the advantages of applying our optimisation on top of an existing state-of-the-art query evaluation system.
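The thesis's ranking procedure is not reproduced here. As a rough Python illustration of the general idea, the sketch below orders the triple patterns of a conjunctive query by estimated selectivity, with invented predicate names and hand-made cardinality estimates standing in for the information a ShEx schema would supply.

```python
# Hypothetical per-predicate cardinality estimates, e.g. derived from shape
# definitions and occurrence bounds in a ShEx schema (all values invented).
estimated_cardinality = {"ex:ssn": 1, "ex:worksFor": 3, "ex:knows": 50}

# A conjunctive SPARQL query given as a list of triple patterns ("?x" = variable).
query = [("?p", "ex:knows", "?q"),
         ("?p", "ex:worksFor", "?org"),
         ("?p", "ex:ssn", '"12345"')]

def rank(pattern, bound_vars):
    subject, predicate, obj = pattern
    cost = estimated_cardinality.get(predicate, 1000)   # unknown predicates rank last
    # Patterns whose subject/object are constants or already-bound variables are cheaper.
    bound_terms = sum(term in bound_vars or not term.startswith("?")
                      for term in (subject, obj))
    return cost / (1 + bound_terms)

def order_patterns(patterns):
    """Greedily pick the cheapest remaining pattern given the variables bound so far."""
    remaining, ordered, bound_vars = list(patterns), [], set()
    while remaining:
        best = min(remaining, key=lambda p: rank(p, bound_vars))
        remaining.remove(best)
        ordered.append(best)
        bound_vars.update(term for term in best if term.startswith("?"))
    return ordered

for triple in order_patterns(query):
    print(triple)
```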
4

Grayland, Andrew. "Automated static symmetry breaking in constraint satisfaction problems". Thesis, University of St Andrews, 2011. http://hdl.handle.net/10023/1718.

Full text
Abstract
Variable symmetries in constraint satisfaction problems can be broken by adding lexicographic ordering constraints. Existing general methods of generating such sets of ordering constraints can produce a huge number of additional constraints. This adds an unacceptable overhead to the solving process. Methods exist by which this large set of constraints can be reduced to a much smaller set automatically, but their application is also prohibitively costly. In contrast, this thesis takes a bottom-up approach to generating symmetry breaking constraints. This involves examining some commonly-occurring families of mathematical groups and deriving a general formula to produce a minimal set of ordering constraints which are sufficient to break all of the symmetry that each group describes. In some cases it is known that no manageably sized set of constraints exists to break all symmetries. One example of this occurs with matrix row and column symmetries. In such cases, incomplete symmetry breaking has been used to great effect. Double lex is a commonly used incomplete symmetry breaking technique for row and column symmetries. This thesis also describes another similar method which compares favourably to double lex. The general formulae investigated are used as building blocks to generate small sets of ordering constraints for more complex groups, constructed by combining smaller groups. Through the utilisation of graph automorphism tools and the groups and permutations software GAP, we provide a method of defining variable symmetries in a problem as a group. Where this group can be described as the product of smaller groups, with known general formulae, we can construct a minimal set of ordering constraints for that problem automatically. In summary, this thesis provides the theoretical background necessary to apply efficient static symmetry breaking to constraint satisfaction problems. It also goes further, describing how this process can be automated to remove the necessity of having an expert CP practitioner, thus opening the field to a larger number of potential users.
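As a toy illustration of the kind of ordering constraints discussed above (not the thesis's own formulae), the Python sketch below generates the double-lex constraints for an n x m matrix of decision variables: adjacent rows and adjacent columns are each required to be lexicographically ordered, which breaks much, though not all, of the row/column symmetry.

```python
def double_lex_constraints(n_rows, n_cols):
    """Return symbolic constraints of the form ('lex_leq', vector_a, vector_b),
    meaning vector_a must be lexicographically <= vector_b."""
    var = lambda i, j: f"x[{i}][{j}]"
    constraints = []
    for i in range(n_rows - 1):                          # adjacent rows
        row_a = [var(i, j) for j in range(n_cols)]
        row_b = [var(i + 1, j) for j in range(n_cols)]
        constraints.append(("lex_leq", row_a, row_b))
    for j in range(n_cols - 1):                          # adjacent columns
        col_a = [var(i, j) for i in range(n_rows)]
        col_b = [var(i, j + 1) for i in range(n_rows)]
        constraints.append(("lex_leq", col_a, col_b))
    return constraints

for c in double_lex_constraints(3, 3):
    print(c)
```

Only (n-1) + (m-1) lex constraints are posted for an n x m matrix, while the symmetry group contains up to n!·m! row/column permutations; this gap is exactly the incompleteness trade-off discussed in the abstract.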
5

Niedert, Michael D. "Static-task scheduling incorporating precedence constraints and deadlines in a heterogeneous-computing environment". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA380969.

Full text
6

Kafle, Bishoksan. "Modeling assembly program with constraints. A contribution to WCET problem". Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/7968.

Full text
Abstract
Dissertation submitted for the degree of Master in Computational Logic
Model checking with program slicing has been successfully applied to compute the Worst Case Execution Time (WCET) of a program running on given hardware. This method lacks path feasibility analysis and suffers from the following problems: the model checker (MC) explores an exponential number of program paths irrespective of their feasibility, which limits the scalability of this method to multiple-path programs; and the witness trace returned by the MC corresponding to the WCET may not be feasible (executable), which may result in a solution that is not tight, i.e., it overestimates the actual WCET. This thesis complements the above method with path feasibility analysis and addresses these problems. To achieve this, we first validate the witness trace returned by the MC and generate test data if it is executable. For this we generate constraints over a trace and solve a constraint satisfaction problem. Experiments show that 33% of these traces (obtained while computing WCET on standard WCET benchmark programs) are infeasible. Second, we use a constraint solving technique to compute an approximate WCET solely based on the program (without taking into account the hardware characteristics), and suggest some feasible and probable worst-case paths which can produce the WCET. Each of these paths forms an input to the MC. A more precise WCET can then be computed on these paths using the above method; the maximum of all these is the WCET. In addition to this, we provide a mechanism to compute an upper bound of over-approximation for the WCET computed using the model checking method. This effort of combining a constraint solving technique with model checking takes advantage of their strengths and makes WCET computation scalable and amenable to hardware changes. We use our technique to compute WCET on standard benchmark programs from Mälardalen University and compare our results with results from the model checking method.
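As a minimal sketch of the program-only side of such an estimate (block names and cycle costs invented, no hardware model, no infeasible-path pruning), the following Python computes a structural WCET upper bound as the longest path through an acyclic control-flow graph; in the approach described above, a constraint solver would then be used to check whether the returned candidate path is actually executable.

```python
from functools import lru_cache

# Hypothetical acyclic control-flow graph: block -> (cost in cycles, successor blocks).
cfg = {
    "entry": (2, ["test"]),
    "test":  (1, ["then", "else"]),
    "then":  (10, ["join"]),
    "else":  (4, ["join"]),
    "join":  (3, []),
}

@lru_cache(maxsize=None)
def wcet(block):
    """Longest-path cost from `block` to any exit (a structural upper bound)."""
    cost, successors = cfg[block]
    return cost + max((wcet(s) for s in successors), default=0)

def worst_path(block):
    """Reconstruct one path achieving the bound, as a candidate for a feasibility check."""
    path = [block]
    while cfg[block][1]:
        block = max(cfg[block][1], key=wcet)
        path.append(block)
    return path

print("WCET upper bound:", wcet("entry"), "cycles")
print("candidate worst-case path:", " -> ".join(worst_path("entry")))
```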
7

Nelson, Andrew P. "Funqual: User-Defined, Statically-Checked Call Graph Constraints in C++". DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1848.

Full text
Abstract
Static analysis tools can aid programmers by reporting potential programming mistakes prior to the execution of a program. Funqual is a static analysis tool that reads C++17 code "in the wild" and checks that the function call graph follows a set of rules which can be defined by the user. This sort of analysis can help the programmer to avoid errors such as accidentally calling blocking functions in time-sensitive contexts or accidentally allocating memory in heap-sensitive environments. To accomplish this, we create a type system whereby functions can be given user-defined type qualifiers and where users can define their own restrictions on the call graph based on these type qualifiers. We demonstrate that this tool, when used with hand-crafted rules, can catch certain types of errors which commonly occur in the wild. We claim that this tool can be used in a production setting to catch certain kinds of errors in code before that code is even run.
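Funqual itself operates on C++17 source, so the fragment below is only a Python illustration of the underlying idea (function names, qualifiers and the single rule are invented): user-defined type qualifiers are attached to functions, and a rule forbids any function tagged `realtime` from transitively calling a function tagged `blocking`.

```python
# Hypothetical call graph and user-defined type qualifiers on functions.
call_graph = {
    "isr_handler": ["read_sample", "log_event"],
    "read_sample": [],
    "log_event":   ["write_file"],
    "write_file":  [],
}
qualifiers = {"isr_handler": {"realtime"}, "write_file": {"blocking"}}

# Rules of the form: functions qualified `source` may never (transitively) call
# functions qualified `target`.
RULES = [("realtime", "blocking")]

def reachable(fn, graph, seen=None):
    """All functions transitively callable from `fn`."""
    seen = set() if seen is None else seen
    for callee in graph.get(fn, []):
        if callee not in seen:
            seen.add(callee)
            reachable(callee, graph, seen)
    return seen

def check(graph, quals, rules):
    violations = []
    for source_q, target_q in rules:
        for fn, q in quals.items():
            if source_q in q:
                for callee in reachable(fn, graph):
                    if target_q in quals.get(callee, set()):
                        violations.append((fn, callee, f"{source_q} -> {target_q}"))
    return violations

for fn, callee, rule in check(call_graph, qualifiers, RULES):
    print(f"violation of rule ({rule}): {fn} transitively calls {callee}")
```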
8

Lu, Tingting. "Effects of Multimedia on Motivation, Learning and Performance: The Role of Prior Experience and Task Constraints". The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218660147.

Full text
9

Ungwattanapanit, Tanut [Verfasser], Horst [Akademischer Betreuer] Baier, Horst [Gutachter] Baier and Kai-Uwe [Gutachter] Bletzinger. "Optimization of Steered-Fibers Composite Stiffened Panels including Postbuckling Constraints handled via Equivalent Static Loads / Tanut Ungwattanapanit ; Gutachter: Horst Baier, Kai-Uwe Bletzinger ; Betreuer: Horst Baier". München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1152384082/34.

Full text
10

Saglam, Hueseyin. "A toolkit for static analysis of constraint logic programs". Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262739.

Full text
11

Barcenas, Patino Ismael. "Raisonnement automatisé sur les arbres avec des contraintes de cardinalité". Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00569058.

Full text
Abstract
Arithmetic constraints are widely used in formal languages such as regular expressions, tree grammars and regular paths. These constraints are used in the content models of type definitions (XML Schemas) to impose bounds on the number of occurrences of nodes. In query languages (XPath, XQuery), these constraints make it possible to select nodes having a bounded number of nodes reachable through a given path expression. Types and paths extended with counting constraints are the natural extension of their counting-free counterparts, which are already regarded as fundamental constructs in programming languages and type systems for XML. One of the major challenges in XML programming is to develop automated techniques that statically ensure correct typing and enable optimisations of programs manipulating XML data. To this end, it is necessary to solve certain reasoning tasks that involve constructs such as types and XPath expressions with counting constraints. In the near future, XML program compilers will have to solve basic problems such as subtyping in order to ensure at compile time that a program can never generate invalid documents at run time. This thesis studies logics able to express counting constraints on tree structures. It was recently shown that the mu-calculus on graphs, when extended with counting constraints restricted to immediate successor nodes, is undecidable. In this thesis, we show that, on finite trees, the logic with counting constraints is decidable in exponential time. Moreover, this logic provides counting operators along more general paths: it can express numerical constraints on the number of descendant or even ancestor nodes. We also present linear translations of XPath expressions and XML types featuring counting constraints into the logic.
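Deciding the logic studied in the thesis is far beyond a few lines of code; purely as a toy illustration of what a counting constraint on a finite tree asserts, the Python sketch below (node labels and the bound are invented) checks whether every node of a tree has at most k descendants carrying a given label.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def count_descendants(node, label):
    """Number of descendants of `node` whose label equals `label`."""
    return sum((child.label == label) + count_descendants(child, label)
               for child in node.children)

def satisfies_counting_constraint(root, label, k):
    """True iff every node of the tree has at most k descendants labelled `label`."""
    stack = [root]
    while stack:
        node = stack.pop()
        if count_descendants(node, label) > k:
            return False
        stack.extend(node.children)
    return True

# Invented example document tree.
tree = Node("article", [Node("section", [Node("figure"), Node("figure")]),
                        Node("section", [Node("figure")])])
print(satisfies_counting_constraint(tree, "figure", 2))   # False: the root has 3
```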
12

Guziolowski, Carito. "Analysis of Large-Scale Biological Networks with Constraint-Based Approaches over Static Models". PhD thesis, Université Rennes 1, 2010. http://tel.archives-ouvertes.fr/tel-00541903.

Full text
13

Dsouza, Michael Dylan. "Fast Static Learning and Inductive Reasoning with Applications to ATPG Problems". Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/51591.

Full text
Abstract
Relations among various nodes in the circuit, as captured by static and inductive invariants, have shown to have a positive impact on a wide range of EDA applications. Techniques such as boolean constraint propagation for static learning and assume-then-verify approach to reason about inductive invariants have been possible due to efficient SAT solvers. Although a significant amount of research effort has been dedicated to the development of effective invariant learning techniques over the years, the computation time for deriving powerful multi-node invariants is still a bottleneck for large circuits. Fast computation of static and inductive invariants is the primary focus of this thesis. We present a novel technique to reduce the cost of static learning by intelligently identifying redundant computations that may not yield new invariants, thereby achieving significant speedup. The process of inductive invariant reasoning relies on the assume-then-verify framework, which requires multiple iterations to complete, making it infeasible for cases with a large set of multi-node invariants. We present filtering techniques that can be applied to a diverse set of multi-node invariants to achieve a significant boost in performance of the invariant checker. Mining and reasoning about all possible potential multi-node invariants is simply infeasible. To alleviate this problem, strategies that narrow down the focus on specific types of powerful multi-node invariants are also presented. Experimental results reflect the promise of these techniques. As a measure of quality, the invariants are utilized for untestable fault identification and to constrain ATPG for path delay fault testing, with positive results.
Master of Science
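As a minimal sketch of the static-learning idea (the circuit is abstracted to an invented CNF over four signals; a real flow would run a SAT solver on the gate-level netlist), the Python fragment below assumes each literal in turn, runs Boolean constraint propagation, and records every forced literal as a learned two-node invariant.

```python
# Invented CNF relations among signals 1..4; a positive integer means the signal
# is 1, a negative integer means it is 0.
clauses = [(-1, 2), (-2, 3), (-3, -4)]   # 1 -> 2,  2 -> 3,  3 -> not 4

def propagate(assumption, cnf):
    """Boolean constraint propagation: return the set of literals implied by
    `assumption`, or None if a conflict is reached."""
    implied = {assumption}
    changed = True
    while changed:
        changed = False
        for clause in cnf:
            if any(lit in implied for lit in clause):
                continue                         # clause already satisfied
            unassigned = [lit for lit in clause
                          if lit not in implied and -lit not in implied]
            if not unassigned:
                return None                      # every literal falsified: conflict
            if len(unassigned) == 1:             # unit clause forces its last literal
                implied.add(unassigned[0])
                changed = True
    return implied

def learn_invariants(signals, cnf):
    """Learn invariants (a implies b) by assuming each literal and propagating."""
    learned = []
    for s in signals:
        for a in (s, -s):
            implied = propagate(a, cnf)
            if implied is None:
                learned.append(("constant", -a))          # a can never hold
            else:
                learned.extend(("implies", a, b) for b in implied if b != a)
    return learned

for invariant in learn_invariants([1, 2, 3, 4], clauses):
    print(invariant)
```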
14

Kusano, Markus Jan Urban. "Constraint-Based Thread-Modular Abstract Interpretation". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84399.

Full text
Abstract
In this dissertation, I present a set of novel constraint-based thread-modular abstract-interpretation techniques for static analysis of concurrent programs. Specifically, I integrate a lightweight constraint solver into a thread-modular abstract interpreter to reason about inter-thread interference more accurately. Then, I show how to extend the new analyzer from programs running on sequentially consistent memory to programs running on weak memory. Finally, I show how to perform incremental abstract interpretation, with and without the previously mentioned constraint solver, by analyzing only regions of the program impacted by a program modification. I also demonstrate, through experiments, that these new constraint-based static analyzers are significantly more accurate than prior abstract interpretation-based static analyzers, with lower runtime overhead, and that the incremental technique can drastically reduce runtime overhead in the presence of small program modifications.
Ph. D.
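A full abstract interpreter is out of scope here; the Python fragment below only sketches the thread-modular flavour of such an analysis on the interval domain. Each thread is abstracted to an invented transfer function mapping the interval the shared variable `x` may hold when read to the interval it may write back, and every thread is repeatedly re-analysed against the join of the other threads' writes (its interference).

```python
def join(a, b):
    """Interval join (convex hull); None is the empty interval."""
    if a is None:
        return b
    if b is None:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def shift(a, k):
    return None if a is None else (a[0] + k, a[1] + k)

# Invented threads over one shared variable x, initially 0.
transfer = {
    "t1": lambda x: (5, 5),        # t1 performs: x = 5
    "t2": lambda x: shift(x, 1),   # t2 performs: x = x + 1
}

def analyse(threads, init=(0, 0), rounds=10):
    """Thread-modular fixed point; a few rounds suffice for this example, while a
    real analyser would iterate to convergence and apply widening for loops."""
    writes = {t: None for t in threads}
    for _ in range(rounds):
        for name, f in threads.items():
            interference = init
            for other in threads:
                if other != name:
                    interference = join(interference, writes[other])
            writes[name] = join(writes[name], f(interference))
    reachable = init
    for name in threads:
        reachable = join(reachable, writes[name])
    return reachable

print(analyse(transfer))   # (0, 6): x stays within [0, 6] under any interleaving
```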
15

Dijkstra, Erik J. "Constrained Optimization for Prediction of Posture". Licentiate thesis, KTH, Mekanik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187488.

Full text
Abstract
The ability to stand still in one place is important in a variety of activities of daily living. For persons with motion disorders, orthopaedic treatment, which changes geometric or biomechanical properties, can improve the individual's posture and walking ability. Decisions on such treatment require insight in how posture and walking ability are affected, however, despite expectations based on experience, it is never a-priori known how a patient will react to a treatment. As this is very challenging to observe by the naked eye, engineering tools are increasingly employed to support clinical diagnostics and treatment planning. The development of predictive simulations allows for the evaluation of the effect of changed biomechanical parameters on the human biological system behavior and could become a valuable tool in future clinical decision making. In the first paper, we evaluated the use of the Zero Moment Point as a computationally inexpensive tool to obtain the ground reaction forces (GRFs) for normal human gait. The method was applied on ten healthy subjects walking in a motion analysis laboratory and predicted GRFs are evaluated against the simultaneously measured force plate data. Apart from the antero-posterior forces, GRFs are well-predicted and errors fall within the error ranges from other published methods. The computationally inexpensive method evaluated in this study can reasonably well predict the GRFs for normal human gait without using prior knowledge of common gait kinetics. The second manuscript addresses the complications in the creation and analysis of a posture prediction framework. The fmincon optimization function in MATLAB was used in conjunction with a musculoskeletal model in OpenSim. One clear local minimum was found in the form of a symmetric standing posture but perturbation analyses revealed the presence of many other postural configurations, each representing its own unique local minimum in the feasible parameter space. For human postural stance, this can translate to there being many different ways of standing without actually noticing a difference in the efforts required for these poses.

This work was financially supported by the Swedish Scientific Council (Vetenskapsrådet) grant no. 2010-9401-79187-68, the ProMobilia handicap foundation (ref. 13093), Sunnerdahls Handicap foundation (ansökan nr 11/14), and Norrbacka-Eugenia foundation (ansökan nr 218/15).
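As a back-of-the-envelope illustration of the Zero Moment Point relation used in the first paper (a single-rigid-body, sagittal-plane approximation with invented numbers, not the thesis's full musculoskeletal model), the Python sketch below computes the vertical ground reaction force and the ZMP location from centre-of-mass kinematics.

```python
G = 9.81   # gravitational acceleration, m/s^2

def grf_and_zmp(mass, com_x, com_x_acc, com_z, com_z_acc):
    """Single-body approximation:
       vertical GRF   Fz    = m * (z_ddot + g)
       ZMP location   x_zmp = x_com - z_com * x_ddot / (z_ddot + g)
    """
    fz = mass * (com_z_acc + G)
    x_zmp = com_x - com_z * com_x_acc / (com_z_acc + G)
    return fz, x_zmp

# Invented instant of a gait cycle: 70 kg subject, CoM at x = 0.05 m, height 0.9 m,
# accelerating forward at 0.4 m/s^2 and downward at 0.2 m/s^2.
fz, x_zmp = grf_and_zmp(mass=70.0, com_x=0.05, com_x_acc=0.4, com_z=0.9, com_z_acc=-0.2)
print(f"vertical GRF ~ {fz:.1f} N, ZMP at x ~ {x_zmp:.3f} m")
```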

16

Harth, Petr. "Nalezení fyzické polohy stanic v síti Internet pomocí měření přenosového zpoždění". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219773.

Full text
Abstract
This diploma thesis is concerned with the practical realization of the CBG (Constraint-Based Geolocation) algorithm, which is one of the IP (Internet Protocol) geolocation techniques. IP geolocation determines the location of a computer workstation on the basis of its IP address. The factors causing delays in data transfer are discussed first, followed by a discussion of the issue of measuring these delays. A detailed explanation of IP geolocation follows, where its context as well as the active geolocation techniques (techniques based on the delay measurements mentioned above) are described. After that, a brief description of the PlanetLab experimental network, which was used for measuring the geolocation techniques, is presented, followed by a section explaining the creation of reference points and targets, which are another necessary prerequisite for the practical realization of the method. Then the practical realization is explained in the form of the CBGfinder program, and its verification on the basis of artificial input data, along with an actual example of IP geolocation of a point in the Internet, is provided. Last but not least, the measurement results of the CBG algorithm are introduced, based on the analysis of the Bestline parameters of one of the PlanetLab nodes measured over a period of one month, followed by a discussion of the inaccuracy of the geographical position estimate and the computation speed. The cumulative distribution function as well as the kernel density estimation are also described. The final part of the thesis consists of a discussion of the measured results compared to the results of other geolocation techniques implemented by colleagues of the author of this diploma thesis. The results are compared on the basis of the average inaccuracy of the geographical position estimates and its median, while computation time, the cumulative distribution function and the kernel density estimation are also taken into regard.
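To make the CBG idea concrete, the Python sketch below uses invented landmark coordinates, delays and calibration constants (the real method calibrates per-landmark "bestlines" rather than a fixed propagation factor): each measured one-way delay is converted into a distance upper bound, and candidate locations are tested against the intersection of the resulting circles.

```python
from math import radians, sin, cos, asin, sqrt

SPEED_OF_LIGHT_KM_PER_MS = 299.792458
PROPAGATION_FACTOR = 2 / 3            # rough fibre/copper slowdown; assumed, not calibrated

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    (lat1, lon1), (lat2, lon2) = [(radians(a), radians(b)) for a, b in (p, q)]
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Invented landmarks: (lat, lon) and measured one-way delay to the target in ms.
landmarks = [((52.52, 13.40), 4.0),    # Berlin
             ((48.85, 2.35), 6.5),     # Paris
             ((50.08, 14.43), 2.5)]    # Prague

def within_all_constraints(candidate):
    """True if `candidate` lies inside every landmark's delay-derived circle."""
    for position, delay_ms in landmarks:
        max_distance = delay_ms * SPEED_OF_LIGHT_KM_PER_MS * PROPAGATION_FACTOR
        if haversine_km(position, candidate) > max_distance:
            return False
    return True

for name, coords in [("Vienna", (48.21, 16.37)), ("Lisbon", (38.72, -9.14))]:
    print(name, "feasible" if within_all_constraints(coords) else "excluded")
```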
17

Horák, Michael. "Určení polohy stanic v síti Internet pomocí přenosového zpoždění". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220305.

Full text
Abstract
This thesis covers the determination of the geographical location of a host in the Internet by measuring the end-to-end delay and implementing Constraint-Based Geolocation. I first go through the issue of delay in computer networks and ways to measure it. The next chapter describes a few ways to geolocate a host in the Internet, with emphasis on the CBG method. Another chapter is dedicated to describing a way to project spherical coordinates onto two-dimensional space, which has been used in the implementation of the geolocation method. The chapter about the implementation builds upon the facts given in previous chapters while the functions of the program, written in the Java programming language, are explained. Two similar geolocation methods were implemented. By comparing the results obtained from the implementation, a new method of geolocation is proposed and devised; it combines properties of both previous methods. The summary section presents the results of the implemented methods and their comparison to one of the source documents used in the creation of this thesis.
18

Hawkins, Penelope Anne. "Financial constraints and the small open economy". Thesis, University of Stirling, 2000. http://hdl.handle.net/1893/21628.

Full text
Abstract
The thesis develops a new model of the small open economy emphasizing financial constraints, based on the notion of liquidity preference as a constraining tendency on the income adjustment process. Preference for liquid assets results in a number of financial states of constraint, such as financial vulnerability, financial exclusion and financial fragility. These are explored in a regional and international context. Openness brings with it new opportunity as well as potential constraints. Models of small open economies have in general assumed away the latter and have neglected the consequences of financial openness. This is reflected in the absence of a means to identify economies as small and open on the basis of their financial exposure. The financial vulnerability index is developed to address this deficit. Applied to twenty-one countries, the index reveals that emerging countries can be classified as small open economies constrained by preference for liquid assets. Policies designed with the conventional approach to constraints in mind appear to be inappropriate for these countries. The concept of constraints has rarely been dealt with explicitly and a possible categorisation of constraints for mainstream and Post Keynesian schools is developed. It proves to be a useful point of entry for grasping ontological differences between schools. It also provides insights into the constraining tendencies facing the small open economy, and how they can be managed. When these insights are applied to the South African economy, the current macroeconomic policy, and critiques thereof, are found to be wanting.
19

Wilson, Marie Elaine. "Collective bargaining in higher education: A model of statutory constraint". Diss., The University of Arizona, 1990. http://hdl.handle.net/10150/185108.

Full text
Abstract
This dissertation explores the impact of the state public sector legal environment as a determinant of the governance content of faculty collective bargaining agreements. Using content analysis, the legal environment and contractual content are reduced to quantities that may be explored through the lens of population ecology. Legal environment is determined to have a significant impact on the development of contractual content and individual factors of governance and statutory form are identified. Specifically, the statutory scope language and reservation of management rights are seen as the primary environmental forces determining policy and rule issues in contractual content. Further, the relevant temporal element for an ecological model appears to be the tenure of public sector bargaining in each state. National affiliation, institutional type and other temporal variables do not have a significant impact on governance language. Implications and directions for further research are discussed.
20

Cardenas, Lucena Carolina. "Contribución al control geométrico de sistemas de eventos discretos en el álgebra max-plus". Thesis, Ecole centrale de Nantes, 2016. http://www.theses.fr/2016ECDN0004/document.

Full text
Abstract
This work lies within the theory of linear systems over dioids. The initial motivation of this study was to contribute to the analysis and control of max-plus linear systems, specifically using a geometric approach. The contribution of this thesis focuses on two issues. The first part is dedicated to the study of the relationship between the concepts of controlled invariance and dynamic state feedback controlled invariance in a semiring. This relationship allows us to show the equivalence of these two concepts. The second part addresses a new problem in the theory of max-plus linear systems: the synthesis, with a geometric approach, of a static state feedback control law that satisfies a set of specifications on the state space of the system. Specifically, it concerns the control of discrete event systems described by a max-plus linear model. We define and characterize the set of admissible initial conditions, which give rise to non-decreasing solutions. Temporal restrictions on the system state space are described by the semimodule defined by the image of the Kleene star of the matrix associated with the time restrictions. The geometric properties of this semimodule are studied. Sufficient conditions for the existence of a causal control law by static feedback are presented, together with the computation of such causal control laws. To illustrate the application of this approach, two control problems are presented.
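For readers unfamiliar with the algebra, the short Python sketch below (matrix entries invented) implements the two max-plus operations (⊕ = max, ⊗ = +) for matrices together with the Kleene star A* = E ⊕ A ⊕ A² ⊕ ..., which is the object used above to describe the semimodule of admissible states; the star converges here because all circuits of the example matrix have non-positive weight.

```python
EPS = float("-inf")          # the max-plus "zero" (neutral element of max)

def mp_mul(a, b):
    """Max-plus matrix product: (a ⊗ b)[i][j] = max_k (a[i][k] + b[k][j])."""
    return [[max(a[i][k] + b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def mp_add(a, b):
    """Max-plus matrix sum: entrywise max."""
    return [[max(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def identity(n):
    return [[0 if i == j else EPS for j in range(n)] for i in range(n)]

def kleene_star(a):
    """A* = E ⊕ A ⊕ A² ⊕ ...; for an n x n matrix without positive-weight
    circuits, the first n-1 powers are enough."""
    n = len(a)
    result, power = identity(n), identity(n)
    for _ in range(n - 1):
        power = mp_mul(power, a)
        result = mp_add(result, power)
    return result

# Invented two-state timing matrix (entries are delays/offsets between events).
A = [[EPS, -3.0],
     [-2.0, EPS]]
print(kleene_star(A))        # [[0, -3.0], [-2.0, 0]]
```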
21

Baykal, Mustafa. "NATO transformation : prospects and constraints on bridging the capability gap /". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Jun%5FBaykal.pdf.

Full text
22

Saum-Manning, Lisa L. "Avenues of influence: a study of domestic constraints on the U.S. national security policy-making process". Diss., Restricted to subscribing institutions, 2007. http://proquest.umi.com/pqdweb?did=1372034521&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
23

Wallace, Thomas Henry. "Capital constraints to the acquisition of new technology by small business in high technology industries". Thesis, Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/30347.

Full text
24

Toroczkai, Zoltan. "Analytic Results for Hopping Models with Excluded Volume Constraint". Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30481.

Full text
Abstract
Part I: The Theory of Brownian Vacancy Driven Walk. We analyze the lattice walk performed by a tagged member of an infinite 'sea' of particles filling a d-dimensional lattice, in the presence of a single vacancy. The vacancy is allowed to be occupied with probability 1/2d by any of its 2d nearest neighbors, so that it executes a Brownian walk. Particle-particle exchange is forbidden; the only interaction between them being hard core exclusion. Thus, the tagged particle, differing from the others only by its tag, moves only when it exchanges places with the hole. In this sense, it is a random walk "driven" by the Brownian vacancy. The probability distributions for its displacement and for the number of steps taken, after n steps of the vacancy, are derived. Neither is a Gaussian! We also show that the only nontrivial dimension where the walk is recurrent is d=2. As an application, we compute the expected energy shift caused by a Brownian vacancy in a model for an extremely anisotropic binary alloy. In the last chapter we present a Monte-Carlo study and a mean-field analysis for interface erosion caused by mobile vacancies. Part II: One-Dimensional Periodic Hopping Models with Broken Translational Invariance. Case of a Mobile Directional Impurity. We study a random walk on a one-dimensional periodic lattice with arbitrary hopping rates. Further, the lattice contains a single mobile, directional impurity (defect bond), across which the rate is fixed at another arbitrary value. Due to the defect, translational invariance is broken, even if all other rates are identical. The structure of the Master equations leads naturally to the introduction of a new entity, associated with the walker-impurity pair, which we call the quasi-walker. An analytic solution for the distributions in the steady state limit is obtained. The velocities and diffusion constants for both the random walker and the impurity are given, being simply related to that of the quasi-particle through physically meaningful equations. As an application, we extend the Duke-Rubinstein reptation model of gel electrophoresis to include polymers with impurities and give the exact distribution of the steady state.
Ph. D.
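As an illustrative simulation of the Part I model (not taken from the thesis), the Python sketch below lets a single vacancy perform a Brownian walk on an L x L periodic lattice that is otherwise completely filled; the tagged particle moves only when the vacancy exchanges places with it, and the script reports its squared displacement.

```python
import random

def simulate(L=20, steps=100_000, seed=1):
    """Tagged particle on an L x L periodic lattice driven by one Brownian vacancy.
    Returns the tagged particle's squared displacement after `steps` vacancy moves."""
    rng = random.Random(seed)
    vacancy = (0, 0)
    tagged = (1, 0)                       # a neighbouring particle carries the tag
    displacement = [0, 0]                 # unwrapped displacement of the tagged particle
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        dx, dy = rng.choice(moves)        # the vacancy exchanges with a random neighbour
        neighbour = ((vacancy[0] + dx) % L, (vacancy[1] + dy) % L)
        if neighbour == tagged:           # the tagged particle hops into the hole
            tagged = vacancy
            displacement[0] -= dx         # it moves opposite to the vacancy's step
            displacement[1] -= dy
        vacancy = neighbour
    return displacement[0] ** 2 + displacement[1] ** 2

print("squared displacement after 1e5 vacancy steps:", simulate())
```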
25

Ng, Mo Ching Norma. "Commercial constraints and news content : a comparative study of quality newspapers in France and in the U.S". HKBU Institutional Repository, 2004. http://repository.hkbu.edu.hk/etd_ra/611.

Full text
26

Kaymak, Yalcin. "A Composite Frame/Joint Super Element for Structures Strengthened by Externally Bonded Steel/FRP Plates". Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1052547/index.pdf.

Full text
Abstract
A materially non-linear layered beam super element is developed for the analysis of RC beams and columns strengthened by externally bonded steel/FRP plates. The elasto-plastic behavior of RC member is incorporated by its internally generated or externally supplied moment-curvature diagram. The steel plate is assumed to be elasto-plastic and the FRP laminate is assumed to behave linearly elastic up to rupture. The thin epoxy layer between the RC member and the externally bonded lamina is simulated by a special interface element which allows for the changing failure modes from steel plate yielding/FRP plate rupture to separation of the bonded plates as a result of bond failure in the epoxy layer. An empirical failure criterion based on test results is used for the epoxy material of the interface. The most critical aspect of such applications in real life frame structures is the anchorage conditions at the member ends and junctions. This has direct influence on the success and the effectiveness of the application. Therefore, a special corner piece anchorage element is also considered in the formulation of the joint super element, which establishes the fixity and continuity conditions at the member ends and the joints.
27

Hickerson, Jon D. (Jon David). "The Impact of Corporate Interlocks on Power and Constraint in the Telecommunications Industry". Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc500891/.

Full text
Abstract
Using the tools of structural and network analysis developed by Ronald S. Burt and others, this study investigated the communication patterns among corporate officers of American Telephone and Telegraph Corporation (A.T. & T.) and United Telecommunications Corporation (Sprint). Data on contacts, efficiency, network density, and constraint indicate that opportunities for power and constraint remained relatively stable at United Telecommunications between 1980 and 1990. A.T. & T., on the other hand, was more affected by the drastic changes in the telecommunications industry. The span of A.T. & T. has grown smaller and the potential for constraining relations between A.T. & T. and financial institutions increased between 1980 and 1990.
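The study relies on Burt's structural measures; as a reference point, the Python sketch below (contact strengths invented) computes Burt's dyadic constraint c_ij = (p_ij + Σ_q p_iq·p_qj)² and the aggregate constraint of each actor, where p_ij is the proportion of actor i's relations invested in contact j.

```python
# Invented symmetric contact-strength matrix among four corporate officers.
officers = ["ceo", "cfo", "counsel", "banker"]
z = {("ceo", "cfo"): 3, ("ceo", "counsel"): 2, ("ceo", "banker"): 1,
     ("cfo", "banker"): 2, ("counsel", "banker"): 1}

def strength(a, b):
    return z.get((a, b), z.get((b, a), 0))

def proportion(i, j):
    """p_ij: share of i's total relation strength invested in contact j."""
    total = sum(strength(i, k) for k in officers if k != i)
    return strength(i, j) / total if total else 0.0

def dyadic_constraint(i, j):
    """Burt's c_ij = (p_ij + sum over q != i, j of p_iq * p_qj) ** 2."""
    indirect = sum(proportion(i, q) * proportion(q, j)
                   for q in officers if q not in (i, j))
    return (proportion(i, j) + indirect) ** 2

def aggregate_constraint(i):
    return sum(dyadic_constraint(i, j)
               for j in officers if j != i and strength(i, j) > 0)

for officer in officers:
    print(officer, round(aggregate_constraint(officer), 3))
```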
28

McLaughlin, Josetta S. "Operationalizing social contract: application of relational contract theory to exploration of constraints on implementation of an employee assistance program". Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/39741.

Full text
29

Nasri, Amin. "On the Dynamics and Statics of Power System Operation : Optimal Utilization of FACTS Devices and Management of Wind Power Uncertainty". Doctoral thesis, KTH, Elektriska energisystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154576.

Full text
Abstract
Nowadays, power systems are dealing with some new challenges raised by the major changes that have taken place since the 80's, e.g., deregulation in electricity markets, significant increase of electricity demands and, more recently, large-scale integration of renewable energy resources such as wind power. Therefore, system operators must make some adjustments to accommodate these changes into the future of power systems. One of the main challenges is maintaining the system stability, since the extra stress caused by the above changes reduces the stability margin and may lead to the rise of many undesirable phenomena. The other important challenge is to cope with uncertainty and variability of renewable energy sources, which make power systems become more stochastic in nature, and less controllable. Flexible AC Transmission Systems (FACTS) have emerged as a solution to help power systems with these new challenges. This thesis aims to appropriately utilize such devices in order to increase the transmission capacity and flexibility, improve the dynamic behavior of power systems and integrate more renewable energy into the system. To this end, the most appropriate locations and settings of these controllable devices need to be determined. This thesis mainly looks at (i) rotor angle stability, i.e., small signal and transient stability, and (ii) system operation under wind uncertainty. In the first part of this thesis, trajectory sensitivity analysis is used to determine the most suitable placement of FACTS devices for improving rotor angle stability, while in the second part, optimal settings of such devices are found to maximize the level of wind power integration. As a general conclusion, it was demonstrated that FACTS devices, installed in proper locations and tuned appropriately, are effective means to enhance the system stability and to handle wind uncertainty. The last objective of this thesis work is to propose an efficient solution approach based on Benders' decomposition to solve a network-constrained AC unit commitment problem in a wind-integrated power system. The numerical results show validity, accuracy and efficiency of the proposed approach.

The Doctoral Degrees issued upon completion of the programme are issued by Comillas Pontifical University, Delft University of Technology and KTH Royal Institute of Technology. The invested degrees are official in Spain, the Netherlands and Sweden, respectively.

30

Grose, Roger T. "Cost-constrained project scheduling with task durations and costs that may increase over time demonstrated with the U.S. Army future combat systems /". Thesis, View thesis via NPS View thesis via DTIC, 2004. http://handle.dtic.mil/100.2/ADA424957.

Full text
Abstract
Thesis (M.S. in Operations Research)--Naval Postgraduate School, 2004.
Title from title screen (viewed June 28, 2005). "June 2004." Includes bibliographical references (p. 59-61). Also issued in paper format.
31

Tortochot, Éric. "Pour une didactique de la conception. Les étudiants en design et les formes d'énonciation de la conception". Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM3013.

Full text
Abstract
Design activity can be analyzed through the signs produced by design developers and their discourse on the new models of artifacts they wish to develop. Psycho-semiotic analysis of student activity in a Professional Master of Design programme shows two processes. First, to solve problems, students regularly report on their own on their activities, stating the different tasks accomplished using various representation instruments. Second, to develop different models of artifacts, they falsify the constraints the teachers impose with more or less insistence. As students communicate about their work and interact with many people, they become aware of their ongoing design activity and organize metaknowledge, values and skills, that is, design abilities. All these acquired abilities are based not only upon the reproduction of conceptual and methodological legacies, routines or habits, but on real inclinations to challenge these legacies using opportunistic strategies. Thinking about design didactics allows us to understand that students, assisted by teachers, themselves induce a renewal of the professional design activity. This thesis is an attempt to show that a design didactics, to be formalized, requires taking into account the statement process as an essential cognitive activity for the acquisition of design skills.
32

Abate, Domenico. "Modelling and control of RFX-mod tokamak equilibria". Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3421955.

Full text
Abstract
The subject that concerns this thesis is the modelling and control of plasma equilibria in the RFX-mod device operating as shaped tokamak. The aim was to develop an overall model of the plasma-conductors-controller system of RFX-mod shaped tokamak configuration for electromagnetic control purposes, with particular focus on vertical stability. Thus, the RFX-mod device is described by models of increasing complexity and involving both theoretical and experimental data. The CREATE-L code is used to develop 2D linearized plasma response models, with simplifying assumptions on the conducting structures (axisymmetric approximations). Such models, thanks to their simplicity, have been used for feedback controller design. The CarMa0 code is used to develop linearized plasma response models, but considering a detailed 3D description of the conducting structures. These models provide useful hints on the accuracy of the simplified models and on the importance of 3D structures in the plasma dynamics. The CarMa0NL code is used to model the time evolution of plasma equilibria, by taking into account also nonlinear effects which can come into play during specific phases (e.g. disruptions, limiter-to-divertor transitions, L-H transition etc.). The activity can be divided into two main parts: the first one involves the modelling of numerically generated low-β plasmas, which are used as a reference for the design and implementation of the plasma shape and position control system; the second part is related to the results of the experimental campaigns on shaped plasmas from low-β to H-mode regime, with particular efforts on the development of a novel plasma response model for the new equilibrium regimes achieved. Several challenges and peculiarities characterize the project in both the modelling and control frameworks. Strong plasma shape and different plasma regimes (i.e. low-β to H-mode plasmas), deeply affect the modelling activity and require the development of several numerical tools and methods of analysis. From the control system point of view, non-totally observable dynamic and model order reduction requirements allowed a full application of the model based approach in order to successfully design the plasma shape and vertical stability control system. The first part is based on theoretical data generated by the MAXFEA equilibrium code and used to derive the linearized model through the CREATE-L code. Two reference models have been produced for the magnetic configurations interested in shaped operations: the lower single null (LSN) and the upper single null (USN). The CREATE-L models are the most simple in terms of modelling complexity, because the conducting structures are described within the axisymmetric approximation. On the other hand, the simple but reliable properties of the CREATE-L model led to the successful design of the RFX-mod plasma shape and control system, which has been successfully tested and used to increase plasma performances involved in the second part of the thesis. Then, an investigation on the possible 3D effects of the conducting structures on these numerically generated plasma configurations has been carried out by producing plasma linearized models with an increased level of complexity. A detailed 3D volumetric description of the conducting structures of RFX-mod has been carried out and included in the plasma linearized models through the CarMa0 code. A comparison between the accuracy of this model and the previous 2D one has been performed. 
The different assumptions and approximations of the various models allow a clear identification of the key phenomena ruling the evolution of the n=0 vertical instability in RFX-mod tokamak discharges, and hence, provide fundamental information in the planning and the execution of related experiments and in refining the control system design. Finally, the nonlinear evolutionary equilibrium model including 3D volumetric structures CarMa0NL has been used to model nonlinear effects by simulating a "fictitious" linear current quench. The second part involves a modelling activity strictly related to the results of the experimental campaigns. In particular, new linearized models for the experimental plasmas in USN configuration have been carried out for all the plasma regimes involved in the experimental campaign, i.e. from low-β to H-mode. An iterative procedure for the production of accurate linearized plasma response models has been realized in order to handle the experimental data. The new plasma linearized models allowed further investigations on vertical stability, including 3D wall effects, in the three different plasma regimes (i.e. low-β, intermediate-β, H-mode). Furthermore, the axisymmetric plasma linearized models (CREATE-L) have been analyzed in the framework of the control theory revealing peculiar features in terms of associated SISO transfer function for vertical stability control and in terms of full MIMO model for shaping control. The MIMO model has been used to investigate the plasma wall-gaps oscillations experimentally observed in some intermediate-β plasma shots. A non-linear time evolution of the plasma discharge for a low-β plasma has been carried out by using the evolutionary equilibrium code CarMa0NL. Finally, it was investigated the vertical instability for the experimental plasmas in terms of a possible relation between plasma parameters and the occurrence of it; for these purposes, the solution of the inverse plasma equilibrium problem for the production of numerically generated plasma equilibria with variations on the plasma parameters observed experimentally was performed. This involves a wide class of numerical methods that will be described in details. Then, statistical hypothesis test has been adopted to compare the mean values of the parameters of both experimental and numerically generated plasmas showing different behaviours in terms of vertical stability.
This thesis deals with the modelling and control of shaped (non-circular cross-section) equilibrium plasmas in the RFX-mod experiment operated as a tokamak. The goal is to develop an overall model of RFX-mod (including plasma, conductors and controller) for the purpose of electromagnetic plasma control. The RFX-mod experiment has been described with models of increasing complexity, involving both theoretical and experimental data. The CREATE-L code was used to develop linearized plasma response models under simplifying assumptions on the representation of the conducting structures (axisymmetric approximation). Thanks to their simplicity, these models were used for the design of the control system. The CarMa0 code was used to develop analogous models with a three-dimensional representation of the conducting structures; these make it possible to verify the accuracy of the simplified models and to investigate the influence of the three-dimensional structures on the system dynamics. The CarMa0NL code allowed the treatment of time-evolving, nonlinear phenomena (e.g. disruptions, limiter-divertor transitions, L-H transitions, etc.). The activity can be divided into two parts: the first concerns the modelling of theoretical low-β plasmas, not obtained experimentally, used as a reference for the design and implementation of the plasma shape and vertical position control system; the second part is related to the results of the experimental campaigns on shaped plasmas in different regimes, from low β to H-mode, with particular attention to the development of a new linearized plasma response model for the newly achieved equilibrium regimes. The research activity is characterized by several issues and peculiarities both in terms of modelling and of control. The pronounced non-circularity of the plasma shape and the different regimes involved strongly influenced the modelling activity, which indeed required the development of several computational and data-analysis tools. As far as control is concerned, the incomplete observability of the system dynamics and the need to reduce the model order are only some of the aspects that shaped the design of the shape and vertical position control system. The first part is based on theoretical data generated by the MAXFEA equilibrium code and then used to derive the linearized model through the CREATE-L code. In this context, two reference models were produced for the magnetic configurations of shaped plasmas: the lower single null (LSN) and the upper single null (USN). The CREATE-L models are the simplest in terms of modelling complexity, since the conducting structures of the machine are described in the axisymmetric approximation. On the other hand, the simple yet reliable properties of the CREATE-L model led to the design of the RFX-mod plasma shape and vertical position control system, which was subsequently tested and successfully used to increase plasma performance. Subsequently, an analysis of the possible 3D effects of the conducting structures on the two reference plasma configurations was carried out, producing linearized models characterized by an ever-increasing level of complexity.
A detailed volumetric (3D) description of the RFX-mod conducting structures was carried out and included in the linearized plasma models through the CarMa0 code. A comparison between the accuracy of this model and that of the previous 2D one was then performed. The different assumptions and approximations of the various models allow a clear identification of the key phenomena governing the evolution of the n=0 vertical instability in RFX-mod tokamak discharges and therefore provide fundamental information for the planning and execution of related experiments as well as for the refinement of the control system design. Finally, the nonlinear evolutionary equilibrium model CarMa0NL, which includes the 3D volumetric structures, was used to model nonlinear effects by simulating a "fictitious" linear current quench. The second part consists of a modelling activity strictly related to the results of the experimental campaigns. In particular, new linearized models were produced for the experimental plasmas in the USN configuration for all the plasma regimes involved, i.e. from low β up to H-mode. An iterative procedure for the production of highly accurate linearized plasma response models was devised and developed in order to best reproduce the experimental data. The new models allowed further studies on vertical stability, including 3D wall effects, in the three different regimes studied (low β, intermediate β, H-mode). The axisymmetric linearized models (CREATE-L) were analyzed from the point of view of control theory, revealing peculiar features in terms of the SISO transfer function associated with vertical stability control and in terms of the full MIMO model related to shape control. The MIMO model was used to investigate the plasma shape oscillations experimentally observed in some intermediate-β discharges. The nonlinear time evolution of the plasma discharge, for experimental low-β plasmas, was carried out using the evolutionary equilibrium code CarMa0NL. Finally, the vertical instability of the experimental plasmas was studied in terms of a possible relation between the plasma parameters and its occurrence; to this end, the inverse equilibrium problem was solved to produce reference theoretical plasma equilibria, generated as variations of the experimentally observed plasma parameters, which involves a wide range of numerical methods described in detail. A statistical hypothesis test was then adopted to compare the mean values of the plasma parameters, both experimental and theoretical, associated with two different behaviours in terms of vertical stability.
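As an aside on the last step described above, a comparison of mean parameter values between two groups of discharges can be carried out with a standard two-sample test; the following minimal Python sketch (with made-up numbers, not data from the thesis) illustrates the kind of hypothesis test mentioned:

```python
# Illustrative sketch only: comparing the mean of one plasma parameter between
# two groups of discharges that differ in vertical stability, using Welch's
# two-sample t-test. The arrays below are placeholder data, not thesis results.
import numpy as np
from scipy import stats

stable_group = np.array([1.42, 1.45, 1.43, 1.41, 1.44, 1.46])
unstable_group = np.array([1.52, 1.55, 1.50, 1.53, 1.54, 1.51])

# Welch's test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(stable_group, unstable_group, equal_var=False)

alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the group means differ significantly.")
else:
    print("Cannot reject H0: no significant difference in means.")
```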
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Kreikebaum, Frank Karl. "Control of transmission system power flows". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50392.

Texto completo
Resumen
Power flow (PF) control can increase the utilization of the transmission system and connect lower-cost generation with load. While PF controllers have demonstrated the ability to realize dynamic PF control for more than 25 years, PF control has been sparsely implemented. This research re-examines PF control in light of the recent development of fractionally-rated PF controllers and the incremental power flow (IPF) control concept. IPF control is the transfer of an incremental quantity of power from a specified source bus to a specified destination bus along a specified path without influencing power flows on circuits outside of the path. The objectives of the research are to develop power system operation and planning methods compatible with IPF control, test the technical viability of IPF control, develop transmission planning frameworks leveraging PF and IPF control, develop power system operation and planning tools compatible with PF control, and quantify the impacts of PF and IPF control on multi-decade transmission planning. The results suggest that planning and operation of the power system are feasible with PF controllers and may lead to cost savings. The proposed planning frameworks may incent transmission investment and be compatible with the existing transmission planning process. If the results of the planning tool demonstration scale to the national level, the savings in electricity expenditures would be $13 billion per year (2010$). The proposed incremental packetized energy concept may facilitate a reduction in the environmental impact of energy consumption and lead to additional cost savings.
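For readers unfamiliar with the incremental-transfer idea, the sketch below (illustrative data only, not the tools developed in this dissertation) shows how a source-to-destination transfer maps onto line flows through DC power-flow sensitivities (PTDFs), the kind of sensitivity information an incremental scheme relies on:

```python
# Minimal DC power-flow / PTDF sketch with a made-up 3-bus network.
import numpy as np

# Lines given as (from_bus, to_bus, reactance_pu)
lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.2)]
n_bus, slack = 3, 0

# Build the DC susceptance matrix B (lossless approximation).
B = np.zeros((n_bus, n_bus))
for f, t, x in lines:
    b = 1.0 / x
    B[f, f] += b; B[t, t] += b
    B[f, t] -= b; B[t, f] -= b

# Invert the reduced B matrix (slack removed), re-embed with zeros for the slack.
keep = [i for i in range(n_bus) if i != slack]
X = np.zeros((n_bus, n_bus))
X[np.ix_(keep, keep)] = np.linalg.inv(B[np.ix_(keep, keep)])

# PTDF[l, k]: flow change on line l per 1 MW injected at bus k, withdrawn at slack.
ptdf = np.array([[(X[f, k] - X[t, k]) / x for k in range(n_bus)]
                 for f, t, x in lines])

# Incremental 10 MW transfer from bus 2 (source) to bus 1 (destination):
# superpose +10 MW at the source and -10 MW at the destination.
delta = 10.0 * (ptdf[:, 2] - ptdf[:, 1])
for (f, t, _), d in zip(lines, delta):
    print(f"line {f}-{t}: flow change {d:+.2f} MW")
```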
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Swärdh, Jan-Erik. "Commuting time choice and the value of travel time". Doctoral thesis, Örebro universitet, Handelshögskolan vid Örebro universitet, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-8524.

Texto completo
Resumen
In the modern industrialized society, a long commuting time is becoming more and more common. However, commuting results in a number of different costs, for example external costs such as congestion and pollution as well as internal costs such as individual time consumption. On the other hand, increased commuting opportunities offer welfare gains, for example via larger local labor markets. The length of the commute that is acceptable to workers is determined by the workers' preferences and the compensation opportunities in the labor market. In this thesis, the value of travel time, or of commuting time changes, is empirically analyzed in four self-contained essays. First, a large set of register data on the Swedish labor market is used to analyze the commuting time changes that follow residential relocations and job relocations. The average commuting time is longer after relocation than before, regardless of the type of relocation. The commuting time change after relocation is found to differ substantially with socio-economic characteristics, and these effects also depend on where the distribution of commuting time changes is evaluated. The same data set is used in the second essay to estimate the value of commuting time (VOCT). Here, VOCT is estimated as the trade-off between wage and commuting time, based on the effects wage and commuting time have on the probability of changing jobs. The estimated VOCT is found to be relatively large, in fact about 1.8 times the net wage rate. In the third essay, the VOCT is estimated on a different type of data, namely data from a stated preference survey. Spouses of two-earner households are asked to individually make trade-offs between commuting time and wage. The subjects make choices both when only their own commuting time and wage are changed and when their own and their spouse's commuting time and wage are changed simultaneously. The results show a relatively high VOCT compared to other studies. Also, there is a tendency for both spouses to value the commuting time of the wife most highly. Finally, the presence of hypothetical bias in a value-of-time experiment without scheduling constraints is tested. The results show a positive but not significant hypothetical bias. By taking preference certainty into account, positive hypothetical bias is found for the non-certain subjects.
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Alves, Guilherme de Oliveira. "Uma nova metodologia para estimação de estados em sistemas de distribuição radiais utilizando PMUs". Universidade Federal de Juiz de Fora, 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/1528.

Texto completo
Resumen
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work presents a new methodology for static state estimation in electric power distribution systems that estimates the branch currents as state variables, using phasor voltage and branch current measurements obtained from phasor measurement units (PMUs). The methodology consists of solving a nonlinear optimization problem that minimizes a quadratic objective function associated with the measurements and estimated states, subject to load constraints, based on historical data, for the network buses that do not have PMUs installed; this is the main contribution of the work. A PMU allocation proposal is also presented, which consists of allocating two units on each branch of the system, one at the beginning and one at the end of the section, seeking to use as few units as possible without compromising the quality of the estimated states. The optimization problem is solved in two ways: through the 'fmincon' toolbox of the Matlab software, a widely used tool for solving optimization problems, and through a computational implementation of the Safety Barrier Interior Point Method (SFTB-IPM) proposed in the literature. During the state estimation process, measurements obtained from a power flow are used to emulate the PMUs installed in the analyzed systems, varying the loading of each system around its historical load average up to the established upper and lower limits; the behaviour of the state estimator in the presence of white noise in the measurements is verified for all analyzed systems. A 15-bus tutorial distribution system and three systems from the literature with 33, 50 and 70 buses, respectively, were analyzed. Distributed generation units were included in the tutorial and 70-bus systems to verify the behaviour of the state estimator. All results of the state estimation process are obtained with the two solution methods presented, and the performance of each method is compared, mainly with respect to computational time. All results obtained were validated using a conventional power flow program and show good accuracy, with a low objective function value even in the presence of noise in the measurements, reliably reflecting the real state of the distribution system, which makes the proposed methodology attractive.
This work presents a new methodology for static state estimation in electric power distribution systems which estimates the branch currents as state variables, using voltage and branch current phasor measurements obtained from phasor measurement units (PMUs). The methodology consists of solving a nonlinear optimization problem minimizing a quadratic objective function associated with the measurements and estimated states, subject to load constraints for the non-monitored loads based on historical data, which is the main contribution of this work. A PMU allocation strategy is presented which consists of allocating two PMUs for each system branch, one at the beginning and another at the end, trying to use as few PMUs as possible in such a way that the quality of the estimated states is not compromised. The solution of the optimization problem is obtained in two ways: the first is the 'fmincon' toolbox of the Matlab software, a widely used optimization tool; the second is a computer implementation of the safety barrier interior point method (SFTB-IPM) proposed in the literature. Comparisons of computing times and results obtained with both methods are shown. A power flow program is used to obtain the voltages and branch currents in order to emulate the PMU data in the state estimation process. Additionally, the non-monitored loads are varied from their minimum to their maximum bounds, while white noise errors are allowed in the PMU measurements. A tutorial test system of 15 buses is fully explored and three IEEE test systems of 33, 50 and 70 buses are used to show the effectiveness of the proposed methodology. For the tutorial and 70-bus systems, distributed generation units were included to assess the state estimator behavior. All results from the state estimation process are obtained with the two presented solution methods and their computing-time performance is compared. The results obtained were validated using a conventional power flow program and show good accuracy, with a low objective function value even in the presence of white noise errors in the measurements, reflecting the reliability of the proposed methodology and making it very attractive for distribution system monitoring.
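The following minimal Python sketch (my own toy example, not the thesis code) illustrates the general shape of such a constrained branch-current state estimation problem on a tiny radial feeder; the network, numbers and bounds are placeholders, and SciPy's SLSQP merely stands in for a generic nonlinear programming solver:

```python
# Branch-current state estimation for a 3-bus radial feeder, posed as a
# constrained least-squares problem. States are the real/imag parts of the two
# branch currents; the load of the non-monitored bus is bounded by historical data.
import numpy as np
from scipy.optimize import minimize

# "Measured" branch-current phasors from PMUs (per unit), lightly noisy.
z = np.array([1.02, -0.21, 0.48, -0.09])   # [re(I01), im(I01), re(I12), im(I12)]
w = np.ones_like(z)                         # measurement weights (1/sigma^2)

def objective(x):
    r = x - z                               # measurement model h(x) = x here
    return float(np.sum(w * r * r))

# Load current at bus 1 is I01 - I12 (Kirchhoff's current law);
# hypothetical historical data bound its magnitude.
load_min, load_max = 0.3, 0.8
def load1_mag(x):
    return np.hypot(x[0] - x[2], x[1] - x[3])

cons = [{"type": "ineq", "fun": lambda x: load1_mag(x) - load_min},
        {"type": "ineq", "fun": lambda x: load_max - load1_mag(x)}]

sol = minimize(objective, z.copy(), method="SLSQP", constraints=cons)
print("estimated branch currents:", np.round(sol.x, 4))
print("bus-1 load magnitude:", round(load1_mag(sol.x), 4))
```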
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

SINGH, VIKAS. "STATIC AVAILABLE TRANSFER CAPABILITY ENHANCEMENT USING ‘UPFC’ UNDER CONTINGENCIES". Thesis, 2012. http://dspace.dtu.ac.in:8080/jspui/handle/repository/13957.

Texto completo
Resumen
M.TECH
A combination of lack of investment and environmental issues has limited the construction of new transmission infrastructure. This leads to a requirement for better utilization of the existing transmission network. Available Transfer Capability (ATC) is defined as "a measure of the transfer capability remaining in the physical transmission network for further commercial activity over and above already committed uses". This index is often used as a measure of the additional power that can be securely transferred by a transmission network. ATC depends on a number of factors such as system generation dispatch, system load level, load distribution in the network, power transfers between areas and the limits imposed on the transmission network due to thermal, voltage and stability considerations. The computation of ATC is very important to transmission system security and market forecasting. While power marketers focus on fully utilizing the transmission system, engineers are concerned with transmission system security, as any power transfer over the limit might result in system instability. With the development of the power market, bilateral trade has increased greatly, posing ever higher challenges to the reliable and economic operation of the power grid. Flexible AC Transmission System (FACTS) devices offer a new way of addressing these challenges, bringing an unprecedented degree of control over power flow, stability and transmission capacity and improving power system performance. Placement of FACTS controllers may be quite effective for enhancing the ATC of a power system due to their capability to improve line voltages and control power flows through lines. Among all FACTS devices, placement of the Unified Power Flow Controller (UPFC) appears to be the most effective in enhancing the available transfer capability of the transmission network due to its ability to control series and shunt parameters simultaneously.
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Chang-Chih Liu y 劉昶志. "Static Task Scheduling for Heterogeneous Distributed Computing Systems with Memory Constraints". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/11115762411488195551.

Texto completo
Resumen
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering
98 (ROC academic year)
Effective task scheduling of parallel applications represented by directed acyclic graphs (DAGs) is critical for obtaining high performance in heterogeneous distributed computing systems (HDCSs). As the problem of finding an optimal schedule has been shown to be NP-complete in the general case, many heuristic algorithms for scheduling on HDCSs have been proposed recently. However, none of them consider the case where processing elements (PEs) have memory constraints which prevent an operating system from being installed on the PEs to provide memory management. Tasks have to be stored at specific physical memory addresses and loaded on demand, which means that task scheduling in such computing systems requires consideration of the loading time of tasks, associated with their code size, in addition to the heterogeneity of the PEs and the inter-PE communication overhead. To identify the different code sizes of tasks, the nodes of the DAG are colored with different color types according to their functionalities, yielding a colored DAG. This thesis presents a new static list-based scheduling algorithm, called the Heterogeneous Loading Time Aware (HLTA) algorithm, which has three distinctive features. First, the algorithm partitions the priority list using a layer heuristic on the colored DAG. Second, the algorithm uses a novel greedy mechanism to reorder and schedule nodes of the same color onto the same PE whenever possible. Finally, a braking mechanism is used to refine the greedy reordering when scheduling a colored DAG with a high color ratio. A comparison study based on randomly generated graphs shows that the HLTA algorithm significantly surpasses the Heterogeneous Earliest-Finish-Time (HEFT) algorithm in terms of schedule quality and average schedule length ratio improvement.
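A rough Python sketch of the general idea, loading-time-aware list scheduling on a colored DAG, is given below; the task set, costs and priority order are invented for illustration, and the code does not reproduce the HLTA heuristics themselves:

```python
# Greedy earliest-finish-time scheduling where a task's code (identified by its
# color) must be loaded onto a PE before first use, so reusing a PE that has
# already loaded the same color saves the loading time. All numbers are made up.
# Task: (color, exec_time per PE, predecessors)
tasks = {
    "A": ("red",  [4, 6], []),
    "B": ("red",  [3, 5], ["A"]),
    "C": ("blue", [6, 3], ["A"]),
    "D": ("blue", [5, 4], ["B", "C"]),
}
load_time = {"red": 2, "blue": 3}       # code-loading cost per color
comm = 1                                # inter-PE communication cost per edge
n_pe = 2

pe_free = [0.0] * n_pe                  # when each PE becomes idle
loaded = [set() for _ in range(n_pe)]   # colors already resident on each PE
finish, placed = {}, {}

for t in ["A", "B", "C", "D"]:          # simple topological priority order
    color, exec_t, preds = tasks[t]
    best = None
    for pe in range(n_pe):
        # data from predecessors on other PEs arrives after a communication delay
        ready = max([0.0] + [finish[p] + (0 if placed[p] == pe else comm) for p in preds])
        start = max(ready, pe_free[pe]) + (0 if color in loaded[pe] else load_time[color])
        end = start + exec_t[pe]
        if best is None or end < best[0]:
            best = (end, pe)
    end, pe = best
    finish[t], placed[t] = end, pe
    pe_free[pe] = end
    loaded[pe].add(color)
    print(f"task {t} -> PE{pe}, finishes at {end}")
print("makespan:", max(finish.values()))
```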
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Liu, Yen-Ching y 劉彥慶. "Effects of Static Biaxial Mechanical Constraints on Mechanical Properties of Planar Engineered Tissues". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/07793834824951282056.

Texto completo
Resumen
Master's thesis
National Chiao Tung University
Department of Mechanical Engineering
100 (ROC academic year)
Prior studies indicated that mechanical loading influences the turnover of cells and extracellular matrix in tissues. We used fibroblast-seeded collagen gels as a model to study mechano-biological responses under defined biaxial mechanical loading. Under defined static biaxial mechanical conditions, cell-seeded collagen gels show irreversible micro-structural change after six days in culture. We thus wanted to verify that the previous mechanical constraint can also lead to a significant mechanical anisotropy that correlates with the fiber alignment, and to establish a link between the two. The mechanical anisotropy shown in this study, assessed by controlled-tension extension tests, appeared to be related to the fiber alignment demonstrated in the previous study using nonlinear optical microscopy. The variations in mechanical anisotropy may be due to slightly different conditions of the gels (e.g., cell density and cell activity) or culturing environments. We will examine the influence of cyclic biaxial stretching on these gels in the near future, which will likely lead to a better understanding of the role of mechanical forces in tissue development.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Hung, Syuan Kai y 洪璿凱. "Moving Object Detection and Tracking Using Binocular Vision Based on Spatial Constraints of Static Environment". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/60285200837621829570.

Texto completo
Resumen
Master's thesis
Tamkang University
Master's Program, Department of Mechanical and Electro-Mechanical Engineering
99 (ROC academic year)
This thesis presents a visual simultaneous localization, mapping and moving object tracking (SLAMMOT) system based on the extended Kalman filter (EKF). First, we use the geometric constraints of static landmarks in three-dimensional space to design the algorithms for data association and map management. Since these algorithms are independent of the EKF estimator, the SLAMMOT system can recover automatically from the kidnapped-robot problem. Second, we use the same geometric constraints to develop the algorithm for moving object detection. The developed algorithms are integrated with the EKF estimator to carry out experiments on SLAMMOT tasks in indoor environments.
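The estimator underlying such systems is the standard EKF predict/update cycle; a generic sketch is shown below (a toy linear example of my own, not the SLAMMOT implementation):

```python
# Generic EKF predict/update skeleton with a toy 1D example as a placeholder
# for the actual motion and measurement models.
import numpy as np

def ekf_predict(x, P, f, F, Q):
    # f: motion model, F: its Jacobian at x, Q: process noise covariance
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    # h: measurement model, H: its Jacobian at x, R: measurement noise covariance
    y = z - h(x)                                   # innovation
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# Toy constant-position example with a direct position measurement.
x, P = np.array([0.0]), np.eye(1)
F = H = np.eye(1)
Q, R = 0.01 * np.eye(1), 0.1 * np.eye(1)
for z in [0.9, 1.1, 1.0]:
    x, P = ekf_predict(x, P, lambda s: s, F, Q)
    x, P = ekf_update(x, P, np.array([z]), lambda s: s, H, R)
print("state estimate:", x, "covariance:", P)
```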
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Nagaraj, B. P. "Kinematic And Static Analysis Of Over-Constrained Mechanisms And Deployable Pantograph Masts". Thesis, 2009. https://etd.iisc.ac.in/handle/2005/1014.

Texto completo
Resumen
Foldable and deployable space structures refer to a broad category of pre-fabricated structures that can be transformed from a compact folded configuration to a predetermined expanded configuration. Such deployable structures are stable and can carry loads. These structures are also mechanisms with one degree of freedom throughout their transformation stages, whether in the initial folded form or in the final expanded configuration. Usually, pantograph mechanisms or scissor-like elements (SLEs) are part of such deployable structures. A new analysis tool to study the kinematics and statics of foldable and deployable space structures/mechanisms containing SLEs has been developed in this thesis. Cartesian coordinates are used to study the kinematics of large deployable structures. For many deployable structures the degree of freedom derived using the standard Grubler-Kutzbach criterion is found to be less than one, even though the deployable structure/mechanism can clearly move. In this work the dimension of the nullspace of the derivative of the constraint equations is used to obtain the correct degrees of freedom of the deployable structure. A numerical algorithm has been developed to identify the redundant joints/links in the deployable structure/mast which cause the incorrect degrees of freedom obtained using the Grubler-Kutzbach criterion. The effectiveness of the algorithm has been illustrated with several examples consisting of triangular and box-shaped SLE masts and an eighteen-sided SLE ring with revolute joints. Furthermore, the constraint Jacobian matrix is also used to evaluate the global degrees of freedom of deployable masts/structures. Closed-form kinematic solutions have been obtained for the triangular and box-type masts and, as a generalization, extended to a general n-sided SLE-based ring structure. The constraint Jacobian matrix based approach has also been extended to obtain the load carrying characteristics of deployable structures with SLEs, in terms of deriving the stiffness matrix of the structure. The stiffness matrix has been obtained in symbolic form and it matches results obtained from other commonly used techniques such as the force and displacement methods. It is shown that the approach developed in this thesis is applicable to all types of practical masts with revolute joints, where the revolute joint constraints are enforced through the method of Lagrange multipliers and a penalty formulation. To demonstrate the effectiveness of the new method, the procedure is applied to solving (i) a simple hexagonal SLE mast, and (ii) a complex assembly of four hexagonal masts, and the results are presented. In summary, a complete analysis tool to study masts with SLEs has been developed. It is shown that the new tool is effective in evaluating the redundant links/joints, thereby overcoming the problems associated with the well-known Grubler-Kutzbach criterion. Closed-form kinematic solutions of triangular and box SLE masts, as well as of a general n-sided SLE ring with revolute joints, have been obtained. Finally, the constraint Jacobian based method is used to evaluate the stiffness matrix of the SLE masts. The theory and algorithms presented in this thesis can be extended to masts of different shapes and to stacked masts.
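The nullspace idea can be illustrated in a few lines of Python on a planar four-bar linkage (a toy example of my own, not the masts analyzed in the thesis): the mobility follows from the rank of a numerically evaluated constraint Jacobian rather than from a joint/link count:

```python
# Mobility of a constrained mechanism as the nullspace dimension of the
# constraint Jacobian. Planar four-bar loop closure in the joint angles.
import numpy as np
from scipy.optimize import fsolve

l1, l2, l3, l4 = 4.0, 2.0, 4.0, 3.0   # ground, crank, coupler, rocker lengths

def constraints(q):
    th2, th3, th4 = q
    return [l2*np.cos(th2) + l3*np.cos(th3) - l4*np.cos(th4) - l1,
            l2*np.sin(th2) + l3*np.sin(th3) - l4*np.sin(th4)]

def jacobian(q, eps=1e-6):
    q = np.asarray(q, float)
    J = np.zeros((2, 3))
    for j in range(3):            # central finite differences, column by column
        dq = np.zeros(3); dq[j] = eps
        J[:, j] = (np.array(constraints(q + dq)) - np.array(constraints(q - dq))) / (2 * eps)
    return J

# Assemble the linkage for a crank angle of 60 degrees, then evaluate mobility.
sol = fsolve(lambda x: constraints([np.pi / 3, x[0], x[1]]), [0.3, 1.5])
q = [np.pi / 3, sol[0], sol[1]]
rank = np.linalg.matrix_rank(jacobian(q))
print("degrees of freedom =", 3 - rank)   # expected: 1
```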
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Borgs, Stephanie Pamela. "Determining the Effects of Force Intensity, Postural and Force Direction Constraints on Off-Axis Force Production during Static Unilateral Pushing and Pulling Manual Exertions". Thesis, 2013. http://hdl.handle.net/10012/7780.

Texto completo
Resumen
Proactive ergonomics is generally considered to be a more efficient and cost-effective way of designing working environments than reactive ergonomics. It often requires preemptively selecting working postures and forces to reduce potential injury risk. One major issue with proactive ergonomic design is correctly identifying the true manual forces that will be required of a worker to complete defined tasks. Typically, these forces are represented as acting in direct opposition to the forces required by a particular task. However, this is likely an oversimplification, as forces often act in directions other than the task-required direction to increase the achievable force level, enhance balance and reduce joint moments, depending on specific experimental conditions. This study aims to quantify these off-axis forces as they change with different required on-axis force intensities. This thesis evaluated the effects of force intensity on the presence of off-axis forces across four conditions, which included free and constrained postures, with and without off-axis force. Eighteen female subjects performed static, unilateral, manual pushing and pulling exertions while seated and were limited to force contributions from the right upper extremity. Hand forces and locations of bony landmarks were collected from each subject, and force intensity consisted of both maximal and submaximal levels (5% to 50% of the maximum producible on-axis force in increments of 5%). All principal-direction forces were scaled to the on-axis force level and anatomically relevant joint moments were scaled to the maximum-capacity joint moment. The main objective of this study was to analyze off-axis force production as force intensity was increased under various constraint conditions. The highest maximum on-axis force was in the fully free condition (off-axis force allowed and posture unconstrained), and as conditions became more constrained for both pushing and pulling exertions, maximum on-axis force production decreased (p < 0.0001). For submaximal exertions in the free posture, participants used off-axis forces to target the shoulder flexion-extension moment by pushing increasingly upwards (p = 0.0122) and to the left by 5.6% on-axis (p = 0.0025), and by pulling 12.6% on-axis downward (p < 0.0001) and 4.7% on-axis rightward (p = 0.0024), compared to when off-axis force was not allowed. When comparing the free to the constrained posture while allowing off-axis force, participants pushed downwards instead of upwards by a difference of 12.9% on-axis (p = 0.0002) and pulled less downward (becoming slightly upward) by an increasing difference (p = 0.0002) and from decreasing to increasing rightward (p = 0.0006). These changes in off-axis force showed a unifying strategy of using less shoulder flexion-extension strength by targeting wrist and elbow moments for pushing and pulling exertions. In the constrained posture, allowing versus not allowing off-axis force resulted in more internal elbow flexion moment (p = 0.0003) during pushing, and less internal shoulder flexion (p = 0.0092), more internal shoulder adduction (p = 0.0252), more to less internal elbow supination (p = 0.0415), and increasingly less internal wrist flexion (p = 0.0296) moments during pulling, which verified previously observed strategies. Finally, for both maximal and submaximal exertions, pulling was more sensitive to changes in off-axis forces than pushing, which was more sensitive to postural flexibility.
In conclusion, the underlying principles as to how and why off-axis forces change provide valuable knowledge to ergonomists so that they can more accurately predict force production in workplace design, ultimately reducing the potential for injury.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

NIKOLIC, Durica. "A General Framework for Constraint-Based Static Analyses of Java Bytecode Programs". Doctoral thesis, 2013. http://hdl.handle.net/11562/546351.

Texto completo
Resumen
This thesis introduces a generic, parameterized framework for the static analysis of Java bytecode programs, based on constraint generation and solving. Within the framework it is possible to handle both the exceptional flows inside the analyzed programs and the side-effects induced by the execution of methods that may modify the memory. The framework is generic in the sense that different instantiations of its parameters give rise to different static analyses able to capture various memory-related properties of the program variables at each program point. The static analyses defined by the framework are based on abstract interpretation, hence the properties of interest are represented by abstract domains. The framework can be used to define both static analyses that produce "possible" or "may" approximations and those of the "definite" or "must" kind. In the former case, the result of such analyses is an over-approximation of what might be true at a certain program point, while in the latter case the result represents an under-approximation of the actual situation. This thesis provides a set of conditions that the different instantiations of the framework's parameters must satisfy in order for the static analyses defined within the framework to be sound. When the instantiated parameters satisfy these conditions, the framework guarantees the soundness of the analysis corresponding to that instantiation. The advantage of this approach is that the designer of a new static analysis only has to show that the parameters he or she instantiated satisfy the criteria specified by the framework. In this way the proof of soundness of the complete analysis is simplified. This is a very important feature of the present work. The thesis introduces two new static analyses of memory-related properties: the Possible Reachability Analysis Between Program Variables and the Definite Expression Aliasing Analysis. The former is an example of a "possible" analysis and determines, for each program point p, the ordered pairs (v, w) of variables available at that point such that v might reach w at p, i.e., such that starting from the variable v it is possible to follow a chain of memory locations leading to the object bound to the variable w. The latter analysis is an example of a "definite" analysis and determines, for each program point p and each variable v available at that point, a set of expressions whose value is always equal to the value that the variable v may have at p, for every possible execution. Both analyses have been formalized and proved sound thanks to the theoretical results of the framework introduced in this thesis. Moreover, both analyses have been implemented inside the static analyzer for Java and Android called Julia (www.juliasoft.com). Experiments performed on real programs show that the precision of Julia's main tools (the nullness and termination tools) improved with respect to the previous versions of Julia, in which the new analyses were not present.
The present thesis introduces a generic parameterized framework for static analysis of Java bytecode programs, based on constraint generation and solving. This framework is able to deal with the exceptional flows inside the program and the side-effects induced by calls to non-pure methods. It is generic in the sense that different instantiations of its parameters give rise to different static analyses which might capture complex memory-related properties at each program point. Different properties of interest are represented as abstract domains, and therefore the static analyses defined inside the framework are abstract interpretation-based. The framework can be used to generate possible or may approximations of the property of interest, as well as definite or must approximations of that property. In the former case, the result of the static analysis is an over-approximation of what might be true at a given program point; in the latter, it is an under-approximation. This thesis provides a set of conditions that different instantiations of the framework's parameters must satisfy in order to have a sound static analysis. When these conditions are satisfied by a parameter instantiation, the framework guarantees that the corresponding static analysis is sound. This means that the designer of a novel static analysis only has to show that the parameters he or she instantiated actually satisfy the conditions provided by the framework. In this way the framework simplifies the proofs of soundness of the static analysis: instead of showing that the overall analysis is sound, it is enough to show that the provided instantiation describing the actual static analysis satisfies the conditions mentioned above. This is a very important feature of the present approach. The thesis then introduces two novel static analyses dealing with memory-related properties: the Possible Reachability Analysis Between Program Variables and the Definite Expression Aliasing Analysis. The former analysis is an example of a possible analysis which determines, for each program point p, the ordered pairs (v, w) of variables available at p such that v might reach w at p, i.e., such that starting from v it is possible to follow a path of memory locations that leads to the object bound to w. The latter analysis is an example of a definite analysis, and it determines, for each program point p and each variable v available at that point, a set of expressions which are always aliased to v at p. Both analyses have been formalized and proved sound by using the theoretical results of the framework. These analyses have also been implemented inside the Julia tool (www.juliasoft.com), which is a static analyzer for Java and Android. Experimental evaluation of these analyses on real-life benchmarks shows how the precision of Julia's principal checkers (the nullness and termination checkers) increased compared to the previous version of Julia, where these two analyses were not implemented. Moreover, this experimental evaluation showed that the presence of the reachability analysis actually decreased the total run-time of Julia. On the other hand, the aliasing analysis takes more time, but the number of possible warnings produced by the principal checkers drastically decreased.
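A drastically simplified illustration of a constraint-based "possible reachability" analysis, iterating set constraints to a fixpoint over a toy straight-line program, is sketched below; it conveys only the flavour of such an analysis, not the thesis framework:

```python
# Toy may-reachability analysis: reach[v] over-approximates the variables whose
# bound object v may reach, computed as the least fixpoint of simple set constraints.
program = [
    ("new", "a"),            # a = new Object()
    ("new", "b"),            # b = new Object()
    ("store", "a", "b"),     # a.f = b  ->  a may reach whatever b reaches
    ("copy", "c", "a"),      # c = a    ->  c may reach whatever a reaches
]

variables = {"a", "b", "c"}
reach = {v: {v} for v in variables}    # every variable reaches itself

changed = True
while changed:                          # iterate the constraints to a fixpoint
    changed = False
    for op in program:
        if op[0] in ("store", "copy"):  # both induce reach(y) subset of reach(x)
            _, x, y = op
            new = reach[x] | reach[y]
            if new != reach[x]:
                reach[x], changed = new, True

for v in sorted(variables):
    print(v, "may reach", sorted(reach[v]))
```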
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Rahul, R. "Low delay file transmissions over power constrained quasi-static fading channels". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5896.

Texto completo
Resumen
The ubiquitous deployment of battery-operated wireless devices has resulted in the need for efficient low-latency power allocation schemes. A common phenomenon in wireless transmission systems is congestion, where the transmitter backlog grows due to restrictions in channel usage on a resource-constrained shared access medium. In this research work, we aim to achieve low communication delay for wireless downlink file transmissions operating on power-constrained quasi-static fading channels, using state-dependent transmission rate control and admission of file transmission requests. We employ a Markov queueing model to formulate the low-delay objective for exponentially distributed file sizes as a constrained average queue length minimization problem. The corresponding primal problem is known to be expressible as a linear program in occupation measures, and therefore strong duality holds. In our work, we show the primal feasibility of the dual optimal policy with respect to the average throughput and power constraints, which is proved under the assumption that the optimal average power and throughput are continuous with respect to the Lagrange dual variables at the optimal point. The dual problem is simplified to an iterative optimization using Dinkelbach's fractional programming method and solved using gradient analysis techniques to analytically derive the ON-OFF threshold characteristics of the admission policy and the recursive structure of the transmission rate policy. We first apply our solution method to a wireless transmission system using the M/M/1 queueing model. Our objective is to minimize the average queue length subject to an upper bound on average transmission power and a lower bound on average admission rate. This constrained average queue length minimization problem is solved using the Lagrange dual method. We substitute the individual stationary probabilities in the Lagrange dual function using the product-form distribution expressed in terms of the stationary probability of the maximum queue length. The resulting objective function then corresponds to a fractional minimization problem which is solved using Dinkelbach's method. We analytically derive the ON-OFF threshold characteristic of the optimal admission rates and the recursive structure of the optimal transmission rates. We illustrate the results of our algorithm for different values of throughput and power requirements. We also demonstrate the efficiency of optimal state-dependent rate control for exponentially distributed file sizes compared to benchmark state-independent transmission schemes. We next apply the solution techniques to an energy harvesting wireless transmission system, extending the M/M/1 queueing model. The model uses energy stored in a battery as well as energy packets available from an auxiliary power supply for file transmission. We use the product-form stationary distribution to establish a correspondence between the energy harvesting system and the M/M/1 queueing system. Using the solution approach based on Dinkelbach's method, we derive similar characteristics for the optimal admission and transmission rates. We finally extend the analysis to model a cache-aided wireless transmission system operating under the assumption that the cache-hit probability is uniform for all files and queue length states. The system is modeled as a quasi-one-dimensional Markov chain.
The stationary probabilities in the Lagrange dual function are expressed in terms of the stationary probability of the empty buffer state using products of matrices. The solution methods and insights developed from the previous models simplify the analysis of this problem, and we analytically characterize the structure of the optimal admission and transmission rates. The applicability of our solution methodology to these three models of transmission systems illustrates its simplicity and versatility.
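Dinkelbach's iteration mentioned above can be sketched generically as follows; the objective is a toy ratio of my own, not the queueing objective of the thesis:

```python
# Dinkelbach's fractional-programming iteration: to minimize N(x)/D(x) with
# D(x) > 0, repeatedly solve the parametric problem
#   F(lam) = min_x [ N(x) - lam * D(x) ]   and update   lam = N(x*) / D(x*)
# until F(lam) is (approximately) zero.
from scipy.optimize import minimize_scalar

def N(x):        # toy numerator
    return (x - 3.0) ** 2 + 2.0

def D(x):        # toy denominator, positive on the search interval
    return x + 1.0

lam, tol = 0.0, 1e-8
for it in range(50):
    res = minimize_scalar(lambda x: N(x) - lam * D(x),
                          bounds=(0.0, 10.0), method="bounded")
    x_star = res.x
    F = N(x_star) - lam * D(x_star)
    lam = N(x_star) / D(x_star)        # next Dinkelbach parameter
    if abs(F) < tol:
        break
print(f"optimal ratio ~ {lam:.6f} at x ~ {x_star:.4f} after {it + 1} iterations")
```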
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Dunn, Brian P. "Delay constrained multimedia communications comparing source-channel approaches for quasi-static fading channels /". 2005. http://etd.nd.edu/ETD-db/theses/available/etd-07222005-170238/.

Texto completo
Resumen
Thesis (M.S.E.E.)--University of Notre Dame, 2005.
Thesis directed by J. Nicholas Laneman for the Department of Electrical Engineering. "August 2005." Includes bibliographical references (leaves 69-71).
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

"Impacts of Base-Case and Post-Contingency Constraint Relaxations on Static and Dynamic Operational Security". Doctoral diss., 2016. http://hdl.handle.net/2286/R.I.38387.

Texto completo
Resumen
Constraint relaxation by definition means that certain security, operational, or financial constraints are allowed to be violated in the energy market model for a predetermined penalty price. System operators utilize this mechanism in an effort to impose a price cap on shadow prices throughout the market. In addition, constraint relaxations can serve as corrective approximations that help in reducing the occurrence of infeasible or extreme solutions in the day-ahead markets. This work aims to capture the impact constraint relaxations have on system operational security. Moreover, this analysis also provides a better understanding of the correlation between DC market models and AC real-time systems and analyzes how relaxations in market models propagate to real-time systems. This information can be used not only to assess the criticality of constraint relaxations, but also as a basis for determining penalty prices more accurately. The practice of constraint relaxation was replicated in this work using a test case and a real-life large-scale system, while capturing both energy market aspects and AC real-time system performance. The system performance investigation included static and dynamic security analysis for base-case and post-contingency operating conditions. PJM peak-hour loads were dynamically modeled in order to capture delayed voltage recovery and sustained depressed voltage profiles resulting from the reactive power deficiency caused by constraint relaxations. Moreover, the impacts of constraint relaxations on operational system security were investigated when risk-based penalty prices are used. Transmission lines in the PJM system were categorized according to their risk index and each category was assigned a different penalty price accordingly, in order to avoid real-time overloads on high-risk lines. This work also extends the investigation of constraint relaxations to post-contingency relaxations, where emergency limits are allowed to be relaxed in energy market models. Various scenarios were investigated to capture and compare the impacts of base-case and post-contingency relaxations on real-time system performance, including the presence of both relaxations simultaneously. The effect of penalty prices on the number and magnitude of relaxations was investigated as well.
Dissertation/Thesis
Doctoral Dissertation Engineering 2016
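The relaxation mechanism itself can be illustrated with a very small dispatch LP in which a line limit may be violated at a penalty price; the numbers below are invented for illustration, not PJM data or the dissertation's market model:

```python
# Tiny dispatch LP where a line-limit constraint can be relaxed at a penalty
# price, which effectively caps the shadow price of that constraint.
from scipy.optimize import linprog

load = 100.0                   # MW to be served
line_limit = 60.0              # MW limit on the line from the cheap generator
c_cheap, c_exp = 20.0, 120.0   # $/MWh offers of the two generators
penalty = 80.0                 # $/MWh penalty price for relaxing the line limit

# variables: [g_cheap, g_expensive, relaxation]
c = [c_cheap, c_exp, penalty]
A_eq = [[1.0, 1.0, 0.0]]       # power balance: g_cheap + g_expensive = load
b_eq = [load]
A_ub = [[1.0, 0.0, -1.0]]      # flow limit: g_cheap - relaxation <= line_limit
b_ub = [line_limit]
bounds = [(0, None), (0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
g1, g2, relax = res.x
print(f"cheap gen {g1:.1f} MW, expensive gen {g2:.1f} MW, limit relaxed by {relax:.1f} MW")
# Because the penalty (80 $/MWh) is below the expensive offer (120 $/MWh), the
# market model relaxes the limit instead of dispatching the expensive unit;
# the real-time security consequences of such choices are what the work studies.
```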
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Tang, Yi y 唐毅. "Solving Static Bike Rebalancing Problem by a Partial Demand Fulfilling Capacity Constrained Clustering Algorithm". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/u5xh3w.

Texto completo
Resumen
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
106 (ROC academic year)
Nowadays, bike sharing systems are widely used in major cities around the world. One of the major challenges of bike sharing systems is to rebalance the number of bikes at each station so that user demands can be satisfied as much as possible. To execute rebalancing operations, operators usually have a fleet of vehicles that is routed through the stations. When rebalancing operations are executed at nighttime, user demands are usually small enough to be ignored, and this is regarded as the static bike rebalancing problem. In this paper, we propose a Partial Demand Fulfilling Capacity Constrained Clustering (PDF3C) algorithm to reduce the problem scale of the static bike rebalancing problem. The proposed PDF3C algorithm can discover outlier stations and group the remaining stations into several clusters, where stations with large demands can be included in different clusters. Finally, the clustering result is applied to multi-vehicle route optimization. Experimental results verified that our PDF3C algorithm outperforms existing methods.
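A loose sketch of capacity-constrained clustering with partial demand fulfilment is given below; it is a simplification for illustration with invented data and does not reproduce the PDF3C algorithm or its outlier detection:

```python
# Stations are greedily grouped around seeds until a vehicle-capacity budget is
# reached; a station whose demand does not fit entirely is split, so the
# remainder can be assigned to another cluster.
import math

# (x, y, rebalancing demand in bikes); positive = bikes to deliver
stations = {"s1": (0, 0, 8), "s2": (1, 0, 5), "s3": (5, 5, 12),
            "s4": (6, 5, 4), "s5": (0, 1, 9)}
capacity = 15                      # bikes one vehicle can carry per tour

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

remaining = {s: d for s, (_, _, d) in stations.items()}
clusters = []
while any(d > 0 for d in remaining.values()):
    # seed: the unserved station with the largest remaining demand
    seed = max(remaining, key=remaining.get)
    cluster, room = {}, capacity
    # add stations nearest to the seed while capacity remains; split if needed
    for s in sorted(remaining, key=lambda s: dist(stations[s], stations[seed])):
        if room == 0 or remaining[s] == 0:
            continue
        take = min(remaining[s], room)
        cluster[s] = take
        remaining[s] -= take
        room -= take
    clusters.append(cluster)

for i, c in enumerate(clusters, 1):
    print(f"cluster {i} (vehicle tour): {c}")
```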
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Kim, Gunsik. "Clinton and Bush administrations' nuclear non-proliferation policies on North Korea challenges and implications of systemic and domestic constraints /". 2005. http://catalog.hathitrust.org/api/volumes/oclc/165148098.html.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Gray, Jason. "ARE MEASUREMENTS OF HIP EXTENSION AND ANTERIOR PELVIC TILT TAKEN FROM STATIC PHOTOGRAPGHS DURING A CONSTRAINED FORWARD LUNGE TEST VALID AND RELIABLE IN HEALTHY ADULT RUNNERS?" 2011. http://hdl.handle.net/10222/14240.

Texto completo
Resumen
The aim of this study was to determine the concurrent validity, test-retest intra-rater reliability, and test-retest inter-rater reliability of photographic measures of anterior pelvic tilt range of motion (APT ROM) and hip extension range of motion (HE ROM) during a constrained forward lunge test (CFLT) in healthy adult runners. Measurements of start, end, and range of motion (ROM) variables for APT and HE motion were taken from an Optotrak kinematic measurement system and, using a protractor, from printed photographs extracted from digital video footage. A total of 13 healthy adult male and female recreational runners participated in the study. Measures of APT ROM and HE ROM were found to be valid compared to the Optotrak measures, with intraclass correlation coefficients (ICCs) of 0.94 and 0.99 respectively, and limits of agreement of -1.42 ± 1.99 degrees and 0.41 ± 2.13 degrees respectively. APT ROM and HE ROM demonstrated high between-day intra-rater reliability, with ICCs ranging from 0.75 to 0.91, and within-day inter-rater reliability, with ICCs ranging from 0.86 to 0.90. For between-day intra-rater measurements, smallest detectable differences (SDDs) ranged from 4.12 to 5.59 degrees for APT ROM and from 9.08 to 11.08 degrees for HE ROM. The present study suggests that photographic measurements of APT ROM and HE ROM during a CFLT are valid and reliable in healthy adult runners; however, these measurements display a low sensitivity with respect to detecting changes between trials.
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

"The institutional constraints of turnaround in East Asia". 2001. http://library.cuhk.edu.hk/record=b5890752.

Texto completo
Resumen
Chan, Eunice Shan.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 108-119).
Abstracts in English and Chinese.
ABSTRACT --- p.i
CHINESE ABSTRACT --- p.ii
ACKNOWLEDGMENTS --- p.iii
TABLE OF CONTENTS --- p.iv
LIST OF TABLES --- p.v
LIST OF FIGURES --- p.vi
CHAPTERS
Chapter 1. --- INTRODUCTION --- p.1
Chapter 2. --- LITERATURE REVIEW --- p.5
Definitions of Turnaround --- p.5
Causes of Firm Decline --- p.6
Severity of the Situation --- p.8
A Western Perspective on Turnaround Responses --- p.10
Turnaround Success --- p.20
Turnaround in the Non-U. S. Contexts --- p.21
Chapter 3. --- THEORETICAL FRAMEWORK AND HYPOTHESES --- p.23
Organizing Framework --- p.23
Institutions and Their Impact on Turnaround --- p.26
Institutional Environment in East Asia and the West --- p.32
Hypotheses --- p.44
Chapter 4. --- METHODOLOGY --- p.54
Research Design --- p.54
Quantitative Methods --- p.55
Qualitative Methods --- p.62
Chapter 5. --- RESULTS --- p.65
Quantitative Results --- p.65
Qualitative Evidence --- p.79
Chapter 6. --- DISCUSSION AND CONCLUSION --- p.97
Implications --- p.98
Limitations and Future Research --- p.102
Conclusion --- p.104
REFERENCES --- p.108
APPENDIX 1: INTERVIEW PROTOCOL --- p.120
APPENDIX 2: ANALYSIS OF FIRMS WITH NON-ETHNIC CHINESE PRINCIPALS REMOVED --- p.121
APPENDIX 3: ANALYSIS OF FIRMS WITH LOW Z-SCORES --- p.123
APPENDIX 4: ANALYSIS OF FIRM SIZE --- p.126
Los estilos APA, Harvard, Vancouver, ISO, etc.