To view the other types of publications on this topic, follow this link: Programming functions.

Dissertations on the topic "Programming functions"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the top 50 dissertations for research on the topic "Programming functions".

Next to every work in the bibliography an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide variety of disciplines and compile a correctly formatted bibliography.

1

Christiansen, Jan [author]. "Investigating Minimally Strict Functions in Functional Programming / Jan Christiansen". Kiel : Universitätsbibliothek Kiel, 2012. http://d-nb.info/1024079805/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Sharifi, Mokhtarian Faranak. "Mathematical programming with LFS functions". Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=56762.

Annotation:
Differentiable functions with a locally flat surface (LFS) have been recently introduced and studied in convex optimization. Here we extend this notion in two directions: to non-smooth convex and smooth generalized convex functions. An important feature of these functions is that the Karush-Kuhn-Tucker condition is both necessary and sufficient for optimality. Then we use the properties of linear LFS functions and basic point-to-set topology to study the "inverse" programming problem. In this problem, a feasible, but nonoptimal, point is made optimal by stable perturbations of the parameters. The results are applied to a case study in optimal production planning.
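For reference, the Karush-Kuhn-Tucker condition mentioned in this abstract has the following standard form for a differentiable program with inequality constraints (a generic statement, not specific to LFS functions; sufficiency additionally requires convexity and a constraint qualification):

```latex
\begin{aligned}
&\text{Problem:}\quad \min_{x}\; f(x) \quad\text{s.t.}\quad g_i(x)\le 0,\; i=1,\dots,m,\\
&\text{KKT:}\quad \nabla f(x^{\star}) + \sum_{i=1}^{m}\lambda_i\,\nabla g_i(x^{\star}) = 0,\qquad
\lambda_i \ge 0,\qquad \lambda_i\, g_i(x^{\star}) = 0 .
\end{aligned}
```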
3

Trujillo-Cortez, Refugio. "LFS functions in stable bilevel programming". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ37171.pdf.

4

Ahluwalia, Manu. "Co-evolving functions in genetic programming". Thesis, University of the West of England, Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322427.

5

Stark, Ian David Bede. "Names and higher-order functions". Thesis, University of Cambridge, 1994. https://www.repository.cam.ac.uk/handle/1810/251879.

Annotation:
Many functional programming languages rely on the elimination of 'impure' features: assignment to variables, exceptions and even input/output. But some of these are genuinely useful, and it is of real interest to establish how they can be reintroduced in a controlled way. This dissertation looks in detail at one example of this: the addition to a functional language of dynamically generated names. Names are created fresh, they can be compared with each other and passed around, but that is all. As a very basic example of state, they capture the graduation between private and public, local and global, by their interaction with higher-order functions. The vehicle for this study is the nu-calculus, an extension of the simply-typed lambda-calculus. The nu-calculus is equivalent to a certain fragment of Standard ML, omitting side-effects, exceptions, datatypes and recursion. Even without all these features, the interaction of name creation with higher-order functions can be complex and subtle. Various operational and denotational methods for reasoning about the nu-calculus are developed. These include a computational metalanguage in the style of Moggi, which distinguishes in the type system between values and computations. This leads to categorical models that use a strong monad, and examples are devised based on functor categories. The idea of logical relations is used to derive powerful reasoning methods that capture some of the distinction between private and public names. These techniques are shown to be complete for establishing contextual equivalence between first-order expressions; they are also used to construct a correspondingly abstract categorical model. All the work with the nu-calculus extends cleanly to Reduced ML, a larger language that introduces integer references: mutable storage cells that are dynamically allocated. It turns out that the step up is quite simple, and both the computational metalanguage and the sample categorical models can be reused.
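As a loose illustration of the abstract's central idea, names that support only fresh creation, equality testing, and being passed around, here is a small Python sketch of how a higher-order function can keep a name private (the names `NameSupply` and `make_recognizer` are invented for this example, not taken from the nu-calculus):

```python
import itertools

class NameSupply:
    """Generates fresh, opaque names; a name supports only equality
    tests and being passed around, as in the nu-calculus."""
    def __init__(self):
        self._counter = itertools.count()

    def fresh(self):
        # Every call yields a name never produced before.
        return ("name", next(self._counter))

def make_recognizer(supply):
    """A higher-order function holding a private name: the returned
    predicate recognizes only its own secret name, which never escapes."""
    secret = supply.fresh()
    return lambda n: n == secret

supply = NameSupply()
is_secret = make_recognizer(supply)
```

No caller can ever make `is_secret` return `True`, because the secret name cannot be reconstructed from outside; this is the private/public graduation the abstract describes.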
6

Shapiro, David. "Compiling Evaluable Functions in the Godel Programming Language". PDXScholar, 1996. https://pdxscholar.library.pdx.edu/open_access_etds/5101.

Annotation:
We present an extension of the Godel logic programming language code generator which compiles user-defined functions. These functions may be used as arguments in predicate or goal clauses. They are defined in extended Godel as rewrite rules. A translation scheme is introduced to convert function definitions into predicate clauses for compilation. This translation scheme and the compilation of functional arguments both employ leftmost-innermost narrowing. As function declarations are indistinguishable from constructor declarations, a function detection method is implemented. The ultimate goal of this research is the implementation of extended Godel using needed narrowing. The work presented here is an intermediate step in creating a functional-logic language which expands the expressiveness of logic programming and streamlines its execution.
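The translation scheme itself is not spelled out in the abstract; the following Python sketch shows how such a scheme might flatten a rewrite rule into a predicate clause with an extra result argument, evaluating nested calls in leftmost-innermost order (all names and the term representation here are invented for illustration, not taken from the thesis):

```python
import itertools

def flatten(term, counter, goals):
    """Flatten a nested functional term into relational goals,
    innermost calls first (leftmost-innermost order); return the
    variable or constant naming the term's value."""
    if isinstance(term, str):              # a variable or constant
        return term
    functor, *args = term
    flat_args = [flatten(a, counter, goals) for a in args]
    result = f"R{next(counter)}"           # fresh result variable
    goals.append(f"{functor}({', '.join(flat_args + [result])})")
    return result

def rule_to_clause(fname, params, body):
    """Translate a rewrite rule  fname(params) -> body  into a
    predicate clause with an extra result argument."""
    counter, goals = itertools.count(1), []
    result = flatten(body, counter, goals)
    head = f"{fname}({', '.join(list(params) + [result])})"
    return f"{head} :- {', '.join(goals)}." if goals else f"{head}."
```

For example, the rule `square(X) -> times(X, X)` becomes the clause `square(X, R1) :- times(X, X, R1).`, and nested calls produce one goal per inner function application.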
7

Edwards, Teresa Dawn. "The box method for minimizing strictly convex functions over convex sets". Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/30690.

8

Chen, Jein-Shan. "Merit functions and nonsmooth functions for the second-order cone complementarity problem /". Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/5782.

9

Ferris, Michael Charles. "Weak sharp minima and penalty functions in mathematical programming". Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292969.

10

Schanzer, Emmanuel Tanenbaum. "Algebraic Functions, Computer Programming, and the Challenge of Transfer". Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:16461037.

Annotation:
Students' struggles with algebra are well documented. Prior to the introduction of functions, mathematics is typically focused on applying a set of arithmetic operations to compute an answer. The introduction of functions, however, marks the point at which mathematics begins to focus on building up abstractions as a way to solve complex problems. A common refrain about word problems is that “the equations are easy to solve - the hard part is setting them up!” A student of algebra is asked to identify functional relationships in the world around them, to set up the equations that describe a system, and to reason about these relationships. Functions, in essence, mark the shift from computing answers to solving problems. Researchers have called for this shift to accompany a change in pedagogy, and have looked to computer programming and game design as a means to combine mathematical rigor with creative inquiry. Many studies have explored the impact of teaching students to program, with the goal of having them transfer what they have learned back into traditional mathematics. While some of these studies have shown positive outcomes for concepts like geometry and fractions, transfer between programming and algebra has remained elusive. The literature identifies a number of conditions that must be met to facilitate transfer, including careful attention to content, software, and pedagogy. This dissertation is a feasibility study of Bootstrap, a curricular intervention based on best practices from the transfer and math-education literature. Bootstrap teaches students to build a video game by applying algebraic concepts and a problem-solving technique in the programming domain, with the goal of transferring what they learn back into traditional algebra tasks. The study employed a mixed-methods analysis of six Bootstrap classes taught by math and computer science teachers, pairing pre- and post-tests with classroom observations and teacher interviews. Despite the use of a CS-derived problem-solving technique, a programming language, and a series of programming challenges, students were able to transfer what they learned into traditional algebra tasks, and math teachers were found to be more successful at facilitating this transfer than their CS counterparts.
Education Policy, Leadership, and Instructional Practice
11

How, Tia Wah King Sing. "Fault coupling in finite functions". Thesis, London South Bank University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261314.

12

Sibson, Keith. "Programming language abstractions for the global network". Thesis, University of Glasgow, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368587.

13

Jones, Julia E. "Microcomputer programming package for the assessment of multiattribute value functions". Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/51893.

Annotation:
Research into multiattribute utility theory far outweighs current attempts to apply its findings, while the need for usable decision techniques continues to increase. Current decision-maker and analyst procedures, involving decision-making sessions and numerous manual calculations, are considered overly time-consuming for all but the most important of complex decisions. The purpose of this thesis was to design and develop a microcomputer package utilizing recent improvements in decision theory to increase the efficiency of the decision-making process. Algorithms for independence testing and parameter estimation have been developed for both continuous attributes and discrete attributes. Two separate packages, an additive value function package (DECISION) and a SMART technique package (SMART), are developed based on these algorithms, and their validity is tested by means of a case study. Both packages are written for use on a standard (256K) IBM PC microcomputer.
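An additive value function of the kind underlying such a package has the standard form V(x) = Σᵢ wᵢ·vᵢ(xᵢ); a minimal Python sketch, assuming the weights sum to one and each single-attribute value function is scaled to [0, 1] (the two example attributes and their value functions are invented):

```python
def additive_value(weights, value_fns, alternative):
    """Additive multiattribute value: V(x) = sum_i w_i * v_i(x_i),
    assuming weights sum to 1 and each v_i is scaled to [0, 1]."""
    return sum(w * v(x) for w, v, x in zip(weights, value_fns, alternative))

# Invented two-attribute example: price in dollars, quality on 0..10.
weights = [0.6, 0.4]
value_fns = [
    lambda price: max(0.0, 1.0 - price / 1000.0),   # cheaper is better
    lambda quality: quality / 10.0,                 # higher is better
]
score = additive_value(weights, value_fns, [400, 8])
```

The SMART technique differs mainly in how the weights and single-attribute value functions are elicited from the decision maker, not in this aggregation step.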
Master of Science
14

Hamdan, Mohammad M. "A combinational framework for parallel programming using algorithmic skeletons". Thesis, Heriot-Watt University, 2000. http://hdl.handle.net/10399/567.

15

Daniels, Anthony Charles. "A semantics for functions and behaviours". Thesis, University of Kent, 1999. https://kar.kent.ac.uk/21730/.

Annotation:
The functional animation language Fran allows animations to be programmed in a novel way. Fran provides an abstract datatype of "behaviours" that represent time-varying values such as the position of moving objects, together with a simple set of operators for constructing behaviours. More generally, this approach has potential for other kinds of real-time systems that consist of interactive components that evolve over time. We introduce a small functional language, CONTROL, which has behaviours and operators that are similar to those in Fran. Our language improves on Fran in certain key areas, in particular, by eliminating start times and distinguishing between recursive functions and recursive behaviours. Our main contribution is to provide a complete formal semantics for CONTROL, which Fran lacks. This semantics provides a precise description of the language and can be used as the basis for proving that programs are correct. The semantics is defined under the assumption that real number computations and operations on behaviours are exact. Behaviours are modelled as functions of continuous time, and this approach is combined with the standard approach to the semantics of functional languages. This combination requires some novel techniques, particularly for handling recursively defined behaviours.
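The semantic model described here, behaviours as functions of continuous time with ordinary operators lifted pointwise, can be sketched in a few lines of Python (a simplification for illustration; real Fran/CONTROL behaviours also handle reactivity and events):

```python
import math

def lift2(f, b1, b2):
    """Lift an ordinary binary function to behaviours, where a
    behaviour is modelled as a function of continuous time."""
    return lambda t: f(b1(t), b2(t))

x = math.cos   # x-coordinate of a point moving around the unit circle
y = math.sin   # y-coordinate of the same point
radius_sq = lift2(lambda a, b: a * a + b * b, x, y)
```

Sampling `radius_sq` at any time t yields 1.0, since the point stays on the unit circle; the lifted operator never mentions time explicitly.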
16

Lewis, Ian. "PrologPF : parallel logic and functions on the Delphi machine". Thesis, University of Cambridge, 1998. https://www.repository.cam.ac.uk/handle/1810/221792.

Annotation:
PrologPF is a parallelising compiler targeting a distributed system of general purpose workstations connected by a relatively low performance network. The source language extends standard Prolog with the integration of higher-order functions. The execution of a compiled PrologPF program proceeds in a similar manner to standard Prolog, but uses oracles in one of two modes. An oracle represents the sequence of clauses used to reach a given point in the problem search tree, and the same PrologPF executable can be used to build oracles, or follow oracles previously generated. The parallelisation strategy used by PrologPF proceeds in two phases, which this research shows can be interleaved. An initial phase searches the problem tree to a limited depth, recording the discovered incomplete paths. In the second phase these paths are allocated to the available processors in the network. Each processor follows its assigned paths and fully searches the referenced subtree, sending solutions back to a control processor. This research investigates the use of the technique with a one-time partitioning of the problem and no further scheduling communication, and with the recursive application of the partitioning technique to effect dynamic work reassignment. For a problem requiring all solutions to be found, execution completes when all the distributed processors have completed the search of their assigned subtrees. If one solution is required, the execution of all the path processors is terminated when the control processor receives the first solution. The presence of the extra-logical Prolog predicate cut in the user program conflicts with the use of oracles to represent valid open subtrees. PrologPF promotes the use of higher-order functional programming as an alternative to the use of cut. The combined language shows that functional support can be added as a consistent extension to standard Prolog.
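A toy Python sketch of the two-phase oracle strategy the abstract describes: a depth-limited first phase records incomplete paths (oracles), and in the second phase each worker replays its oracle and fully searches the referenced subtree (the list-based tree representation is invented for illustration):

```python
def subtree(tree, oracle):
    """Replay an oracle: the sequence of clause choices from the root."""
    for choice in oracle:
        tree = tree[choice]
    return tree

def paths_to_depth(tree, depth):
    """Phase 1: search to a limited depth, recording the incomplete
    paths (oracles) that reach the frontier."""
    if not isinstance(tree, list) or depth == 0:
        return [()]
    return [(i,) + rest
            for i, child in enumerate(tree)
            for rest in paths_to_depth(child, depth - 1)]

def solutions(tree):
    """Fully search a subtree, collecting the solutions at its leaves."""
    if not isinstance(tree, list):
        return [tree]
    return [s for child in tree for s in solutions(child)]

# Toy search tree: internal nodes are lists of alternative clauses,
# leaves are solutions.
tree = [[["a"], ["b", "c"]], ["d"]]
oracles = paths_to_depth(tree, 1)
# Phase 2: each worker replays its oracle and searches that subtree.
found = [s for o in oracles for s in solutions(subtree(tree, o))]
```

In PrologPF the second phase runs on distributed workstations, each assigned a share of the oracles; here both phases run sequentially just to show the record/replay structure.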
17

Lin, Chin-Yee. "Interior point methods for convex optimization". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/15044.

18

Tanksley, Latriece Y. "Interior point methods and kernel functions of a linear programming problem". Click here to access thesis, 2009. http://www.georgiasouthern.edu/etd/archive/spring2009/latriece_y_tanksley/tanksley_latriece_y_200901_ms.pdf.

Annotation:
Thesis (M.S.)--Georgia Southern University, 2009.
"A thesis submitted to the Graduate Faculty of Georgia Southern University in partial fulfillment of the requirements for the degree Master of Science." Directed by Goran Lesaja. ETD. Includes bibliographical references (p. 76) and appendices.
19

Vorvick, Janet. "Evaluable Functions in the Godel Programming Language: Parsing and Representing Rewrite Rules". PDXScholar, 1995. https://pdxscholar.library.pdx.edu/open_access_etds/5195.

Annotation:
The integration of a functional component into a logic language extends the expressive power of the language. One logic language which would benefit from such an extension is Godel, a prototypical language at the leading edge of the research in logic programming. We present a modification of the Godel parser which enables the parsing of evaluable functions in Godel. As the first part of an extended Godel, the parser produces output similar to the output from the original Godel parser, ensuring that Godel modules are properly handled by the extended-Godel parser. Parser output is structured to simplify, as much as possible, the future task of creating an extended compiler implementing evaluation of functions using narrowing. We describe the structure of the original Godel parser, the objects produced by it, the modifications made for the implementation of the extended Godel and the motivation for those modifications. The ultimate goal of this research is production of a functional component for Godel which evaluates user-defined functions with needed narrowing, a strategy which is sound, complete, and optimal for inductively sequential rewrite systems.
20

Jung, Hoon. "Optimal inventory policies for an economic order quantity models under various cost functions /". free to MU campus, to others for purchase, 2001. http://wwwlib.umi.com/cr/mo/fullcit?p3012983.

21

Marinósson, Sigurour Freyr. "Stability analysis of nonlinear systems with linear programming: a Lyapunov functions based approach /". [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=982323697.

22

Otero, Fernando E. B. "New ant colony optimisation algorithms for hierarchical classification of protein functions". Thesis, University of Kent, 2010. http://www.cs.kent.ac.uk/pubs/2010/3057.

Annotation:
Ant colony optimisation (ACO) is a metaheuristic to solve optimisation problems inspired by the foraging behaviour of ant colonies. It has been successfully applied to several types of optimisation problems, such as scheduling and routing, and more recently for the discovery of classification rules. The classification task in data mining aims at predicting the value of a given goal attribute for an example, based on the values of a set of predictor attributes for that example. Since real-world classification problems are generally described by nominal (categorical or discrete) and continuous (real-valued) attributes, classification algorithms are required to be able to cope with both nominal and continuous attributes. Current ACO classification algorithms have been designed with the limitation of discovering rules using nominal attributes describing the data. Furthermore, they also have the limitation of not coping with more complex types of classification problems e.g., hierarchical multi-label classification problems. This thesis investigates the extension of ACO classification algorithms to cope with the aforementioned limitations. Firstly, a method is proposed to extend the rule construction process of ACO classification algorithms to cope with continuous attributes directly. Four new ACO classification algorithms are presented, as well as a comparison between them and well-known classification algorithms from the literature. Secondly, an ACO classification algorithm for the hierarchical problem of protein function prediction which is a major type of bioinformatics problem addressed in this thesis is presented. Finally, three different approaches to extend ACO classification algorithms to the more complex case of hierarchical multi-label classification are described, elaborating on the ideas of the proposed hierarchical classification ACO algorithm. 
These algorithms are compared against state-of-the-art decision tree induction algorithms for hierarchical multi-label classification in the context of protein function prediction. The computational experiments cover a wide range of data sets, including challenging protein function prediction data sets with very large numbers of classes.
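Two core ACO ingredients behind the rule construction described above, probabilistic term selection and pheromone update, in a generic Python sketch (these are the standard ACO formulas, not the specific algorithms developed in the thesis):

```python
import random

def select_term(terms, pheromone, heuristic, rng):
    """Pick the next rule term with probability proportional to
    pheromone[t] * heuristic[t] (the usual ACO transition rule)."""
    weights = [pheromone[t] * heuristic[t] for t in terms]
    return rng.choices(terms, weights=weights, k=1)[0]

def evaporate_and_reinforce(pheromone, used_terms, quality, rho=0.1):
    """Evaporate pheromone everywhere, then reinforce the terms used
    by the iteration-best rule in proportion to its quality."""
    for t in pheromone:
        pheromone[t] *= (1.0 - rho)
    for t in used_terms:
        pheromone[t] += quality
```

Over iterations, terms that appear in high-quality rules accumulate pheromone and are selected more often, which is how the colony converges on good classification rules.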
23

Vandenbussche, Dieter. "Polyhedral approaches to solving nonconvex quadratic programs". Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/23385.

24

Gatica, Ricardo A. "A binary dynamic programming problem with affine transitions and reward functions : properties and algorithm". Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/32839.

25

Olivier, Hannes Friedel. "The expected runtime of the (1+1) evolutionary algorithm on almost linear functions". Virtual Press, 2006. http://liblink.bsu.edu/uhtbin/catkey/1356253.

Annotation:
This thesis extends the theoretical research done in the area of evolutionary algorithms. The (1+1) EA is a simple algorithm which allows some insight to be gained into the behaviour of these randomized search heuristics. This work shows ways to possibly improve on existing bounds. The generally good runtime of the algorithm on linear functions is also proven for classes of quadratic functions, defined by the relative size of the quadratic and the linear weights. One proof of the thesis looks at a worst-case function, which always shows worse behaviour than many other functions; this worst case is used as an upper bound for many different classes.
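A minimal Python version of the (1+1) EA under study: flip each bit of the parent independently with probability 1/n and keep the offspring if it is at least as fit, run here on OneMax, the canonical linear function (counting the one-bits):

```python
import random

def one_plus_one_ea(n, fitness, max_iters, rng):
    """(1+1) EA: flip each bit of the parent independently with
    probability 1/n; keep the offspring if it is at least as fit."""
    x = [rng.randrange(2) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx

# OneMax: the fitness of a bit string is its number of one-bits.
best, value = one_plus_one_ea(20, sum, 5000, random.Random(42))
```

On linear functions the expected optimization time is Θ(n log n), so a budget of 5000 iterations for n = 20 is far more than enough in practice.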
Department of Computer Science
26

Vargyas, Emese Tünde. "Duality for convex composed programming problems". Doctoral thesis, Universitätsbibliothek Chemnitz, 2004. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200401793.

Annotation:
The goal of this work is to present a conjugate duality treatment of composed programming as well as to give an overview of some recent developments in both scalar and multiobjective optimization. In order to do this, first we study a single-objective optimization problem, in which the objective function as well as the constraints are given by composed functions. By means of the conjugacy approach based on the perturbation theory, we provide different kinds of dual problems to it and examine the relations between the optimal objective values of the duals. Given some additional assumptions, we verify the equality between the optimal objective values of the duals and strong duality between the primal and the dual problems, respectively. Having proved the strong duality, we derive the optimality conditions for each of these duals. As special cases of the original problem, we study the duality for the classical optimization problem with inequality constraints and the optimization problem without constraints. The second part of this work is devoted to location analysis. Considering first the location model with monotonic gauges, it turns out that the same conjugate duality principle can also be used for solving this kind of problem. Replacing the monotonic gauges in the objective function by various norms, we investigate duality for several location problems. We finish our investigations with the study of composed multiobjective optimization problems. To do so, we first scalarize this problem and study the scalarized one by using the conjugacy approach developed before. The optimality conditions which we obtain in this case allow us to construct a multiobjective dual problem to the primal one. Additionally, weak and strong duality are proved. In conclusion, some special cases of the composed multiobjective optimization problem are considered. Once the general problem has been treated, we particularize the results, construct a multiobjective dual for each special case, and verify the weak and strong duality statements.
In this work, a general duality approach for the study of various optimization problems is presented by means of the so-called conjugate duality theory. To this end, a general optimization problem is first considered, in which both the objective function and the constraints are composed functions. Using conjugate duality theory, which is based on the so-called perturbation theory, three different dual problems are constructed for the primal problem, and the relations between their optimal objective values are investigated. Under suitable convexity and monotonicity assumptions, the equality of these optimal objective values and, in addition, strong duality between the primal and the corresponding dual problems are proved. In connection with strong duality, optimality conditions are derived. The results are rounded off by the consideration of two special cases, namely the classical constrained and unconstrained optimization problems, for which the duality results known from the literature are recovered. The second part of the work is devoted to duality for location problems. For this purpose, a very general location problem with a convex composed objective function in the form of a gauge is formulated, for which the corresponding duality statements are derived. As special cases, optimization problems with monotonic norms are considered. In particular, duality statements and optimality conditions can be derived for the classical Weber and minmax location problems with gauges as objective function. The last chapter generalizes the duality statements obtained in the second chapter to multiobjective optimization problems. Using suitable scalarizations, we first consider a scalar problem associated with the multiobjective optimization problem.
Based on the optimality conditions obtained in this case, we formulate the multiobjective dual problem. Furthermore, we prove weak and, under certain assumptions, strong duality. By specializing the objective functions and constraints, the classical convex multiobjective problems with inequality and set constraints are obtained. As further applications, vector-valued location problems are considered, for which we formulate corresponding dual problems.
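For reference, the conjugate duality scheme used throughout this abstract follows the standard Fenchel-Rockafellar pattern (shown here in generic form, not the thesis's specific composed-function duals):

```latex
\begin{aligned}
(P)\quad & v(P) = \inf_{x}\ \bigl[\, f(x) + g(Ax) \,\bigr],\\
(D)\quad & v(D) = \sup_{y}\ \bigl[\, -f^{*}(-A^{\top}y) - g^{*}(y) \,\bigr],\\
& f^{*}(p) = \sup_{x}\ \bigl[\, \langle p, x\rangle - f(x) \,\bigr],
\qquad v(D) \le v(P)\ \ \text{(weak duality)} .
\end{aligned}
```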
27

Arvidsson, Staffan. "Actors and higher order functions : A Comparative Study of Parallel Programming Language Support for Bioinformatics". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-242739.

Annotation:
Parallel programming can sometimes be a tedious task when dealing with problems like race conditions and synchronization. Functional programming can greatly reduce the complexity of parallelization by removing side effects and mutable variables, eliminating the need for locks and synchronization. This thesis assesses the applicability of functional programming and the actor model using the field of bioinformatics as a case study, focusing on genome assembly. Functional programming is found to provide parallelization at a high abstraction level in some cases, but in most of the program there is no way to provide parallelization without adding synchronization and non-pure functional code. The actor model facilitates parallelization of a greater part of the program but increases program complexity due to communication and synchronization between actors. Neither approach gave efficient speedup, due to the characteristics of the implemented algorithm, which proved to be memory bound. A shared-memory parallelization thus showed itself to be inefficient, and distributed implementations are needed to achieve speedup for genome assemblers.
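The "high abstraction level" functional parallelization the abstract refers to can be as simple as a parallel map, a higher-order function that hides all thread management from user code. A Python sketch (`parallel_map` and the k-mer example are invented; note also that CPython threads give no CPU-bound speedup because of the GIL, which echoes the shared-memory limitation found above):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(fn, items, workers=4):
    """A higher-order parallel map: apply a pure function to every
    item with no explicit locks or synchronization in user code."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

# Invented stand-in for a per-read computation in an assembly
# pipeline: the number of k-mers (k = 3) in each read.
kmer_counts = parallel_map(lambda read: len(read) - 3 + 1,
                           ["ACGTAC", "GGGTTTA", "ACGT"])
```

Because `fn` is pure, results are independent of scheduling, and `pool.map` preserves input order.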
28

Hall, Bryan, University of Western Sydney, College of Science, Technology and Environment. "Energy and momentum conservation in Bohm's Model for quantum mechanics". THESIS_CSTE_XXX_Hall_B.xml, 2004. http://handle.uws.edu.au:8081/1959.7/717.

Annotation:
Bohm's model for quantum mechanics is examined and a well-known drawback of the model is considered, namely the fact that the model does not conserve energy and momentum. It is shown that the Lagrangian formalism and the use of energy-momentum tensors provide a way of addressing this non-conservation once the model is considered from the point of view of an interacting particle-field system. The full mathematical formulation that is then presented demonstrates that conservation can be reintroduced without disrupting the present agreement of Bohm's model with experiment.
Doctor of Philosophy (PhD)
29

Gerard, Ulysse. "Computing with relations, functions, and bindings". Thesis, Institut polytechnique de Paris, 2019. http://www.theses.fr/2019IPPAX005.

Annotation:
Cette thèse s'inscrit dans la longue tradition de l'étude des relations entre logique mathématique et calcul et plus spécifiquement de la programmation déclarative. Le document est divisé en deux contributions principales. Chacune d'entre-elles utilise des résultats récents de la théorie de la démonstration pour développer de techniques novatrices utilisant déduction logique et fonctions pour effectuer des calculs. La première contribution de ce travail consiste en la description et la formalisation d'une nouvelle technique utilisant le mécanisme de la focalisation (un moyen de guider la recherche de preuve) pour distinguer les calculs fonctionnels qui se dissimulent dans les preuves déductives. À cet effet nous formulons un calcul des séquents focalisé pour l'arithmétique de Heyting où points-fixes et égalité sont définis comme des connecteurs logiques et nous décrivons une méthode pour toujours placer les prédicats singletons dans des phases négatives de la preuve, les identifiant ainsi avec un calcul fonctionnel. Cette technique n'étend en aucune façon la logique sous-jacente: ni opérateur de choix, ni règles de réécritures ne sont nécessaires. Notre logique reste donc purement relationnelle même lorsqu'elle calcule des fonctions. La seconde contribution de cette thèse et le design d'un nouveau langage de programmation fonctionnel: MLTS. De nouveau, nous utilisons des travaux théoriques récents en logique: la sémantique de mlts est ainsi une théorie au sein de la logique G, la logique de raisonnement de l'assistant de preuve Abella. La logique G utilise un opérateur spécifique: Nabla, qui est un quantificateur sur des noms "frais" et autorise un traitement naturel des pruves manipulant des objets pouvant contenir des lieurs. Ce traitement s'appuie sur la gestion naturelle des lieurs fournie par le calcul des séquents. 
La syntaxe de MLTS est basée sur celle du langage de programmation OCaml mais fournit des constructions additionnelles permettant aux lieurs présents dans les termes de se déplacer au niveau du programme. De plus, toutes les opérations sur la syntaxe respectent l'alpha et la béta conversion. Ces deux aspects forment l'approche syntaxique des lieurs appelée lambda-tree syntax. Un prototype d'implémentation du langage est fourni, permettant à chacun d'expérimenter facilement en ligne (url{https://trymlts.github.io})
The present document pursues the decades-long study of the interactions between mathematical logic and functional computation, and more specifically of declarative programming. This thesis is divided into two main contributions. Each of them makes use of modern proof theory results to design new ways to compute with relations and with functions. The first contribution of this work is the description and formalization of a new technique that leverages the focusing mechanism (a way to guide proof search) to reveal functional computation concealed in deductive proofs. To that end we formulate a focused sequent calculus proof system for Heyting arithmetic where fixed points and term equality are logical connectives, and describe a means to always drive singleton predicates into negative phases of the proof, thus identifying them with functional computation. This method does not extend the underlying logic in any way: no choice principle, equality theory, or rewriting rules are needed. As a result, our logic remains purely relational even when it is computing functions. The second contribution of this thesis is the design of a new functional programming language: MLTS. Again, we make use of recent work in logic: the semantics of MLTS is a theory inside G-logic, the reasoning logic of the Abella interactive theorem prover. G-logic uses a specific operator, Nabla, a fresh-name quantifier which allows for a natural treatment of proofs over structures with bindings, based on the natural handling of bindings of the sequent calculus. The syntax of MLTS is based on the programming language OCaml but provides additional sites so that term-level bindings can move to programming-level bindings. Moreover, all operations on syntax respect alpha- and beta-conversion. Together these two tenets form the lambda-tree syntax approach to bindings. The resulting language was given a prototype implementation that anyone can conveniently try online (https://trymlts.github.io).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Coffin, Lorraine. „The effect of hemisphericity and field dependence on performance on a programming task /“. Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59249.

Der volle Inhalt der Quelle
Annotation:
This study investigated the effects of hemisphericity and field dependence on programming skills. Twenty-five undergraduate university students from two introductory Logo programming courses completed the study. Results suggested that hemisphericity is related to the complexity of program structure (tree depth). Supplementary analyses indicated a negative correlation between previous programming experience and the use of recursion. Implications for education and suggestions for further research are discussed, and specific implications regarding the teaching of Logo are given.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Altner, Douglas S. „Advancements on problems involving maximum flows“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24828.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Ozlem Ergun; Committee Member: Dana Randall; Committee Member: Joel Sokol; Committee Member: Shabbir Ahmed; Committee Member: William Cook.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Hall, Bryan. „Energy and momentum conservation in Bohm's Model for quantum mechanics“. View thesis, 2004. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20040507.155043/index.html.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Lin, Chungping. „The RMT (Recursive multi-threaded) tool: A computer aided software engineeering tool for monitoring and predicting software development progress“. CSUSB ScholarWorks, 1998. https://scholarworks.lib.csusb.edu/etd-project/1787.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Barry, Bobbi J. „Needed Narrowing as the Computational Strategy of Evaluable Functions in an Extension of Goedel“. PDXScholar, 1996. https://pdxscholar.library.pdx.edu/open_access_etds/4915.

Der volle Inhalt der Quelle
Annotation:
A programming language that combines the best aspects of both the functional and logic paradigms with a complete evaluation strategy has been a goal of a Portland State University project team for the last several years. I present the third in a series of modifications to the compiler of the logic programming language Goedel which reaches this goal. This enhancement of Goedel's compiler translates user-defined functions in the form of rewrite rules into code that performs evaluation of these functions by the strategy of needed narrowing. In addition, Goedel's mechanism that evaluates predicates is supplemented so that needed narrowing is still maintained as the evaluation strategy when predicates possess functional arguments.
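Needed narrowing itself only instantiates positions demanded by some rule; the sketch below (in Python, purely for illustration; Goedel's actual implementation differs) uses a much cruder breadth-first narrowing, but it shows the core idea the abstract relies on: unification against rewrite rules lets a functional definition run backwards, solving for unknown arguments.

```python
import itertools
from collections import deque

# Terms are tuples: ('var', name) is a logic variable; anything else is a
# functor followed by arguments, e.g. ('s', ('z',)) is the Peano numeral 1.
def is_var(t):
    return isinstance(t, tuple) and t and t[0] == 'var'

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

def subst(t, s):
    t = walk(t, s)
    if is_var(t):
        return t
    return (t[0],) + tuple(subst(x, s) for x in t[1:])

# Rewrite rules for Peano addition: add(z, Y) -> Y ; add(s(X), Y) -> s(add(X, Y)).
RULES = [
    (('add', ('z',), ('var', 'Y')), ('var', 'Y')),
    (('add', ('s', ('var', 'X')), ('var', 'Y')),
     ('s', ('add', ('var', 'X'), ('var', 'Y')))),
]
DEFINED = {'add'}
FRESH = itertools.count()

def rename(t, tag):
    """Freshen the rule's variables so distinct narrowing steps never clash."""
    if is_var(t):
        return ('var', (t[1], tag))
    return (t[0],) + tuple(rename(x, tag) for x in t[1:])

def positions(t):
    yield (), t
    if not is_var(t):
        for i, arg in enumerate(t[1:], 1):
            for p, sub in positions(arg):
                yield (i,) + p, sub

def replace(t, pos, new):
    if not pos:
        return new
    i = pos[0]
    return t[:i] + (replace(t[i], pos[1:], new),) + t[i + 1:]

def narrow_solve(term, target, max_states=200):
    """Breadth-first narrowing: solve term == target, returning a substitution."""
    queue = deque([(term, {})])
    while queue and max_states > 0:
        max_states -= 1
        t, s = queue.popleft()
        t = subst(t, s)                      # flatten accumulated bindings
        sol = unify(t, target, dict(s))
        if sol is not None:
            return sol
        for pos, sub in positions(t):        # narrow at any defined-symbol position
            if is_var(sub) or sub[0] not in DEFINED:
                continue
            tag = next(FRESH)
            for lhs, rhs in RULES:
                s2 = unify(sub, rename(lhs, tag), dict(s))
                if s2 is not None:
                    queue.append((subst(replace(t, pos, rename(rhs, tag)), s2), s2))
    return None

# Solve add(X, s(z)) == s(s(z)), i.e. X + 1 == 2; narrowing finds X = s(z).
sol = narrow_solve(('add', ('var', 'X'), ('s', ('z',))), ('s', ('s', ('z',))))
answer = subst(('var', 'X'), sol)
```

Here `narrow_solve` answers the equation add(X, s(z)) = s(s(z)) by binding X to s(z), i.e. it solves X + 1 = 2 using only the functional rules for addition.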
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Bradley, Jay. „Reinforcement learning for qualitative group behaviours applied to non-player computer game characters“. Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4784.

Der volle Inhalt der Quelle
Annotation:
This thesis investigates how to train the increasingly large cast of characters in modern commercial computer games. Modern computer games can contain hundreds or sometimes thousands of non-player characters that each should act coherently in complex dynamic worlds, and engage appropriately with other non-player characters and human players. Too often, it is obvious that computer-controlled characters are brainless zombies portraying the same repetitive hand-coded behaviour. Commercial computer games would seem a natural domain for reinforcement learning and, as the trend for selling games based on better graphics peaks with the saturation of game shelves with excellent graphics, it seems that better artificial intelligence is the next big thing. The main contribution of this thesis is a novel style of utility function, group utility functions, for reinforcement learning that could provide automated behaviour specification for large numbers of computer game characters. Group utility functions allow arbitrary functions of the characters' performance to represent relationships between characters and groups of characters. These qualitative relationships are learned alongside the main quantitative goal of the characters. Group utility functions can be considered a multi-agent extension of the existing programming-by-reward method, and an extension of the team utility function made more generic by replacing the sum with potentially any other function. Hierarchical group utility functions, which are group utility functions arranged in a tree structure, allow character group relationships to be learned. For illustration, the empirical work shown uses the negative standard deviation function to create balanced (or equal-performance) behaviours. This balanced behaviour can be learned between characters, groups and also between groups and single characters.
Empirical experiments show that a balancing group utility function can be used to engender an equal performance between characters, groups, and groups and single characters. It is shown that it is possible to trade some amount of quantitatively measured performance for some qualitative behaviour using group utility functions. Further experiments show how the results degrade as expected when the number of characters and groups is increased. Further experimentation shows that using function approximation to approximate the learners’ value functions is one possible way to overcome the issues of scale. All the experiments are undertaken in a commercially available computer game engine. In summary, this thesis contributes a novel type of utility function potentially suitable for training many computer game characters and, empirical work on reinforcement learning used in a modern computer game engine.
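As a concrete sketch of the balancing case described above, a group utility function can combine the quantitative total with the negative standard deviation of the members' performances. This is an illustrative simplification, not the thesis's exact reward formulation:

```python
import statistics

def group_utility(performances, balance_weight=1.0):
    """Quantitative goal (total performance) combined with a qualitative
    balancing relationship: the negative standard deviation of the
    members' performances."""
    return sum(performances) - balance_weight * statistics.pstdev(performances)

balanced = [10.0, 10.0, 10.0]
unbalanced = [30.0, 0.0, 0.0]
# Equal totals, but the balanced group scores higher once the
# negative-standard-deviation term is included.
assert group_utility(balanced) > group_utility(unbalanced)
```

Trading quantitative performance for the qualitative property then amounts to tuning `balance_weight`, mirroring the trade-off measured in the experiments.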
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Harper, Robin Thomas Ross Computer Science &amp Engineering Faculty of Engineering UNSW. „Enhancing grammatical evolution“. Awarded by:University of New South Wales. Computer Science & Engineering, 2010. http://handle.unsw.edu.au/1959.4/44843.

Der volle Inhalt der Quelle
Annotation:
Grammatical Evolution (GE) is a method of utilising a general purpose evolutionary algorithm to "evolve" programs written in an arbitrary BNF grammar. This thesis extends GE as follows. GE as an extension of Genetic Programming (GP): A novel method of automatically extracting information from the grammar is introduced. This additional information allows the use of GP-style crossover, which in turn allows GE to perform identically to a strongly typed GP system as well as a non-typed (or canonical) GP system. Two test problems are presented: one which is more easily solved by the GP-style crossover and one which favours the traditional GE "ripple crossover". With this new crossover operator GE can now emulate GP (as well as retaining its own unique features) and can therefore be seen as an extension of GP. Dynamically Defined Functions: An extension to the BNF grammar is presented which allows the use of dynamically defined functions (DDFs). DDFs provide an alternative to the traditional approach of Automatically Defined Functions (ADFs) but have the advantage that the number of functions and their parameters do not need to be specified by the user in advance. In addition, DDFs allow the architecture of individuals to change dynamically throughout the course of the run without requiring the introduction of any new form of operator. Experimental results are presented confirming the effectiveness of DDFs. Self-Selecting (or variable) crossover: A self-selecting operator is introduced which allows the system to determine, during the course of the run, which crossover operator to apply; this is tested over several problem domains and (especially where small populations are used) is shown to be effective in aiding the system to overcome local optima.
Spatial Co-Evolution in Age Layered Planes (SCALP): A method of combining Hornby's ALPS metaheuristic and a spatial co-evolution system used by Mitchell is presented; the new SCALP system is tested over three problem domains of increasing difficulty and performs extremely well in each of them.
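For readers unfamiliar with GE, the genotype-to-phenotype mapping the thesis builds on can be sketched in a few lines of Python (a toy grammar, not one of the thesis's test problems): each codon, taken modulo the number of alternatives for the leftmost non-terminal, selects the production to expand.

```python
# A toy BNF grammar: a list of alternative expansions per non-terminal.
GRAMMAR = {
    '<expr>': [['<expr>', '<op>', '<expr>'], ['x'], ['1']],
    '<op>': [['+'], ['*']],
}

def ge_map(genome, start='<expr>', max_wraps=2):
    """Standard GE mapping: each codon, modulo the number of productions
    for the leftmost non-terminal, picks the expansion to apply."""
    out, stack, i, wraps = [], [start], 0, 0
    while stack:
        sym = stack.pop(0)                 # leftmost derivation
        if sym not in GRAMMAR:
            out.append(sym)                # terminal symbol
            continue
        if i >= len(genome):               # "wrap": reread the genome
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                raise ValueError('individual failed to map')
        productions = GRAMMAR[sym]
        expansion = productions[genome[i] % len(productions)]
        i += 1
        stack = expansion + stack
    return ''.join(out)

# Codons 0,1,0,2 derive <expr> -> <expr><op><expr> -> x + 1.
phenotype = ge_map([0, 1, 0, 2])
```

The "ripple" effect mentioned above comes from this mapping: changing one early codon reinterprets every codon after it, which is what the GP-style crossover avoids.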
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

RODRIGUES, JUNIOR ORLANDO. „Aplicacao de modelos metabolicos para a determinacao de funcoes de excrecao e retencao“. reponame:Repositório Institucional do IPEN, 1994. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10344.

Der volle Inhalt der Quelle
Annotation:
Dissertacao (Mestrado)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Akteke-ozturk, Basak. „New Approaches To Desirability Functions By Nonsmooth And Nonlinear Optimization“. Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612649/index.pdf.

Der volle Inhalt der Quelle
Annotation:
Desirability Functions continue to attract attention of scientists and researchers working in the area of multi-response optimization. There are many versions of such functions, differing mainly in formulations of individual and overall desirability functions. Derringer and Suich's desirability functions, used throughout this thesis, are still the most preferred ones in practice, and many other versions are derived from them. On the other hand, they have the drawback of containing nondifferentiable points and, hence, being nonsmooth. Current approaches to their optimization, which are based on derivative-free search techniques and modification of the functions by higher-degree polynomials, need to be diversified considering opportunities offered by modern nonlinear (global) optimization techniques and related software. A first motivation of this work is to develop a new efficient solution strategy, based on nonsmooth optimization methods, for the maximization of overall desirability functions, which turns out to be a nonsmooth composite constrained optimization problem. We observe that individual desirability functions used in practical computations are of min-type, a subclass of continuous selection functions. To reveal the mechanism that gives rise to a variation in the piecewise structure of desirability functions used in practice, we concentrate on component-wise and generically piecewise min-type functions and, later on, max-type functions. It is our second motivation to analyze the structural and topological properties of desirability functions via piecewise max-type functions. In this thesis, we introduce adjusted desirability functions based on a reformulation of the individual desirability functions by a binary integer variable in order to deal with their piecewise definition. We define a constraint on the binary variable to obtain a continuous optimization problem of a nonlinear objective function including nondifferentiable points with the constraints of bounds for factors and responses.
After describing the adjusted desirability functions on two well-known problems from the literature, we implement the modified subgradient algorithm (MSG) in GAMS, coupled with the CONOPT solver, for solving the corresponding optimization problems. Moreover, the BARON solver of GAMS is used to solve these optimization problems including adjusted desirability functions. Numerical applications with BARON show that this is a more efficient alternative solution strategy than the current desirability maximization approaches. We apply the negative logarithm to the desirability functions and consider the properties of the resulting functions when they include more than one nondifferentiable point. With this approach we reveal the structure of the functions and employ the piecewise max-type functions as generalized desirability functions (GDFs). We introduce a suitable finite partitioning procedure of the individual functions over their compact and connected interval that yields our so-called GDFs. Hence, we construct GDFs with piecewise max-type functions which have efficient structural and topological properties. We present the structural stability, optimality and constraint qualification properties of GDFs using those of max-type functions. As a by-product of our GDF study, we develop a new method, called the two-stage (bilevel) approach, for multi-objective optimization problems, based on a separation of the parameters: in y-space (optimization) and in x-space (representation). This approach calculates the factor variables corresponding to the ideal solutions of each individual function in y, and then finds a set of compromised solutions in x by considering the convex hull of the ideal factors. This is an early attempt at a new multi-objective optimization method. Our first results show that the global optimum of the overall problem may not be an element of the set of compromised solutions.
The overall problem in both x and y is extended to a new refined (disjunctive) generalized semi-infinite problem, for which we analyze the stability and robustness properties of the objective function. In this course, we introduce the so-called robust optimization of desirability functions for the cases when response models contain uncertainty. Throughout this thesis, we give several modifications and extensions of the optimization problem of overall desirability functions.
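A minimal sketch of the objects under discussion, assuming the standard one-sided Derringer–Suich form (the thesis treats more general piecewise min/max-type variants); the junction points of the ramp are exactly the nondifferentiable points that motivate the nonsmooth treatment:

```python
import math

def d_larger_the_better(y, low, high, weight=1.0):
    """Derringer-Suich 'larger the better' individual desirability:
    0 below `low`, 1 above `high`, a power ramp in between. The
    junctions at `low` and `high` are nondifferentiable points."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

def overall_desirability(ds):
    """Overall desirability: the geometric mean of the individual ones."""
    return math.prod(ds) ** (1.0 / len(ds))
```

A response halfway up its ramp gets desirability 0.5, and one fully desirable response cannot rescue another at 0, since the geometric mean vanishes with any zero factor.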
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Lee, Robert. „Teaching Algebra through Functional Programming:An Analysis of the Bootstrap Curriculum“. BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3519.

Der volle Inhalt der Quelle
Annotation:
Bootstrap is a computer-programming curriculum that teaches students to program video games using Racket, a functional programming language based on algebraic syntax. This study investigated the relationship between learning to program video games in a Bootstrap course and the resulting effect on students' understanding of algebra. Courses in three different schools, lasting about six weeks each, were studied. Control and treatment groups were given a pre and post algebra assessment. A qualitative component consisting of observations and interviews was also used to further triangulate findings. Statistical analysis revealed that students who completed the Bootstrap course gained a significantly better understanding of variables and a suggestive improvement in understanding functions. In the assessments, students failed to demonstrate a transfer of the advanced concepts of function composition and piecewise functions from programming to algebraic notation. Interviews with students demonstrated that, with coaching, students were able to relate functions written in Racket to functions written in algebraic notation, but were not yet able to transfer their experience of function composition from programming to algebra.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Durier, Adrien. „Unique solution techniques for processes and functions“. Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN016.

Der volle Inhalt der Quelle
Annotation:
La méthode de preuve par bisimulation est un pilier de la théorie de la concurrence et des langages de programmation. Cette technique permet d'établir que deux programmes, ou deux protocoles distribués, sont égaux, au sens où l'on peut substituer l'un par l'autre sans affecter le comportement global du système. Les preuves par bisimulation sont souvent difficiles et techniquement complexes. De ce fait, diverses techniques ont été proposées pour faciliter de telles preuves. Dans cette thèse, nous étudions une telle technique de preuve pour la bisimulation, fondée sur l'unicité des solutions d'équations. Pour démontrer que deux programmes sont égaux, on prouve qu'ils sont solutions de la même équation, à condition que l'équation satisfasse la propriété d'unicité des solutions : deux solutions de l'équation sont nécessairement égales. Nous utilisons cette technique pour répondre à une question ouverte, à savoir le problème de full abstraction pour l'encodage, dû à Milner, du λ-calcul en appel par valeur dans le π-calcul.
The bisimulation proof method is a landmark of the theory of concurrency and programming languages: it is a proof technique used to establish that two programs, or two distributed protocols, are equal, meaning that they can be freely substituted for one another without modifying the global observable behaviour. Such proofs are often difficult and tedious; hence, many proof techniques have been proposed to enhance this method, simplifying said proofs. We study such a technique based on 'unique solution of equations'. In order to prove that two programs are equal, we show that they are solutions of the same recursive equation, as long as the equation has the 'unique solution property': two of its solutions are always equal. We propose a guarantee to ensure that such equations do have a unique solution. We test this technique against a long-standing open problem: the problem of full abstraction for Milner's encoding of the call-by-value λ-calculus in the π-calculus.
La bisimulazione è una tecnica di prova fondamentale in teoria della concorrenza e dei linguaggi di programmazione. Questa tecnica viene usata per dimostrare che due programmi, o due protocolli distribuiti, sono uguali, nel senso che l'uno può sostituire l'altro senza modificare il comportamento globale del sistema. Le prove di bisimulazione sono spesso difficili e tecnicamente pesanti. Per questa ragione, varie tecniche di prova sono state introdotte per facilitare le prove di bisimulazione. In questo documento viene studiata tale tecnica, che sfrutta l'unicità delle soluzioni di equazioni. Per dimostrare che due programmi sono uguali, si stabilisce che sono soluzioni della stessa equazione ricorsiva, dal momento in cui l'equazione soddisfa una proprietà di "unicità delle soluzioni": ogni due soluzioni di questa equazione sono uguali. Questa tecnica viene usata per rispondere alla questione della full abstraction per l'encodaggio del λ-calcolo in call-by-value nel π-calcolo, proposto inizialmente da R. Milner.
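Schematically, the technique rests on an inference of the following shape, where E is a process equation (or system of equations) and ~ is bisimilarity (notation is schematic, not the thesis's exact formulation):

```latex
\frac{P \sim E[P] \qquad Q \sim E[Q] \qquad E \text{ has a unique solution up to } \sim}
     {P \sim Q}
```

The guarantee proposed in the thesis is, roughly, a condition on the unfoldings of E ensuring the uniqueness premise, so that exhibiting both programs as solutions of one equation replaces the construction of an explicit bisimulation relation.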
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Cheon, Myun-Seok. „Global Optimization of Monotonic Programs: Applications in Polynomial and Stochastic Programming“. Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-04152005-130317/unrestricted/Cheon%5FMyunSeok%5F200505%5Fphd.pdf.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.)--Industrial & Systems Engineering, Georgia Institute of Technology, 2005.
Barnes, Earl, Committee Member ; Shapiro, Alex, Committee Member ; Realff, Matthew, Committee Member ; Al-Khayyal, Faiz, Committee Chair ; Ahmed, Shabbir, Committee Co-Chair. Includes bibliographical references.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Widera, Paweł. „Automated design of energy functions for protein structure prediction by means of genetic programming and improved structure similarity assessment“. Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11394/.

Der volle Inhalt der Quelle
Annotation:
The process of protein structure prediction is a crucial part of understanding the function of the building blocks of life. It is based on the approximation of a protein free energy that is used to guide the search through the space of protein structures towards the thermodynamic equilibrium of the native state. A function that gives a good approximation of the protein free energy should be able to estimate the structural distance of the evaluated candidate structure to the protein native state. This correlation between the energy and the similarity to the native is the key to high-quality predictions. State-of-the-art protein structure prediction methods use very simple techniques to design such energy functions. The individual components of the energy functions are created by human experts with the use of statistical analysis of common structural patterns that occur in the known native structures. The energy function itself is then defined as a simple weighted sum of these components. Exact values of the weights are set in the process of maximisation of the correlation between the energy and the similarity to the native, measured by a root mean square deviation between coordinates of the protein backbone. In this dissertation I argue that this process is oversimplified and could be improved on at least two levels. Firstly, a more complex functional combination of the energy components might be able to reflect the similarity more accurately and thus improve the prediction quality. Secondly, a more robust similarity measure that combines different notions of protein structural similarity might provide a much more realistic baseline for the energy function optimisation. To test these two hypotheses, I have proposed a novel approach to the design of energy functions for protein structure prediction, using a genetic programming algorithm to evolve the energy functions and a structural similarity consensus to provide a reference similarity measure.
The best evolved energy functions were found to reflect the similarity to the native better than the optimised weighted sum of terms, thereby opening a new and interesting area of research for machine learning techniques.
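The fitness evaluation described above can be sketched as follows (names are hypothetical; the thesis evolves tree-structured energy functions over structural features and uses a consensus of several similarity measures, not a single callable):

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def energy_fitness(energy_fn, decoys, similarity_consensus):
    """Fitness of a candidate energy function: how strongly its energies
    anti-correlate with the consensus similarity-to-native of the decoys
    (lower energy should mean more native-like)."""
    energies = [energy_fn(d) for d in decoys]
    similarities = [similarity_consensus(d) for d in decoys]
    return -pearson(energies, similarities)

# Synthetic decoys where lower energy exactly tracks higher similarity:
decoys = [0.0, 1.0, 2.0, 3.0]
fit = energy_fitness(lambda d: 1.0 - 2.0 * d, decoys, lambda d: d)
```

A genetic programming run would then select, cross over, and mutate candidate `energy_fn` trees to maximise this score, with the consensus measure replacing plain backbone RMSD as the baseline.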
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Potaptchik, Marina. „Portfolio Selection Under Nonsmooth Convex Transaction Costs“. Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2940.

Der volle Inhalt der Quelle
Annotation:
We consider a portfolio selection problem in the presence of transaction costs. Transaction costs on each asset are assumed to be a convex function of the amount sold or bought. This function can be nondifferentiable in a finite number of points. The objective function of this problem is a sum of a convex twice differentiable function and a separable convex nondifferentiable function. We first consider the problem in the presence of linear constraints and later generalize the results to the case when the constraints are given by the convex piece-wise linear functions.

Due to the special structure, this problem can be replaced by an equivalent differentiable problem in a higher dimension. Its main drawback is efficiency, since the higher-dimensional problem is computationally expensive to solve.

We propose several alternative ways to solve this problem which do not require introducing new variables or constraints. We derive the optimality conditions for this problem using subdifferentials. First, we generalize an active set method to this class of problems. We solve the problem by considering a sequence of equality constrained subproblems, each subproblem having a twice differentiable objective function. Information gathered at each step is used to construct the subproblem for the next step. We also show how the nonsmoothness can be handled efficiently by using spline approximations. The problem is then solved using a primal-dual interior-point method.

If a higher accuracy is needed, we do a crossover to an active set method. Our numerical tests show that we can solve large scale problems efficiently and accurately.
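A toy illustration of handling the nonsmoothness via subdifferentials, on a one-dimensional stand-in for the portfolio objective (the thesis itself uses active-set and primal-dual interior-point methods with spline approximations, not plain subgradient descent):

```python
def f(x, c=0.5):
    """Quadratic 'risk' term plus a convex piecewise-linear transaction
    cost c*|x|, nondifferentiable at x = 0."""
    return x * x + c * abs(x)

def subgradient_step(x, lr=0.05, c=0.5):
    """One subgradient step. At the kink x = 0 any element of [-c, c] is
    a valid subgradient of c*|x|; we pick 0."""
    g = 2 * x + c * (1 if x > 0 else -1 if x < 0 else 0)
    return x - lr * g

x = 2.0
for _ in range(200):
    x = subgradient_step(x)
# The minimizer of f is x = 0; with a constant step the iterates settle
# in a small band around it rather than converging exactly.
```

The equivalent smooth lift mentioned above would instead split the traded amount into buy and sell parts, `x = b - s` with `b, s >= 0` and cost `c*(b + s)`, trading the kink for extra variables.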
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Reinke, Claus [Verfasser]. „Functions, Frames, and Interactions-completing a lambda-calculus-based purely functional language with respect to programming-in-the-large and interactions with runtime environments / Claus Reinke“. Kiel : Universitätsbibliothek Kiel, 1998. http://d-nb.info/1080332626/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Basnayaka, Punya A. „Low-Level Programming of a 9-Degree-of-Freedom Wheelchair-Mounted Robotic Arm with the Application of Different User Interfaces“. Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1570.

Der volle Inhalt der Quelle
Annotation:
Implementation of C++ and Matlab based control of University of South Florida's latest wheelchair-mounted robotic arm (USF WMRA-II), which has 9 degrees-of-freedom, was carried out under this Master's thesis research. First, the rotational displacements about the 7 joints of the robotic arm were calibrated. It was followed by setting the control gains of the motors. Then existing high-level programs developed using C++ and Matlab for USF WMRA-I were modified for WMRA-II. The required low-level programs to provide complete kinematics of the joint movements to the controller board of WMRA-II (Galil DMC-2183) were developed using C++. A test GUI was developed using C++ to troubleshoot the control program and to evaluate the operation of the robotic arm. It was found that WMRA-II has higher repeatability, accuracy and manipulability as well as lower power consumption than WMRA-I. Touch-Screen and Spaceball user interfaces were successfully implemented to facilitate people with different disabilities.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Schreiber, Alex Joachim Ernst [Verfasser], Oliver [Akademischer Betreuer] Junge und Matthias [Akademischer Betreuer] Gerdts. „Dynamic programming with radial basis functions and Shepard's method / Alex Joachim Ernst Schreiber. Betreuer: Oliver Junge. Gutachter: Matthias Gerdts ; Oliver Junge“. München : Universitätsbibliothek der TU München, 2016. http://d-nb.info/1096459124/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Vanden, Berghen Frank. „Constrained, non-linear, derivative-free, parallel optimization of continuous, high computing load, noisy objective functions“. Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211177.

Der volle Inhalt der Quelle
Annotation:
The main result is a new original algorithm: CONDOR ("COnstrained, Non-linear, Direct, parallel Optimization using trust Region method for high-computing load, noisy functions"). The aim of this algorithm is to find the minimum x* of an objective function F(x) (x is a vector whose dimension is between 1 and 150) using the least number of function evaluations of F(x). It is assumed that the dominant computing cost of the optimization process is the time needed to evaluate the objective function F(x) (one evaluation can range from 2 minutes to 2 days). The algorithm will try to minimize the number of evaluations of F(x), at the cost of a huge amount of routine work. CONDOR is a derivative-free optimization tool (i.e. the derivatives of F(x) are not required). The only information needed about the objective function is a simple method (written in Fortran, C++, ...) or a program (a Unix, Windows, or Solaris executable) which can evaluate the objective function F(x) at a given point x. The algorithm has been specially developed to be very robust against noise inside the evaluation of the objective function F(x). These hypotheses are very general, so the algorithm can be applied to a vast number of situations. CONDOR is able to use several CPUs in a cluster of computers. Different computer architectures can be mixed together and used simultaneously to deliver a huge computing power. The optimizer will make simultaneous evaluations of the objective function F(x) on the available CPUs to speed up the optimization process. The experimental results are very encouraging and validate the quality of the approach: CONDOR outperforms many commercial, high-end optimizers and it might be the fastest optimizer in its category (fastest in terms of number of function evaluations). When several CPUs are used, the performance of CONDOR is currently unmatched (May 2004).
CONDOR has been used during the METHOD project to optimize the shape of the blades inside a centrifugal compressor (METHOD stands for Achievement Of Maximum Efficiency For Process Centrifugal Compressors THrough New Techniques Of Design). In this project, the objective function is based on a 3D CFD (computational fluid dynamics) code which simulates the flow of the gas inside the compressor.
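CONDOR's trust-region machinery is well beyond a short sketch, but the black-box setting it operates in, where only function values are available and no derivatives, can be illustrated with a naive compass search (this is emphatically not CONDOR's algorithm, only the problem interface):

```python
def compass_search(f, x, step=1.0, tol=1e-3, max_evals=500):
    """Naive derivative-free compass search: probe +/- step along each
    coordinate, accept any improvement, halve the step when none is
    found. Only evaluations of f are used, never derivatives."""
    x = list(x)
    fx = f(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                evals += 1
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5
    return x, fx

best, val = compass_search(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                           [0.0, 0.0])
```

Trust-region methods such as CONDOR spend far fewer evaluations by fitting a local quadratic model to the sampled values, which matters when a single evaluation takes minutes to days.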
Doctorat en sciences appliquées
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Cox, Bruce. „Applications of accuracy certificates for problems with convex structure“. Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39489.

Der volle Inhalt der Quelle
Annotation:
This dissertation addresses the efficient generation and potential applications of accuracy certificates in the framework of "black-box-represented" convex optimization problems: convex problems where the objective and the constraints are represented by "black boxes" which, given on input a value x of the argument, somehow (perhaps in a fashion unknown to the user) provide on output the values and the derivatives of the objective and the constraints at x. The main body of the dissertation can be split into three parts. In the first part, we provide our background: the state of the art of the theory of accuracy certificates for black-box-represented convex optimization. In the second part, we extend the toolbox of black-box-oriented convex optimization algorithms with accuracy certificates by equipping with these certificates a state-of-the-art algorithm for large-scale nonsmooth black-box-represented problems with convex structure, specifically, the Non-Euclidean Restricted Memory Level (NERML) method. In the third part, we present several novel academic applications of accuracy certificates. The dissertation is organized as follows: In Chapter 1, we motivate our research goals and present a detailed summary of our results. In Chapter 2, we outline the relevant background, specifically, describe four generic black-box-represented problems with convex structure (Convex Minimization, Convex-Concave Saddle Point, Convex Nash Equilibrium, and Variational Inequality with Monotone Operator), and outline the existing theory of accuracy certificates for these problems. In Chapter 3, we develop techniques for equipping with on-line accuracy certificates the state-of-the-art NERML algorithm for large-scale nonsmooth problems with convex structure, both in the case when the domain of the problem is a simple solid and in the case when the domain is given by a Separation oracle.
In Chapter 4, we develop several novel academic applications of accuracy certificates, primarily (a) efficiently certifying emptiness of the intersection of finitely many solids given by Separation oracles, and (b) building efficient algorithms for convex minimization over solids given by Linear Optimization oracles (both precise and approximate). In Chapter 5, we apply accuracy certificates to the efficient decomposition of "well-structured" convex-concave saddle point problems, with applications to the computationally attractive decomposition of a large-scale LP whose constraint matrix becomes block-diagonal after eliminating a relatively small number of possibly dense columns (corresponding to "linking variables") and possibly dense rows (corresponding to "linking constraints").
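The notion of an accuracy certificate summarized above can be illustrated in the simplest setting, convex minimization over an interval with a first-order oracle: the subgradient inequality f(x) ≥ f(x_i) + g_i·(x − x_i) turns the recorded oracle answers themselves into a verifiable lower bound on the optimum. The sketch below is a generic illustration under that idea, not code from the dissertation (the test function, step sizes, and uniform weights are arbitrary choices; NERML itself is far more sophisticated):

```python
import math

def certified_subgradient(f, subgrad, lo, hi, n_iters=200):
    """Projected subgradient descent on the interval [lo, hi] that also
    builds an accuracy certificate from the recorded oracle answers:
    since f(x) >= f(x_i) + g_i * (x - x_i) for every query x_i, the
    minimum over [lo, hi] of the averaged linearizations is a
    verifiable lower bound on the true optimum."""
    x = lo
    best_f = float("inf")
    sum_f = sum_g = sum_gx = 0.0
    for t in range(1, n_iters + 1):
        fx, gx = f(x), subgrad(x)
        best_f = min(best_f, fx)
        # accumulate the uniform-weight certificate terms
        sum_f += fx
        sum_g += gx
        sum_gx += gx * x
        # standard diminishing step, then project back onto [lo, hi]
        x = min(hi, max(lo, x - gx / math.sqrt(t)))
    n = n_iters
    avg_g = sum_g / n
    # lower bound: minimize the averaged linear model over [lo, hi]
    lower = (sum_f - sum_gx) / n + min(avg_g * lo, avg_g * hi)
    return best_f, lower, best_f - lower
```

The returned gap best_f − lower upper-bounds the true optimality gap of the best iterate, and anyone holding only the logged oracle answers can recheck it, which is exactly what makes it a certificate.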
APA, Harvard, Vancouver, ISO, and other citation styles
49

Farka, Jan. „Možnosti řídicího systému Heidenhain při programování CNC obráběcích strojů“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-444272.

The full text of the source
Annotation:
Modern machine tools are equipped with control systems that act as the brain of the whole machine. These control systems are developed, maintained, and continuously improved by many companies, so there is clearly strong competition in this field. The aim of this work is primarily to describe the capabilities of the control system from Heidenhain, one of the major developers in this area, and to thoroughly compare it with systems from other manufacturers. Last but not least, the work presents other Heidenhain products and their possible use in practice.
APA, Harvard, Vancouver, ISO, and other citation styles
50

Luizelli, Marcelo Caggiani. „Scalable cost-efficient placement and chaining of virtual network functions“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/169337.

The full text of the source
Annotation:
Network Function Virtualization (NFV) is a new architectural concept that is reshaping the operation of network functions (e.g., firewalls, gateways, and proxies). The core idea of NFV is to decouple the logic of network functions from specialized hardware devices, thereby allowing software images to run on commercial off-the-shelf (COTS) hardware. NFV has the potential to make the operation of network functions more flexible and economical, which is paramount in environments where the number of deployed functions can easily reach the order of hundreds. Despite intense research activity in the area, the problem of placing and chaining virtual network functions (VNFs) in a scalable and low-cost manner still presents a series of limitations. More specifically, the existing strategies in the literature neglect the chaining aspect of VNFs (i.e., they mainly target placement), do not scale to the size of NFV infrastructures (i.e., thousands of compute-capable nodes), and base the quality of the obtained solutions on unrepresentative operational costs. In this thesis, the placement and chaining of virtualized network functions (VNFPC, Virtual Network Function Placement and Chaining) is approached as an optimization problem in the intra- and inter-datacenter context. First, the VNFPC problem is formalized and an Integer Linear Programming (ILP) model is proposed to solve it. The objective is to minimize resource allocation while meeting network flow requirements and constraints. Second, the scalability of the VNFPC problem is addressed in order to solve large instances (i.e., thousands of NFV nodes).
A fix-and-optimize-based heuristic algorithm is proposed that incorporates the Variable Neighborhood Search (VNS) meta-heuristic to efficiently explore the solution space of the VNFPC problem. Third, the performance limitations and operational costs of typical provisioning strategies in real NFV environments are evaluated. Based on the collected empirical results, an analytical model is proposed that estimates with high accuracy the operational costs for arbitrary VNF requirements. Fourth, a mechanism is developed for deploying VNF chains in the intra-datacenter context. The proposed algorithm (OCM, Operational Cost Minimization) is based on an extension of the well-known reduction from the weighted perfect matching problem to the min-cost flow problem, and considers the performance of the VNFs (e.g., CPU requirements) as well as the estimated operational costs. The results show that the proposed ILP model for the VNFPC problem reduces end-to-end delays by up to 25% (compared to the chainings observed in traditional infrastructures) with an acceptable resource over-provisioning, limited to 4%. Furthermore, the results show that the proposed fix-and-optimize-based heuristic is able to find feasible, high-quality solutions efficiently, even in scenarios with thousands of VNFs. In addition, a better understanding is provided of network performance metrics (e.g., throughput, CPU consumption, and packet processing capacity) for the typical VNF deployment strategies adopted in NFV infrastructures. Finally, the proposed intra-datacenter algorithm (i.e., OCM) significantly reduces operational costs when compared to the typical placement mechanisms used in cloud systems.
Network Function Virtualization (NFV) is a novel concept that is reshaping the middlebox arena, shifting network functions (e.g., firewalls, gateways, proxies) from specialized hardware appliances to software images running on commodity hardware. This concept has the potential to make network function provision and operation more flexible and cost-effective, paramount in a world where deployed middleboxes may easily reach the order of hundreds. Despite recent research activity in the field, little has been done towards scalable and cost-efficient placement and chaining of virtual network functions (VNFs), a key feature for the effective success of NFV. More specifically, existing strategies have neglected the chaining aspect of NFV (focusing on efficient placement only), failed to scale to hundreds of network functions, and relied on unrealistic operational costs. In this thesis, we approach VNF placement and chaining as an optimization problem in the inter- and intra-datacenter context. First, we formalize the Virtual Network Function Placement and Chaining (VNFPC) problem and propose an Integer Linear Programming (ILP) model to solve it. The goal is to minimize the required resource allocation while meeting network flow requirements and constraints. Then, we address the scalability of the VNFPC problem to large instances (i.e., thousands of NFV nodes) by proposing a fix-and-optimize-based heuristic algorithm for tackling it. Our algorithm incorporates a Variable Neighborhood Search (VNS) meta-heuristic for efficiently exploring the placement and chaining solution space. Further, we assess the performance limitations of typical NFV-based deployments and the operational costs incurred on commodity servers, and propose an analytical model that accurately predicts the operational costs for arbitrary service chain requirements.
Then, we develop a general intra-datacenter service chain deployment mechanism (named OCM, Operational Cost Minimization) that considers both the actual performance of the service chains (e.g., CPU requirements) and the incurred operational cost. Our algorithm is based on an extension of the well-known reduction from weighted matching to the min-cost flow problem. Finally, we tackle the problem of monitoring service chains in NFV-based environments. To that end, we introduce the DNM (Distributed Network Monitoring) problem and propose an optimization model to solve it. DNM allows service chain segments to be monitored independently, so that specialized network monitoring requirements can be met in an efficient and coordinated way. Results show that the proposed ILP model for the VNFPC problem leads to a reduction of up to 25% in end-to-end delays (in comparison to chainings observed in traditional infrastructures) and an acceptable resource over-provisioning limited to 4%. Also, we provide strong evidence that our fix-and-optimize-based heuristic is able to find feasible, high-quality solutions efficiently, even in scenarios scaling to thousands of VNFs. Further, we provide in-depth insights on network performance metrics (such as throughput, CPU utilization, and packet processing capacity) and their current limitations under typical deployment strategies. Our OCM algorithm significantly reduces operational costs when compared to the de facto standard placement mechanisms used in cloud systems. Last, our DNM model allows finer-grained network monitoring with limited overhead. By coordinating the placement of monitoring sinks and the forwarding of network monitoring traffic, DNM can reduce the number of monitoring sinks and the network resource consumption (54% lower than with a traditional method).
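At its core, the VNFPC problem described above couples an assignment decision (which server hosts which VNF) with a chaining cost (traffic between consecutive functions in the chain). The toy sketch below makes that structure concrete on a hypothetical instance; it is a brute-force enumeration for illustration only, not the thesis's ILP model or fix-and-optimize heuristic, and the instance data (CPU demands, capacities, delays) are invented:

```python
from itertools import product

def place_chain(chain_cpu, server_cap, link_delay):
    """Enumerate all placements of a VNF chain onto servers, keep those
    respecting CPU capacities, and return the one that minimizes first
    the number of servers used and then the end-to-end chaining delay."""
    n = len(server_cap)
    best = None
    for placement in product(range(n), repeat=len(chain_cpu)):
        # aggregate CPU load per server and check capacities
        load = [0] * n
        for cpu, s in zip(chain_cpu, placement):
            load[s] += cpu
        if any(l > c for l, c in zip(load, server_cap)):
            continue  # violates a server's CPU capacity
        # delay accumulated along consecutive hops of the chain
        delay = sum(link_delay[placement[i]][placement[i + 1]]
                    for i in range(len(placement) - 1))
        cost = (len(set(placement)), delay)  # lexicographic objective
        if best is None or cost < best[0]:
            best = (cost, placement)
    return best
```

Enumeration is exponential in the chain length, which is precisely why the thesis resorts to an ILP formulation and a VNS-based fix-and-optimize heuristic for instances with thousands of nodes.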
APA, Harvard, Vancouver, ISO, and other citation styles
