Theses on the topic "Batches of task graphs"

Consult the top 15 theses for your research on the topic "Batches of task graphs".

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Toch, Lamiel. "Contributions aux techniques d’ordonnancement sur plates-formes parallèles ou distribuées". Electronic Thesis or Diss., Besançon, 2012. http://www.theses.fr/2012BESA2045.

Abstract
The work presented in this document tackles the scheduling of parallel applications on either parallel (cluster) or distributed (computing grid) platforms. In our research we concentrated on the scheduling, for computing grids, of applications modeled by a DAG (directed acyclic graph), and on the scheduling, for clusters and multiprocessor machines, of parallel programs (parallel jobs) represented as rectangular shapes whose two dimensions are the number of requested processors and the execution time. The research follows three main topics. The first concerns the scheduling of a set of instances of an application on a computing grid. The second deals with the scheduling of parallel jobs in clusters. The third tackles the scheduling of a batch of parallel jobs on multiprocessor machines. This thesis contributes to all three topics. The first contribution, under the first topic, is an advanced experimental study of three algorithms for scheduling a set of instances of an application on a heterogeneous platform without communication costs: a list-based algorithm, a steady-state algorithm, and a genetic algorithm. We also integrate communications into this genetic algorithm. The second contribution, under the second topic, is the design of a new technique for scheduling parallel jobs in clusters: job folding, which uses processor virtualization. The third contribution deals with a new technique, inspired by statistics and signal processing, applied to the scheduling of parallel jobs on a multiprocessor machine. Finally, we describe some work that was carried out but did not yield significant results for scheduling.
2

Dechu, Satish. "Task graphs mapping on to network processors using simulated annealing /". Available to subscribers only, 2007. http://proquest.umi.com/pqdweb?did=1453188941&sid=15&Fmt=2&clientId=1509&RQT=309&VName=PQD.

3

Negelspach, Greg L. "Grain size management in repetitive task graphs for multiprocessor computer scheduling". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA288575.

4

Chieregato, Federico. "Modelling task execution time in Directed Acyclic Graphs for efficient distributed management". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Abstract
This thesis presents a framework that predicts the execution time of tasks in Directed Acyclic Graphs (DAGs), where each task is the smallest unit of work that executes a function over a set of inputs and represents a vertex in the DAG. The thesis includes an implementation for extracting profiling information from Apache Spark, as well as an evaluation of the framework on the Spark decision-support benchmark TPC-DS and on an in-house, completely different DAG runtime system running real-world DAGs from computational quantum chemistry applications. Speeding up execution in Spark or other workflows is an important problem for many real-time applications; since it is impractical to build a predictive model that considers the actual values of task inputs, we explored the use of surrogate features such as the number of parents of a task and the mean duration of its parents. For this reason, the solution is named PRODIGIOUS: Performance modelling of DAGs via surrogate features. Since task duration is a float value, different regression algorithms were studied, tuning their hyperparameters through GridSearchCV. The main objective of PRODIGIOUS is to understand not only whether surrogates instead of actual inputs suffice to predict the execution time of tasks of the same DAG type, but also whether it is possible to predict the execution time of tasks of different DAG types, creating a DAG-agnostic framework that could help scientists and computer engineers make their workflows more efficient. Other agnostic features chosen were the cores per task, the RAM of the benchmark, the data access type, and the number of executors.
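The surrogate-feature idea described above can be sketched in a few lines. This is a minimal illustration only, not the thesis's actual pipeline: the synthetic data, feature values, and hyperparameter grid are all made up for the example, and any regressor could stand in for the one chosen here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
n = 200
# Surrogate features: number of parent tasks and mean parent duration.
num_parents = rng.integers(0, 8, size=n)
mean_parent_dur = rng.uniform(0.1, 10.0, size=n)
X = np.column_stack([num_parents, mean_parent_dur])
# Synthetic target: a task duration loosely correlated with the surrogates.
y = 0.5 * num_parents + 0.8 * mean_parent_dur + rng.normal(0, 0.3, size=n)

# Hyperparameter tuning via GridSearchCV, as mentioned in the abstract.
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [4, 8]},
    cv=3,
    scoring="neg_mean_absolute_error",
)
search.fit(X, y)
# Predict duration for a task with 4 parents averaging 5.0s each.
pred = search.best_estimator_.predict([[4, 5.0]])
```

The point of the sketch is that only DAG-structural features enter the model, so the same predictor can in principle be applied across DAG types without inspecting task inputs.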
5

Witt, Carl Philipp. "Predictive Resource Management for Scientific Workflows". Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21608.

Abstract
Scientific experiments produce data at unprecedented volumes and resolutions. Extracting insights from large sets of raw data requires complex analysis workflows. Scientific workflows enable such data analyses at scale. To achieve scalability, most workflow management systems are designed as an additional layer on top of distributed resource managers, such as batch schedulers or distributed data processing frameworks. However, like distributed resource managers, they do not automatically determine the amount of resources required for executing individual tasks in a workflow. The status quo is that workflow management systems delegate the challenge of estimating resource usage to the user. This limits the performance and ease of use of scientific workflow management systems, as users often lack the time, expertise, or incentives to estimate resource usage accurately. This thesis investigates how to learn and predict resource usage during workflow execution. In contrast to prior work, it takes an integrated perspective on prediction and scheduling, which introduces challenges such as quantifying the effects of prediction errors on system performance. The main contributions are: 1. A survey of peak memory usage prediction in batch processing environments, providing an overview of prior machine learning approaches, commonly used features, evaluation metrics, and data sets. 2. A static workflow scheduling method that uses statistical methods to predict which scheduling decisions can be improved. 3. A feedback-based approach to scheduling and predictive resource allocation, extensively evaluated using simulation; the results provide insights into the desirable characteristics of scheduling heuristics and prediction models. 4. A prediction model that reduces memory wastage, whose design takes into account the asymmetric costs of overestimation and underestimation, as well as the follow-up costs of prediction errors.
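The asymmetric-cost idea in contribution 4 can be illustrated with a toy allocator. This is a hypothetical sketch, not the thesis's actual model: underestimating peak memory forces a failed attempt plus a retry (expensive), while overestimating merely wastes reserved memory (cheap), so the best allocation sits above the typical peak but not arbitrarily high. All numbers and cost weights below are invented for the example.

```python
def best_allocation(observed_peaks, candidates, waste_cost=1.0, retry_cost=10.0):
    """Pick the candidate allocation with the lowest average cost
    over historical peak-memory observations (in GB)."""
    def cost(alloc):
        total = 0.0
        for peak in observed_peaks:
            if alloc >= peak:
                total += waste_cost * (alloc - peak)      # wasted reservation
            else:
                total += retry_cost + waste_cost * alloc  # failure + retry overhead
        return total / len(observed_peaks)
    return min(candidates, key=cost)

peaks = [2.1, 2.4, 2.2, 3.9, 2.3]  # hypothetical peak-memory history
alloc = best_allocation(peaks, candidates=[2.5, 3.0, 4.0, 8.0])
```

With a symmetric cost the allocator would hug the mean peak; the heavy retry penalty pushes it to cover the 3.9 GB outlier while the waste penalty rules out the oversized 8.0 GB choice.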
6

Kasinger, Charles D. "A periodic scheduling heuristic for mapping iterative task graphs onto distributed memory multiprocessors". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA286047.

7

Koman, Charles Brian. "A tool for efficient execution and development of repetitive task graphs on a distributed memory multiprocessor". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA305995.

8

Pop, Ruxandra. "Mapping Concurrent Applications to Multiprocessor Systems with Multithreaded Processors and Network on Chip-Based Interconnections". Licentiate thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-64256.

Abstract
Network on Chip (NoC) architectures provide scalable platforms for designing Systems on Chip (SoC) with a large number of cores. Developing products and applications using an NoC architecture offers many challenges and opportunities. A tool which can map an application or a set of applications to a given NoC architecture will be essential. In this thesis we first survey current techniques and then present our proposals for mapping and scheduling concurrent applications to NoCs with multithreaded processors as computational resources. NoC platforms are basically a special class of Multiprocessor Embedded Systems (MPES). Conventional MPES architectures are mostly bus-based and thus exposed to potential difficulties regarding scalability and reusability. There has been a lot of research on MPES development, including work on mapping and scheduling of applications, and many of these results can also be applied to NoC platforms. Mapping and scheduling are known to be computationally hard problems. A large range of exact and approximate optimization algorithms have been proposed for solving these problems, including Branch-and-Bound (BB), constructive and transformative heuristics such as List Scheduling (LS), Genetic Algorithms (GA), and various types of Mathematical Programming algorithms. Concurrent applications capture the multifunctional nature of a typical embedded system. Concurrent applications can be executed on an NoC, which provides large computational power through multiple on-chip computational resources. Improving the time performance of concurrent applications running on NoC architectures depends mainly on the ability of mapping and scheduling methodologies to exploit the Thread Level Parallelism (TLP) of concurrent applications through the available NoC parallelism.
Matching the architectural parallelism to the application concurrency to obtain good performance-cost tradeoffs is another aspect of the problem. Multithreading is a technique for hiding the long latencies of memory accesses through the overlapped execution of several threads. Recently, Multi-Threaded Processors (MTPs) have been designed providing the architectural infrastructure to concurrently execute multiple threads at the hardware level, which usually results in a very low context switching overhead. Simultaneous Multi-Threaded Processors (SMTPs) are superscalar processor architectures which adaptively exploit the coarse-grain and fine-grain parallelism of applications by simultaneously executing instructions from several thread contexts. In this thesis we make a case for using SMTPs and MTPs as NoC resources and show that such a multiprocessor architecture provides better time performance than an NoC with solely General-purpose Processors (GP). We have developed a methodology for task mapping and scheduling to an NoC with mixed SMTP, MTP, and GP resources, which aims to maximize the time performance of concurrent applications and to satisfy their soft deadlines. The developed methodology was evaluated on many configurations of NoC-based platforms with SMTP, MTP, and GP resources. The experimental results demonstrate that the use of SMTPs and MTPs in NoC platforms can significantly speed up applications.
9

Gurhem, Jérôme. "Paradigmes de programmation répartie et parallèle utilisant des graphes de tâches pour supercalculateurs post-pétascale". Thesis, Lille, 2021. http://www.theses.fr/2021LILUI005.

Abstract
Since the middle of the 1990s, message-passing libraries have been the most widely used technology for implementing parallel and distributed applications. However, they may not be an efficient enough solution on exascale machines, since scalability issues will appear as computing resources increase. Task-based programming models can be used, for example, to avoid collective communications across all resources, such as reductions, broadcasts, or gathers, by transforming them into multiple operations on tasks. These operations can then be scheduled by the scheduler to place the data and computations in a way that optimizes and reduces data communications. The main objective of this thesis is to study what task-based programming should be for scientific applications and to propose a specification for such distributed and parallel programming, by experimenting with several simplified representations of scientific applications important to TOTAL, as well as classical dense and sparse linear methods. During the dissertation, several programming languages and paradigms are studied. Dense linear methods for solving linear systems, sequences of sparse matrix-vector products, and the Kirchhoff seismic pre-stack depth migration are studied and implemented as task-based applications. A taxonomy based on several of these languages and paradigms is proposed. Software was developed using these programming models for each simplified application. As a result of this research, a methodology for parallel task programming is proposed, optimizing data movements in general and for the targeted scientific applications in particular.
10

Bouguelia, Sara. "Modèles de dialogue et reconnaissance d'intentions composites dans les conversations Utilisateur-Chatbot orientées tâches". Electronic Thesis or Diss., Lyon 1, 2023. http://www.theses.fr/2023LYO10106.

Abstract
Dialogue systems (or simply chatbots) are in very high demand these days. They enable the understanding of user needs (or user intents) expressed in natural language, and fulfill such intents by invoking the appropriate back-end APIs (Application Programming Interfaces). Chatbots are famed for their easy-to-use interface and gentle learning curve, requiring only one of humans' most innate abilities: the use of natural language. Continuous improvements in Artificial Intelligence (AI) and Natural Language Processing (NLP), together with a countless number of devices, allow real-world tasks (e.g., making a reservation) to be performed through natural language-based interactions between users and a large number of software-enabled services. Nonetheless, chatbot development is still in its preliminary stage, and several theoretical and technical challenges need to be addressed. One challenge stems from the wide range of utterance variations in open-ended human-chatbot interactions. Additionally, there is a vast space of software services that may be unknown at development time. Natural human conversations can be rich, potentially ambiguous, and express complex, context-dependent intents. Traditional business process and service composition modeling and orchestration techniques are limited in supporting such conversations because they usually assume a priori expectations of what information and applications will be accessed and how users will explore these sources and services. Limiting conversations to a process model means that we can support only a small fraction of possible conversations. While existing advances in NLP and Machine Learning (ML) techniques automate various tasks such as intent recognition, the synthesis of API calls to support a broad range of potentially complex user intents is still largely a manual, ad hoc, and costly process. This thesis project aims at advancing the fundamental understanding of cognitive services engineering. We contribute novel abstractions and techniques focusing on the synthesis of API calls to support a broad range of potentially complex user intents, and propose reusable and extensible techniques to recognize and realize complex intents during human-chatbot-service interactions. These abstractions and techniques seek to unlock the seamless and scalable integration of natural language-based conversations with software-enabled services.
11

Lai, Lein-Fu, and 賴聯福. "Task-Based Specifications through Conceptual Graphs". Thesis, 1995. http://ndltd.ncl.edu.tw/handle/78938169182813236389.

Abstract
Master's thesis
National Central University
Institute of Information and Electronic Engineering
1994 (ROC year 83)
In this paper, we propose the use of conceptual graphs to express task-based specifications, in which the specification is driven by the task structure of problem-solving knowledge, pieces of the specification can be refined iteratively, and verification is performed for a single layer or between layers. Issues in mapping task-based specifications into conceptual graphs are identified, for example, the representation of constraints; the relationship between a task, its constraints, and its state model; rigid and soft postconditions; the distinction between the follow and immediately-follow operators; and the composition operator in task state expressions. To alleviate these problems, the notion of demons has been adopted to represent task state expressions as well as state models, constraint overlays have been used for describing the state model, and canonical formation rules have been applied to compose task state expressions. These are illustrated using the problem domain of R1/SOAR. In addition, we propose verification for both the model specification and the process specification. Constraint networks, constraint satisfaction, task progression, and comparison of specificity are used to verify task-based specifications.
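The constraint-network verification mentioned at the end can be illustrated with a generic backtracking consistency check. This is only a sketch of the general technique, not the paper's actual verifier: the variables, domains, and constraints below are made-up stand-ins for specification constraints.

```python
def consistent(domains, constraints, assignment=None):
    """domains: {var: [values]}; constraints: [(vars, predicate)].
    Returns a satisfying assignment, or None if the network is inconsistent."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Check every constraint whose variables are all assigned so far.
        ok = all(pred(*[assignment[v] for v in vs])
                 for vs, pred in constraints
                 if all(v in assignment for v in vs))
        if ok:
            result = consistent(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]  # backtrack
    return None

# Toy network: a task's duration must exceed its setup time,
# and the total must stay within a deadline of 10.
domains = {"setup": [1, 2, 3], "duration": [2, 4, 8]}
constraints = [
    (("setup", "duration"), lambda s, d: d > s),
    (("setup", "duration"), lambda s, d: s + d <= 10),
]
solution = consistent(domains, constraints)
```

A specification whose constraint network admits no such assignment would be flagged as inconsistent, which is the spirit of the verification step described above.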
12

Chen, Lu-Ann, and 陳履安. "A Study on the Parallel Task Scheduling for Directed Acyclic Graphs". Thesis, 1997. http://ndltd.ncl.edu.tw/handle/59v564.

Abstract
Master's thesis
National Tsing Hua University
Institute of Information Science
1996 (ROC year 85)
DAGs (Directed Acyclic Graphs) are directed graphs with no cycles. The nodes of such a graph are partially ordered, and each node and each edge carries a weight representing computation or communication overhead. Properties of the nodes can help a scheduler decide their importance; the scheduler uses these properties to schedule the entire project off-line or at compile time. This kind of scheduling is called static scheduling or DAG scheduling.   The problem of non-preemptively scheduling a set of n partially ordered tasks on p identical processors is a rich area of research, and a large body of literature has been produced. However, the higher the quality of the scheduling results, the higher the time complexity of the algorithm.   The Earliest Time First (ETF) algorithm was the first algorithm to take communication delays between processes into consideration. Its results are quite good, and its complexity is O(n²p), where n and p are the numbers of tasks and processors.   Later, a method called the Fast Assignment using Search Technique (FAST) algorithm, with O(e) complexity, was proposed, where e is the number of edges in the input graph. The FAST algorithm uses static scheduling as its initial scheduling technique and then modifies the initial schedule using a local search technique.   In this thesis, a new algorithm with O(e) time complexity for DAG scheduling is proposed. It uses a new method called Update Parents (UP) and a priority assignment method for each task that differs from FAST's. An improvement to the ETF algorithm is also proposed, reducing its time complexity to O(n²). Let the running time denote the time required for a program to complete an algorithm, and the scheduled time (or makespan) denote the scheduling result generated by an algorithm.
In a testbed with 40 processors and about 432 tasks, the proposed algorithm runs 6% slower than FAST but produces a makespan that is 1.3% better. In the same environment, ETF is 677% slower than FAST with a result that is 1.88% better, and the improved ETF runs 35.7% faster than the original.
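The ETF-style scheduling discussed above can be sketched as a simplified list scheduler: repeatedly place a ready task on the processor where it can start earliest, charging a communication delay when a parent ran on a different processor. This is an illustrative toy, not the thesis's algorithm or the original ETF implementation; the DAG and all weights are invented for the example.

```python
def etf_schedule(tasks, deps, comm, num_procs):
    """tasks: {name: compute_time}; deps: {child: [parents]};
    comm: {(parent, child): delay}; returns {task: (proc, start, end)}."""
    schedule = {}
    proc_free = [0.0] * num_procs
    remaining = set(tasks)
    while remaining:
        # Tasks whose parents have all been scheduled are ready.
        ready = [t for t in remaining
                 if all(p in schedule for p in deps.get(t, []))]
        best = None
        for t in ready:
            for proc in range(num_procs):
                # Data from a parent on another processor arrives late.
                data_ready = max(
                    [schedule[p][2] + (comm.get((p, t), 0.0)
                                       if schedule[p][0] != proc else 0.0)
                     for p in deps.get(t, [])] or [0.0])
                start = max(proc_free[proc], data_ready)
                if best is None or start < best[0]:
                    best = (start, t, proc)
        start, t, proc = best
        schedule[t] = (proc, start, start + tasks[t])
        proc_free[proc] = start + tasks[t]
        remaining.remove(t)
    return schedule

tasks = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
deps = {"c": ["a", "b"], "d": ["a"]}
comm = {("a", "c"): 1.0, ("b", "c"): 1.0, ("a", "d"): 2.0}
sched = etf_schedule(tasks, deps, comm, num_procs=2)
makespan = max(end for _, _, end in sched.values())
```

Evaluating every (ready task, processor) pair each round is what drives ETF's O(n²p) cost; the FAST-style approaches mentioned above trade this exhaustive search for a cheap initial schedule refined by local search.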
13

Fu, Lai Lien, and 賴聯福. "Task-Based Conceptual Graphs as a Basis for Automated Software Engineering". Thesis, 1999. http://ndltd.ncl.edu.tw/handle/75973147143239865136.

Abstract
Doctoral thesis
National Central University
Institute of Information Engineering
1998 (ROC year 87)
The traditional software paradigm contains two fundamental flaws which exacerbate the maintenance of software systems: (1) there is no technology for managing the knowledge-intensive activities which constitute the software development processes, and (2) maintenance is performed on source code. Automated software engineering directly obviates both of these flaws through full automation of the compilation process. It provides the automation of software development processes including specification acquisition, verification, automatic programming, and maintenance. This automation avoids having people produce and maintain software, which reduces cost and development times while increasing reliability. As was pointed out by Rich and Waters, there are three technical issues in automated software engineering. First, the specification language used for communication between users and the system can be a natural language, special-purpose languages, logical formalisms, or very high level languages. Logic is the most powerful formal description language known; as a result, it is reasonable to suppose that it might make a good communication medium between users and the system. Unfortunately, there are two fundamental barriers to the use of logical formalisms: they are computationally intractable and are difficult for users to write and understand. Second, the approach to mapping a specification into an implementation can use procedural methods, deductive methods, transformational methods, or inspection methods. A major strength of transformational methods is that they provide a very clear representation for certain kinds of programming knowledge; for this reason, transformational methods in some form dominate current research in automatic programming. Finally, the system must have domain knowledge to interpret the terms used by users and programming knowledge to produce programs.
Representing and using such knowledge is a major challenge. The knowledge representation should have several properties: (1) expressiveness: the representation must be able to express as many different kinds of knowledge as possible; (2) convenient combination: the combination of different kinds of knowledge must be easy to implement; (3) semantic soundness: the representation must be based on a mathematical foundation; and (4) manipulability: the representation must be easy to manipulate for the purpose of improvement. Conceptual graphs, as a general knowledge representation formalism, have been widely adopted in knowledge and software engineering. As a first-order logic vehicle, conceptual graphs are powerful in describing static facts, constraints, and relationships in the real world. With their direct mapping to natural languages, conceptual graphs can serve as an intermediate language for translating computer-oriented formalisms to and from natural languages. With their graphic representation and mapping to first-order logic, they can serve as a readable and formal specification language. Though conceptual graphs have the potential to be an excellent formalism for automated software engineering, their limitations in performing computation pose significant barriers to the automatic execution of software systems through conceptual graphs. Hence, a method for operationalizing conceptual graphs needs to be worked out. In this dissertation, task-based conceptual graphs (TBCG) are used as a basis for automating the software development processes. We use task-based conceptual graphs to capture and represent a conceptual model of the problem domain. To construct a conceptual model, the task-based specification methodology serves as the mechanism to structure the knowledge captured in conceptual models, whereas conceptual graphs (CGs) are adopted as the formalism to express task-based specifications and to provide a reasoning capability for the purpose of automation.
The contextual retrieval mechanism offers a multiple-viewed approach that helps acquire specifications and elicit information in context. To automate the reuse of existing task-based conceptual graphs, it uses fuzzy similarity to automatically match the surrounding contexts of two specifications and employs purpose-directed analogy to retrieve reusable specifications. The verification of task-based conceptual graphs is performed on: (1) the model specifications of a task, by checking the consistency of constraint networks through a constraint satisfaction algorithm and by repairing constraint violations using a constraint relaxation method; and (2) the process specifications of a task, by checking interlevel and intralevel consistency based on the operators and rules of inference inherent in conceptual graphs. Once task-based conceptual graphs have been constructed and verified, a blackboard system automatically transforms the TBCG specifications into an executable program in CLIPS. Maintenance can then be performed directly on the specification rather than on the source code. Small logical changes may have large and complex effects when maintenance is performed at the implementation level, whereas maintenance at the specification level is fairly simple and explainable. Hence, the problems of the traditional software paradigm are avoided by automatic reimplementation from the revised specifications. Our approach offers several benefits for automating software development processes. First, using conceptual graphs together with task-based specifications to specify software requirements captures richer semantics than task-based specifications or conceptual graphs alone. The expressive power of conceptual graphs facilitates capturing semantics that task-based specifications find difficult to express.
Second, requirements specifications for different views are represented as conceptual graphical specifications and are tightly integrated under the general notion of tasks; in addition, artifacts constructed in each model are sharable. Third, the contextual retrieval mechanism provides an incremental context acquisition approach that helps elicit detailed contextual information. The efficiency of reusing requirements specifications is also increased by excluding irrelevant contexts to reduce the number of candidates for selection. Fourth, the computation of fuzzy similarity not only facilitates the matching between two sets of conceptual graphs but also deals with the uncertainty inherent in contexts that cannot be completely described. Finally, conceptual graphs are operationalized by providing automatic translation into CLIPS; conceptual graphs can then perform computation, solve problems, and simulate processes.
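The translation step this abstract describes, from a graph-structured specification to an executable CLIPS rule, can be sketched roughly as follows. This is a minimal illustration only: the class, the relation names, and the emitted rule format are assumptions for exposition, not the dissertation's actual TBCG blackboard implementation.

```python
# Hypothetical sketch: a tiny conceptual graph whose relation triples
# are compiled into a CLIPS-style defrule. Names are illustrative.

class ConceptualGraph:
    """A conceptual graph as a set of (relation, source, target) triples."""

    def __init__(self):
        self.relations = []

    def add_relation(self, relation, source, target):
        self.relations.append((relation, source, target))

    def to_clips_rule(self, name):
        """Emit a CLIPS defrule whose antecedent matches every relation
        in the graph and whose consequent asserts a satisfaction fact."""
        patterns = " ".join(
            f"({rel} {src} {dst})" for rel, src, dst in self.relations
        )
        return f"(defrule {name} {patterns} => (assert ({name}-satisfied)))"

# Build the classic "a person eats a pie" example graph and compile it.
cg = ConceptualGraph()
cg.add_relation("agent", "Eat", "Person")
cg.add_relation("object", "Eat", "Pie")
rule = cg.to_clips_rule("eat-event")
```

The point of such a compilation is that the declarative graph stays the single maintained artifact, while the rule text is regenerated whenever the specification changes.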
APA, Harvard, Vancouver, ISO, etc. styles
14

Ojha, Prakhar. "Utilizing Worker Groups And Task Dependencies in Crowdsourcing". Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4265.

Full text
Abstract
Crowdsourcing has emerged as a convenient mechanism for collecting human judgments on a variety of tasks, ranging from document and image classification to scientific experimentation. In recent times, however, crowdsourcing has evolved from solving simpler tasks, like recognizing objects in images, to more complex tasks such as collaborative journalism, language translation, and product design. Unlike simpler micro-tasks performed by a single worker, these complex tasks require a group of workers and greater resources. In such scenarios, where groups of participants are the atomic units, it is non-trivial to distinguish workers (who contribute positively) from idlers (who do not contribute to the group task) using only the group's performance. The first part of this thesis studies the problem of distinguishing workers from idlers without assuming any prior knowledge of individual skills, treating "groups" as the smallest observable unit of evaluation. We draw upon the group-testing literature and give bounds on the minimum number of groups required to identify the quality of subsets of individuals with high confidence. We validate our theory experimentally and report insights on the number of workers and idlers that can be identified, with significant probability, for a given number of group tasks. In most crowdsourcing applications, there exist dependencies among the pool of Human Intelligence Tasks (HITs), and in practical scenarios there are often far more HITs available than can realistically be covered by the limited available budget. Estimating the accuracy of automatically constructed Knowledge Graphs (KG) is one such important application. Automatic construction of large knowledge graphs has gained wide popularity in recent times.
These KGs, such as NELL and the Google Knowledge Vault, consist of thousands of predicate relations (e.g., isPerson, isMayorOf) and millions of their instances (e.g., (Bill de Blasio, isMayorOf, New York City)). Estimating the accuracy of such KGs is a challenging problem due to their size and diversity. In the second part of this study, we show that standard single-task crowdsourcing is sub-optimal and very expensive, as it ignores dependencies among predicates and instances. We propose Relational Crowdsourcing (RelCrowd) to overcome this challenge, where tasks are created while taking dependencies among predicates and instances into account. We apply this framework in the context of large-scale Knowledge Graph Evaluation (KGEval) and demonstrate its effectiveness through extensive experiments on real-world datasets.
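The group-testing idea behind the first part of this thesis can be sketched with a toy decoder. The sketch assumes a deliberately strong noise-free model, that a group's task succeeds if and only if the group contains no idler, which is an illustrative simplification and not the thesis's actual model or bounds:

```python
# Minimal group-testing decoding sketch (assumed noise-free model):
# a group succeeds iff every member is a productive worker, so every
# member of a successful group is certified as a worker, and anyone
# never seen in a successful group remains a suspect idler.

def decode_workers(groups, outcomes):
    """groups: list of sets of participant ids.
    outcomes[i]: True if group i's task succeeded.
    Returns (certified_workers, suspect_idlers)."""
    workers = set()
    for members, succeeded in zip(groups, outcomes):
        if succeeded:          # under the assumed model: no idlers here
            workers |= members
    everyone = set().union(*groups)
    return workers, everyone - workers

# Three group tasks over four participants; only the first succeeds.
groups = [{1, 2}, {3, 4}, {1, 3}]
outcomes = [True, False, False]
workers, suspects = decode_workers(groups, outcomes)
# workers == {1, 2}; suspects == {3, 4}
```

Note that a failed group only tells us it contains at least one idler; pinning down which member is the idler is exactly what drives the thesis's bounds on how many group tests are needed.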
15

Ravindran, Rajeswaran Chockalingapuram. "Scheduling Heuristics for Maximizing the Output Quality of Iris Task Graphs in Multiprocessor Environment with Time and Energy Bounds". 2012. https://scholarworks.umass.edu/theses/826.

Full text
Abstract
Embedded real-time applications are often subject to time and energy constraints. Real-time applications are usually characterized by a logically separable set of tasks with precedence constraints, and the computational effort behind each task is responsible for a physical functionality of the embedded system. In this work we define theoretical models relating the quality of the physical functionality to the computational load of the tasks, and develop optimization problems that maximize the quality of the system subject to constraints such as time and energy. The novelties of this work are threefold. First, it maximizes the final output quality of a set of precedence-constrained tasks whose quality can be expressed with appropriate cost functions; we have developed heuristic scheduling algorithms for maximizing the quality of the final output of embedded applications. Second, it accounts for the fact that the output quality of a task has a noticeable effect on the output quality of the tasks that depend on it. Finally, the run-time characteristics of the tasks are modeled by simulating a distribution of run times, which yields an averaged output quality for the system rather than an unsampled quality based on arbitrary run times. Many real-time tasks fall into the IRIS (Increased Reward with Increased Service) category: such tasks can be prematurely terminated at the cost of poorer-quality output. In this work, we study the scheduling of IRIS tasks on multiprocessors. IRIS tasks may be dependent, with one task feeding other tasks in a Task Precedence Graph (TPG). Task output quality depends on the quality of the input data as well as on the execution time that is allowed. We study the allocation/scheduling of IRIS TPGs on multiprocessors to maximize output quality.
The heuristics developed can effectively reclaim resources when tasks finish earlier than their estimated worst-case execution time. Dynamic voltage scaling is used to manage energy consumption and keep it within specified bounds.
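The core IRIS trade-off, output quality that grows with diminishing returns in the service time a task receives, can be illustrated with a greedy budget allocator. The concave reward curves, the fixed step size, and the greedy rule below are illustrative assumptions for a single-processor toy case, not the thesis's multiprocessor heuristics:

```python
# Hedged sketch of the IRIS trade-off: each task's reward is a concave
# function of its service time, and a shared time budget is handed out
# in fixed steps to whichever task currently has the largest marginal
# reward gain. Task names and reward curves are made up for the demo.

import heapq
import math

def allocate_service(tasks, budget, step=1.0):
    """tasks: dict name -> concave reward function of service time.
    Returns dict name -> total service time granted from the budget."""
    service = {name: 0.0 for name in tasks}
    heap = []
    for name, reward in tasks.items():
        gain = reward(step) - reward(0.0)
        heapq.heappush(heap, (-gain, name))   # max-heap via negation
    remaining = budget
    while remaining >= step and heap:
        _, name = heapq.heappop(heap)         # best marginal gain
        service[name] += step
        remaining -= step
        reward = tasks[name]
        gain = reward(service[name] + step) - reward(service[name])
        heapq.heappush(heap, (-gain, name))   # re-insert with new gain
    return service

# Two IRIS-style tasks with diminishing-return reward curves.
tasks = {
    "decode": lambda t: math.log1p(t),
    "filter": lambda t: math.log1p(t / 2),
}
alloc = allocate_service(tasks, budget=4.0)
```

With concave rewards this greedy rule always spends the next time slice where it buys the most quality, which mirrors why prematurely terminating an IRIS task costs output quality rather than correctness.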
