Selection of scientific literature on the topic "Batches of task graphs"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Batches of task graphs".

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Batches of task graphs"

1

DIAKITÉ, SÉKOU, LORIS MARCHAL, JEAN-MARC NICOD und LAURENT PHILIPPE. „PRACTICAL STEADY-STATE SCHEDULING FOR TREE-SHAPED TASK GRAPHS“. Parallel Processing Letters 21, Nr. 04 (Dezember 2011): 397–412. http://dx.doi.org/10.1142/s0129626411000291.

Abstract:
In this paper, we focus on the problem of scheduling a collection of similar task graphs on a heterogeneous platform, when the task graph is an intree. We rely on steady-state scheduling techniques, and aim at optimizing the throughput of the system. Contrarily to previous studies, we concentrate on practical aspects of steady-state scheduling, when dealing with a collection (or batch) of limited size. We focus here on two optimizations. The first one consists in reducing the processing time of each task graph, thus making steady-state scheduling applicable to smaller batches. The second one consists in degrading a little the optimal-throughput solution to get a simpler solution, more efficient on small batches. We present our optimizations in details, and show that they both help to overcome the limitation of steady-state scheduling: our simulations show that we are able to reach a better efficiency on small batches, to reduce the size of the buffers, and to significantly decrease the processing time of a single task graph (latency).
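Purely as an illustration of the batch-scheduling setting this abstract describes, the sketch below greedily list-schedules a batch of identical in-tree task graphs on a toy heterogeneous platform and reports makespan, throughput, and first-graph latency. It is not the steady-state algorithm of the paper; the task costs, processor speeds, and zero-communication assumption are invented for the example.

```python
# Toy sketch (not the paper's steady-state scheduler): earliest-finish-time
# list scheduling of a batch of identical in-tree task graphs on two
# heterogeneous processors. Costs and speeds are made up; communication is ignored.
TREE = {"leaf1": "mid", "leaf2": "mid", "mid": "root", "root": None}  # child -> parent
COST = {"leaf1": 2.0, "leaf2": 3.0, "mid": 4.0, "root": 1.0}          # work per task
SPEEDS = [1.0, 2.0]                                                   # processor speeds
ORDER = ["leaf1", "leaf2", "mid", "root"]                             # children before parents

def schedule_batch(batch_size):
    ready_at = [0.0] * len(SPEEDS)     # time at which each processor becomes free
    finish = {}                        # (graph index, task) -> finish time
    for g in range(batch_size):
        for t in ORDER:
            est = max((finish[(g, c)] for c, p in TREE.items() if p == t), default=0.0)
            # pick the processor on which this task finishes earliest
            f, proc = min((max(est, ready_at[p]) + COST[t] / SPEEDS[p], p)
                          for p in range(len(SPEEDS)))
            ready_at[proc] = f
            finish[(g, t)] = f
    makespan = max(finish.values())
    return makespan, batch_size / makespan, finish[(0, "root")]

makespan, throughput, latency = schedule_batch(20)
print(f"makespan={makespan:.1f}  throughput={throughput:.3f} graphs/unit  first-graph latency={latency:.1f}")
```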
2

Wang, Xun, Chaogang Zhang, Ying Zhang, Xiangyu Meng, Zhiyuan Zhang, Xin Shi und Tao Song. „IMGG: Integrating Multiple Single-Cell Datasets through Connected Graphs and Generative Adversarial Networks“. International Journal of Molecular Sciences 23, Nr. 4 (14.02.2022): 2082. http://dx.doi.org/10.3390/ijms23042082.

Abstract:
There is a strong need to eliminate batch-specific differences when integrating single-cell RNA-sequencing (scRNA-seq) datasets generated under different experimental conditions for downstream task analysis. Existing batch correction methods usually transform different batches of cells into one preselected “anchor” batch or a low-dimensional embedding space, and cannot take full advantage of useful information from multiple sources. We present a novel framework, called IMGG, i.e., integrating multiple single-cell datasets through connected graphs and generative adversarial networks (GAN) to eliminate nonbiological differences between different batches. Compared with current methods, IMGG shows excellent performance on a variety of evaluation metrics, and the IMGG-corrected gene expression data incorporate features from multiple batches, allowing for downstream tasks such as differential gene expression analysis.
3

Wang, Yue, Ruiqi Xu, Xun Jian, Alexander Zhou und Lei Chen. „Towards distributed bitruss decomposition on bipartite graphs“. Proceedings of the VLDB Endowment 15, Nr. 9 (Mai 2022): 1889–901. http://dx.doi.org/10.14778/3538598.3538610.

Abstract:
Mining cohesive subgraphs on bipartite graphs is an important task. The k-bitruss is one of many popular cohesive subgraph models, which is the maximal subgraph where each edge is contained in at least k butterflies. The bitruss decomposition problem is to find all k-bitrusses for k ≥ 0. Dealing with large graphs is often beyond the capability of a single machine due to its limited memory and computational power, leading to a need for efficiently processing large graphs in a distributed environment. However, all current solutions are for a single machine and a centralized environment, where processors can access the graph or auxiliary indexes randomly and globally. It is difficult to directly deploy such algorithms on a shared-nothing model. In this paper, we propose distributed algorithms for bitruss decomposition. We first propose SC-HBD as the baseline, which uses H-function to define bitruss numbers and computes them iteratively to a fix point in parallel. We then introduce a subgraph-centric peeling method SC-PBD, which peels edges in batches over different butterfly complete subgraphs. We then introduce local indexes on each fragment, study the butterfly-aware edge partition problem including its hardness, and propose an effective partitioner. Finally we present the bitruss butterfly-complete subgraph concept, and divide and conquer DC-BD method with optimization strategies. Extensive experiments show the proposed methods solve graphs with 30 trillion butterflies in 2.5 hours, while existing parallel methods under shared-memory model fail to scale to such large graphs.
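To make the k-bitruss definition concrete, here is a naive single-machine peeling sketch on a toy bipartite graph; edge support is recomputed from scratch in every round, and none of the distributed SC-HBD/SC-PBD/DC-BD machinery from the paper is reproduced.

```python
# Naive sketch of k-bitruss peeling (illustration only, quadratic recomputation).
# Support of an edge = number of butterflies (2x2 bicliques) containing it.
def butterfly_support(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    supp = {}
    for u, v in edges:
        s = 0
        for u2 in adj[v] - {u}:              # other vertices on u's side adjacent to v
            s += len(adj[u2] & adj[u]) - 1   # shared neighbours of u and u2 besides v
        supp[(u, v)] = s
    return supp

def k_bitruss(edges, k):
    edges = set(edges)
    while True:
        supp = butterfly_support(edges)
        weak = {e for e in edges if supp[e] < k}
        if not weak:
            return edges                     # every remaining edge lies in >= k butterflies
        edges -= weak

# left vertices are letters, right vertices are numbers
E = {("a", 1), ("a", 2), ("b", 1), ("b", 2), ("c", 2), ("c", 3)}
print(k_bitruss(E, k=1))                     # keeps the single butterfly on a, b, 1, 2
```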
4

Jia, Haozhang. „Graph sampling based deep metric learning for cross-view geo-localization“. Journal of Physics: Conference Series 2711, Nr. 1 (01.02.2024): 012004. http://dx.doi.org/10.1088/1742-6596/2711/1/012004.

Abstract:
Cross-view geo-localization has emerged as a novel computer vision task that has garnered increasing attention. This is primarily attributed to its practical significance in the domains of drone navigation and drone-view localization. Moreover, the work is particularly demanding due to its inherent requirement for cross-domain matching. There are generally two ways to train a neural network to match similar satellite and drone-view images: representation learning with classifiers and identity loss, and metric learning with pairwise matching within mini-batches. The first takes extra computing and memory costs in large-scale learning, so this paper follows a person-reidentification method called QAConv-GS, and implements a graph sampler to mine the hardest data to form mini-batches, and a QAConv module with extra attention layers appended to compute similarity between image pairs. Batch-wise OHEM triplet loss is then used for model training. With these implementations and adaptations combined, this paper significantly improves the state of the art on the challenging University-1652 dataset.
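For illustration, the following is a bare-bones version of a batch-hard ("OHEM") triplet objective of the kind mentioned above, computed over random toy embeddings with NumPy; the paper's graph sampler and QAConv similarity module are not shown, and the margin is an arbitrary value.

```python
# Minimal batch-hard triplet loss over a mini-batch of embeddings (toy data).
import numpy as np

def batch_hard_triplet_loss(emb, labels, margin=0.3):
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)  # pairwise distances
    same = labels[:, None] == labels[None, :]
    idx = np.arange(len(emb))
    loss = 0.0
    for i in range(len(emb)):
        pos = d[i][same[i] & (idx != i)]   # distances to same-identity samples
        neg = d[i][~same[i]]               # distances to other identities
        if len(pos) and len(neg):
            loss += max(0.0, pos.max() - neg.min() + margin)  # hardest positive vs. hardest negative
    return loss / len(emb)

emb = np.random.rand(8, 16)                    # 8 samples, 16-dim embeddings
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])    # two samples per identity
print(batch_hard_triplet_loss(emb, labels))
```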
5

Angriman, Eugenio, Michał Boroń und Henning Meyerhenke. „A Batch-dynamic Suitor Algorithm for Approximating Maximum Weighted Matching“. ACM Journal of Experimental Algorithmics 27 (31.12.2022): 1–41. http://dx.doi.org/10.1145/3529228.

Abstract:
Matching is a popular combinatorial optimization problem with numerous applications in both commercial and scientific fields. Computing optimal matchings w.r.t. cardinality or weight can be done in polynomial time; still, this task can become infeasible for very large networks. Thus, several approximation algorithms that trade solution quality for a faster running time have been proposed. For networks that change over time, fully dynamic algorithms that efficiently maintain an approximation of the optimal matching after a graph update have been introduced as well. However, no semi- or fully dynamic algorithm for (approximate) maximum weighted matching has been implemented. In this article, we focus on the problem of maintaining a 1/2-approximation of a maximum weighted matching (MWM) in fully dynamic graphs. Limitations of existing algorithms for this problem are (i) high constant factors in their time complexity, (ii) the fact that none of them supports batch updates, and (iii) the lack of a practical implementation, meaning that their actual performance on real-world graphs has not been investigated. We propose and implement a new batch-dynamic 1/2-approximation algorithm for MWM based on the Suitor algorithm and its local edge domination strategy [Manne and Halappanavar, IPDPS 2014]. We provide a detailed analysis of our algorithm and prove its approximation guarantee. Despite having a worst-case running time of O(n + m) for a single graph update, our extensive experimental evaluation shows that our algorithm is much faster in practice. For example, compared to a static recomputation with sequential Suitor, single-edge updates are handled up to 10^5× to 10^6× faster, while batches of 10^4 edge updates are handled up to 10^2× to 10^3× faster.
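For context, below is a compact sketch of the static Suitor algorithm with local edge domination, the building block the batch-dynamic algorithm extends; the graph and weights are toy values, and none of the dynamic or batch-update machinery from the article is included.

```python
# Static Suitor sketch: 1/2-approximate maximum weighted matching (toy data).
def suitor_matching(adj):
    """adj: dict u -> dict v -> weight, with both directions of each edge present."""
    suitor = {u: None for u in adj}   # current best proposer per vertex
    ws = {u: 0.0 for u in adj}        # weight offered by that proposer
    for start in adj:
        u, done = start, False
        while not done:
            # heaviest neighbour that would prefer u over its current suitor
            best, best_w = None, 0.0
            for v, w in adj[u].items():
                if w > ws[v] and w > best_w:
                    best, best_w = v, w
            if best is None:
                done = True
            else:
                displaced = suitor[best]
                suitor[best], ws[best] = u, best_w
                if displaced is None:
                    done = True
                else:
                    u = displaced     # the displaced vertex proposes again
    return {tuple(sorted((u, v))) for u, v in suitor.items()
            if v is not None and suitor[v] == u}

G = {"a": {"b": 4, "c": 2}, "b": {"a": 4, "c": 3},
     "c": {"a": 2, "b": 3, "d": 1}, "d": {"c": 1}}
print(suitor_matching(G))   # {('a', 'b'), ('c', 'd')}
```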
6

Zhang, Xin, Yanyan Shen, Yingxia Shao und Lei Chen. „DUCATI: A Dual-Cache Training System for Graph Neural Networks on Giant Graphs with the GPU“. Proceedings of the ACM on Management of Data 1, Nr. 2 (13.06.2023): 1–24. http://dx.doi.org/10.1145/3589311.

Abstract:
Recently Graph Neural Networks (GNNs) have achieved great success in many applications. The mini-batch training has become the de-facto way to train GNNs on giant graphs. However, the mini-batch generation task is extremely expensive which slows down the whole training process. Researchers have proposed several solutions to accelerate the mini-batch generation, however, they (1) fail to exploit the locality of the adjacency matrix, (2) cannot fully utilize the GPU memory, and (3) suffer from the poor adaptability to diverse workloads. In this work, we propose DUCATI, a Dual-Cache system to overcome these drawbacks. In addition to the traditional Nfeat-Cache, DUCATI introduces a new Adj-Cache to further accelerate the mini-batch generation and better utilize GPU memory. DUCATI develops a workload-aware Dual-Cache Allocator which adaptively finds the best cache allocation plan under different settings. We compare DUCATI with various GNN training systems on four billion-scale graphs under diverse workload settings. The experimental results show that in terms of training time, DUCATI can achieve up to 3.33 times speedup (2.07 times on average) compared to DGL and up to 1.54 times speedup (1.32 times on average) compared to the state-of-the-art Single-Cache systems. We also analyze the time-accuracy trade-offs of DUCATI and four state-of-the-art GNN training systems. The analysis results offer users some guidelines on system selection regarding different input sizes and hardware resources.
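Purely to illustrate the idea of keeping frequently accessed node features in fast memory during mini-batch generation (the Nfeat-Cache side of such a design), here is a toy sketch where a Python dict stands in for GPU memory; DUCATI's Adj-Cache and workload-aware allocator are not modeled, and all sizes and access patterns are invented.

```python
# Toy "hottest nodes first" feature cache for mini-batch generation (illustration only).
import numpy as np

NUM_NODES, DIM, CACHE_SLOTS = 10_000, 32, 1_000
cpu_feats = np.random.rand(NUM_NODES, DIM).astype(np.float32)  # host-side feature table
access_freq = np.random.zipf(2.0, NUM_NODES)                   # skewed access statistics
hot = np.argsort(-access_freq)[:CACHE_SLOTS]                   # cache the hottest nodes
gpu_cache = {int(n): cpu_feats[n] for n in hot}                # stands in for GPU memory

def gather(batch_nodes):
    hits = sum(1 for n in batch_nodes if int(n) in gpu_cache)
    feats = np.stack([gpu_cache.get(int(n), cpu_feats[n]) for n in batch_nodes])
    return feats, hits / len(batch_nodes)

batch = np.random.randint(0, NUM_NODES, size=256)              # one toy mini-batch
_, hit_rate = gather(batch)
print(f"cache hit rate: {hit_rate:.2%}")
```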
7

Vo, Tham, und Phuc Do. „GOW-Stream: A novel approach of graph-of-words based mixture model for semantic-enhanced text stream clustering“. Intelligent Data Analysis 25, Nr. 5 (15.09.2021): 1211–31. http://dx.doi.org/10.3233/ida-205443.

Abstract:
Recently, rapid growth of social networks and online news resources from Internet have made text stream clustering become an insufficient application in multiple domains (e.g.: text retrieval diversification, social event detection, text summarization, etc.) Different from traditional static text clustering approach, text stream clustering task has specific key challenges related to the rapid change of topics/clusters and high-velocity of coming streaming document batches. Recent well-known model-based text stream clustering models, such as: DTM, DCT, MStream, etc. are considered as word-independent evaluation approach which means largely ignoring the relations between words while sampling clusters/topics. It definitely leads to the decrease of overall model accuracy performance, especially for short-length text documents such as comments, microblogs, etc. in social networks. To tackle these existing problems, in this paper we propose a novel approach of graph-of-words (GOWs) based text stream clustering, called GOW-Stream. The application of common GOWs which are generated from each document batch while sampling clusters/topics can support to overcome the word-independent evaluation challenge. Our proposed GOW-Stream is promising to significantly achieve better text stream clustering performance than recent state-of-the-art baselines. Extensive experiments on multiple benchmark real-world datasets demonstrate the effectiveness of our proposed model in both accuracy and time-consuming performances.
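As a small illustration of the graph-of-words representation the abstract relies on, the sketch below links words that co-occur within a sliding window for each document of a toy batch; the window size and documents are arbitrary, and GOW-Stream's clustering model itself is not shown.

```python
# Build a graph-of-words per document: edges link words co-occurring in a window.
from collections import Counter

def graph_of_words(tokens, window=3):
    edges = Counter()
    for i, w in enumerate(tokens):
        for w2 in tokens[i + 1:i + window]:   # words within the sliding window
            if w != w2:
                edges[tuple(sorted((w, w2)))] += 1
    return edges

batch = ["users post short texts about events",
         "short texts about trending events arrive in batches"]
for doc in batch:
    print(graph_of_words(doc.lower().split()))
```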
8

Da San Martino, Giovanni, Alessandro Sperduti, Fabio Aiolli und Alessandro Moschitti. „Efficient Online Learning for Mapping Kernels on Linguistic Structures“. Proceedings of the AAAI Conference on Artificial Intelligence 33 (17.07.2019): 3421–28. http://dx.doi.org/10.1609/aaai.v33i01.33013421.

Abstract:
Kernel methods are popular and effective techniques for learning on structured data, such as trees and graphs. One of their major drawbacks is the computational cost related to making a prediction on an example, which manifests in the classification phase for batch kernel methods, and especially in online learning algorithms. In this paper, we analyze how to speed up the prediction when the kernel function is an instance of the Mapping Kernels, a general framework for specifying kernels for structured data which extends the popular convolution kernel framework. We theoretically study the general model, derive various optimization strategies and show how to apply them to popular kernels for structured data. Additionally, we derive a reliable empirical evidence on semantic role labeling task, which is a natural language classification task, highly dependent on syntactic trees. The results show that our faster approach can clearly improve on standard kernel-based SVMs, which cannot run on very large datasets.
9

Luo, Haoran, Haihong E, Yuhao Yang, Gengxian Zhou, Yikai Guo, Tianyu Yao, Zichen Tang, Xueyuan Lin und Kaiyang Wan. „NQE: N-ary Query Embedding for Complex Query Answering over Hyper-Relational Knowledge Graphs“. Proceedings of the AAAI Conference on Artificial Intelligence 37, Nr. 4 (26.06.2023): 4543–51. http://dx.doi.org/10.1609/aaai.v37i4.25576.

Abstract:
Complex query answering (CQA) is an essential task for multi-hop and logical reasoning on knowledge graphs (KGs). Currently, most approaches are limited to queries among binary relational facts and pay less attention to n-ary facts (n≥2) containing more than two entities, which are more prevalent in the real world. Moreover, previous CQA methods can only make predictions for a few given types of queries and cannot be flexibly extended to more complex logical queries, which significantly limits their applications. To overcome these challenges, in this work, we propose a novel N-ary Query Embedding (NQE) model for CQA over hyper-relational knowledge graphs (HKGs), which include massive n-ary facts. The NQE utilizes a dual-heterogeneous Transformer encoder and fuzzy logic theory to satisfy all n-ary FOL queries, including existential quantifiers (∃), conjunction (∧), disjunction (∨), and negation (¬). We also propose a parallel processing algorithm that can train or predict arbitrary n-ary FOL queries in a single batch, regardless of the kind of each query, with good flexibility and extensibility. In addition, we generate a new CQA dataset WD50K-NFOL, including diverse n-ary FOL queries over WD50K. Experimental results on WD50K-NFOL and other standard CQA datasets show that NQE is the state-of-the-art CQA method over HKGs with good generalization capability. Our code and dataset are publicly available.
10

Auerbach, Joshua, David F. Bacon, Rachid Guerraoui, Jesper Honig Spring und Jan Vitek. „Flexible task graphs“. ACM SIGPLAN Notices 43, Nr. 7 (27.06.2008): 1–11. http://dx.doi.org/10.1145/1379023.1375659.


Dissertations on the topic "Batches of task graphs"

1

Toch, Lamiel. „Contributions aux techniques d’ordonnancement sur plates-formes parallèles ou distribuées“. Electronic Thesis or Diss., Besançon, 2012. http://www.theses.fr/2012BESA2045.

Abstract:
Works presented in this document tackle scheduling of parallel applications in either parallel (cluster) or distributed (computing grid) platforms. In our researches we were concentrated on either scheduling of applications modeled by a DAG, directed acyclic graph, for computing grid or scheduling of parallel programs (parallel jobs) represented by a rectangular shape whose the two dimensions are the number of requested processors and the execution time. The researches follow three main topics. The first topic concerns the scheduling of a set of instances of an application for computing grid. The second topic deals with the scheduling of parallel jobs in clusters. The third one tackles the scheduling of parallel jobs in multiprocessor machines. We brought contributions on these three topics. The first contribution under the first topic consists of the advanced experimental study of three algorithms for scheduling a set of instances of an application on a heterogeneous platform without communication costs: a list-based algorithm, a steady-state algorithm and a genetic algorithm. Moreover we integrate communications in this genetic algorithm. The second contribution under the second topic is the design of a new technique for scheduling parallel jobs in clusters: job folding which uses virtualization of processors. The third contribution deals with a new technique which comes from statistics and signal processing applied to scheduling of parallel jobs in a multiprocessor machine. Eventually we give some works that we carried out but which did not give significant results for scheduling.
2

Dechu, Satish. „Task graphs mapping on to network processors using simulated annealing /“. Available to subscribers only, 2007. http://proquest.umi.com/pqdweb?did=1453188941&sid=15&Fmt=2&clientId=1509&RQT=309&VName=PQD.

3

Negelspach, Greg L. „Grain size management in repetitive task graphs for multiprocessor computer scheduling“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA288575.

4

Chieregato, Federico. „Modelling task execution time in Directed Acyclic Graphs for efficient distributed management“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Abstract:
This thesis presents a framework that predicts the execution time of tasks in Directed Acyclic Graphs (DAGs); each task is the smallest unit of work that executes a function over a set of inputs and, in this scenario, represents a vertex in a DAG. The thesis includes an implementation for extracting profiling information from Apache Spark, as well as an evaluation of the framework on the Spark decision-support benchmark TPC-DS and on an in-house, completely different DAG runtime system for real-world DAGs involving computational quantum chemistry applications. Speeding up execution in Spark or other workflows is an important problem for many real-time applications; since it is impractical to build a predictive model that considers the actual input values of tasks, the use of surrogates such as the number of parents and the mean parent duration of a task is explored. For this reason, the solution is named PRODIGIOUS, Performance modelling of DAGs via surrogate features. Since task duration is a float value, different regression algorithms were studied, with hyperparameters tuned through GridSearchCV. The main objective of PRODIGIOUS is to understand not only whether the use of surrogates instead of actual inputs suffices to predict the execution time of tasks of the same DAG type, but also whether it is possible to predict the execution time of tasks of a different DAG type, creating a DAG-agnostic framework that could help scientists and computer engineers make their workflows more efficient. Other agnostic features chosen were the cores for each task, the RAM of the benchmark, the data access type, and the number of executors.
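A rough sketch of the surrogate-feature regression described above, using scikit-learn's GridSearchCV on synthetic data; the two feature names follow the abstract, while the data, model choice, and parameter grid are invented for illustration.

```python
# Toy surrogate-feature regression for task duration (illustration only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
n = 200
num_parents = rng.integers(0, 6, size=n)                # surrogate 1
mean_parent_duration = rng.uniform(0.1, 10.0, size=n)   # surrogate 2
# synthetic "ground truth": duration loosely depends on both surrogates
duration = 1.5 * num_parents + 0.8 * mean_parent_duration + rng.normal(0, 0.5, n)

X = np.column_stack([num_parents, mean_parent_duration])
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      param_grid={"n_estimators": [50, 100], "max_depth": [None, 5]},
                      cv=3)
search.fit(X, duration)
print(search.best_params_, round(search.best_score_, 3))
```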
5

Witt, Carl Philipp. „Predictive Resource Management for Scientific Workflows“. Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21608.

Abstract:
Scientific experiments produce data at unprecedented volumes and resolutions. For the extraction of insights from large sets of raw data, complex analysis workflows are necessary. Scientific workflows enable such data analyses at scale. To achieve scalability, most workflow management systems are designed as an additional layer on top of distributed resource managers, such as batch schedulers or distributed data processing frameworks. However, like distributed resource managers, they do not automatically determine the amount of resources required for executing individual tasks in a workflow. The status quo is that workflow management systems delegate the challenge of estimating resource usage to the user. This limits the performance and ease-of-use of scientific workflow management systems, as users often lack the time, expertise, or incentives to estimate resource usage accurately. This thesis is an investigation of how to learn and predict resource usage during workflow execution. In contrast to prior work, an integrated perspective on prediction and scheduling is taken, which introduces various challenges, such as quantifying the effects of prediction errors on system performance. The main contributions are: 1. A survey of peak memory usage prediction in batch processing environments. It provides an overview of prior machine learning approaches, commonly used features, evaluation metrics, and data sets. 2. A static workflow scheduling method that uses statistical methods to predict which scheduling decisions can be improved. 3. A feedback-based approach to scheduling and predictive resource allocation, which is extensively evaluated using simulation. The results provide insights into the desirable characteristics of scheduling heuristics and prediction models. 4. A prediction model that reduces memory wastage. The design takes into account the asymmetric costs of overestimation and underestimation, as well as follow up costs of prediction errors.
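Contribution 4 above refers to a prediction model that weighs under- and over-estimation of peak memory asymmetrically. The sketch below shows one such asymmetric penalty; the penalty factor and the numbers are invented, not values from the thesis.

```python
# Asymmetric penalty for peak-memory predictions: under-allocation (task failure,
# retry) is charged more than over-allocation (wasted memory). Illustration only.
import numpy as np

def asymmetric_memory_loss(predicted, actual, under_penalty=4.0):
    err = predicted - actual
    over = np.clip(err, 0, None)     # memory allocated but not used
    under = np.clip(-err, 0, None)   # memory missing at runtime
    return float(np.mean(over + under_penalty * under))

pred = np.array([4.0, 8.0, 2.0])     # GB allocated per task
act = np.array([3.0, 9.0, 2.5])      # GB actually used
print(asymmetric_memory_loss(pred, act))
```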
6

Kasinger, Charles D. „A periodic scheduling heuristic for mapping iterative task graphs onto distributed memory multiprocessors“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA286047.

7

Koman, Charles Brian. „A tool for efficient execution and development of repetitive task graphs on a distributed memory multiprocessor“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA305995.

8

Pop, Ruxandra. „Mapping Concurrent Applications to Multiprocessor Systems with Multithreaded Processors and Network on Chip-Based Interconnections“. Licentiate thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-64256.

Abstract:
Network on Chip (NoC) architectures provide scalable platforms for designing Systems on Chip (SoC) with large number of cores. Developing products and applications using an NoC architecture offers many challenges and opportunities. A tool which can map an application or a set of applications to a given NoC architecture will be essential. In this thesis we first survey current techniques and we present our proposals for mapping and scheduling of concurrent applications to NoCs with multithreaded processors as computational resources. NoC platforms are basically a special class of Multiprocessor Embedded Systems (MPES). Conventional MPES architectures are mostly bus-based and, thus, are exposed to potential difficulties regarding scalability and reusability. There has been a lot of research on MPES development including work on mapping and scheduling of applications. Many of these results can also be applied to NoC platforms. Mapping and scheduling are known to be computationally hard problems. A large range of exact and approximate optimization algorithms have been proposed for solving these problems. The methods include Branch-and–Bound (BB), constructive and transformative heuristics such as List Scheduling (LS), Genetic Algorithms (GA) and various types of Mathematical Programming algorithms. Concurrent applications are able to capture a typical embedded system which is multifunctional. Concurrent applications can be executed on an NoC which provides a large computational power with multiple on-chip computational resources. Improving the time performances of concurrent applications which are running on Network on Chip (NoC) architectures is mainly correlated with the ability of mapping and scheduling methodologies to exploit the Thread Level Parallelism (TLP) of concurrent applications through the available NoC parallelism. Matching the architectural parallelism to the application concurrency for obtaining good performance-cost tradeoffs is  another aspect of the problem. Multithreading is a technique for hiding long latencies of memory accesses, through the overlapped execution of several threads. Recently, Multi-Threaded Processors (MTPs) have been designed providing the architectural infrastructure to concurrently execute multiple threads at hardware level which, usually, results in a very low context switching overhead. Simultaneous Multi-Threaded Processors (SMTPs) are superscalar processor architectures which adaptively exploit the coarse grain and the fine grain parallelism of applications, by simultaneously executing instructions from several thread contexts. In this thesis we make a case for using SMTPs and MTPs as NoC resources and show that such a multiprocessor architecture provides better time performances than an NoC with solely General-purpose Processors (GP). We have developed a methodology for task mapping and scheduling to an NoC with mixed SMTP, MTP and GP resources, which aims to maximize the time performance of concurrent applications and to satisfy their soft deadlines. The developed methodology was evaluated on many configurations of NoC-based platforms with SMTP, MTP and GP resources. The experimental results demonstrate that the use of SMTPs and MTPs in NoC platforms can significantly speed-up applications.
9

Gurhem, Jérôme. „Paradigmes de programmation répartie et parallèle utilisant des graphes de tâches pour supercalculateurs post-pétascale“. Thesis, Lille, 2021. http://www.theses.fr/2021LILUI005.

Abstract:
Since the middle of the 1990s, message passing libraries are the most used technology to implement parallel and distributed applications. However, they may not be a solution efficient enough on exascale machines since scalability issues will appear due to the increase in computing resources. Task-based programming models can be used, for example, to avoid collective communications along all the resources like reductions, broadcast or gather by transforming them into multiple operations on tasks. Then, these operations can be scheduled by the scheduler to place the data and computations in a way that optimize and reduce the data communications. The main objective of this thesis is to study what must be task-based programming for scientific applications and to propose a specification of such distributed and parallel programming, by experimenting for several simplified representations of important scientific applications for TOTAL, and classical dense and sparse linear methods.During the dissertation, several programming languages and paradigms are studied. Dense linear methods to solve linear systems, sequences of sparse matrix vector product and the Kirchhoff seismic pre-stack depth migration are studied and implemented as task-based applications. A taxonomy, based on several of these languages and paradigms is proposed.Software were developed using these programming models for each simplified application. As a result of these researches, a methodology for parallel task programming is proposed, optimizing data movements, in general, and for targeted scientific applications, in particular
10

Bouguelia, Sara. „Modèles de dialogue et reconnaissance d'intentions composites dans les conversations Utilisateur-Chatbot orientées tâches“. Electronic Thesis or Diss., Lyon 1, 2023. http://www.theses.fr/2023LYO10106.

Abstract:
Dialogue Systems (or simply chatbots) are in very high demand these days. They enable the understanding of user needs (or user intents), expressed in natural language, and on fulfilling such intents by invoking the appropriate back-end APIs (Application Programming Interfaces). Chatbots are famed for their easy-to-use interface and gentle learning curve (it only requires one of humans' most innate ability, the use of natural language). The continuous improvement in Artificial Intelligence (AI), Natural Language Processing (NLP), and the countless number of devices allow performing real-world tasks (e.g., making a reservation) by using natural language-based interactions between users and a large number of software enabled services.Nonetheless, chatbot development is still in its preliminary stage, and there are several theoretical and technical challenges that need to be addressed. One of the challenges stems from the wide range of utterance variations in open-end human-chatbot interactions. Additionally, there is a vast space of software services that may be unknown at development time. Natural human conversations can be rich, potentially ambiguous, and express complex and context-dependent intents. Traditional business process and service composition modeling and orchestration techniques are limited to support such conversations because they usually assume a priori expectation of what information and applications will be accessed and how users will explore these sources and services. Limiting conversations to a process model means that we can only support a small fraction of possible conversations. While existing advances in NLP and Machine Learning (ML) techniques automate various tasks such as intent recognition, the synthesis of API calls to support a broad range of potentially complex user intents is still largely a manual, ad-hoc and costly process.This thesis project aims at advancing the fundamental understanding of cognitive services engineering. In this thesis we contribute novel abstractions and techniques focusing on the synthesis of API calls to support a broad range of potentially complex user intents. We propose reusable and extensible techniques to recognize and realize complex intents during humans-chatbots-services interactions. These abstractions and techniques seek to unlock the seamless and scalable integration of natural language-based conversations with software-enabled services

Books on the topic "Batches of task graphs"

1

Shukla, Shridhar B. Real-time execution control of task-level data-flow graphs using a compile-time approach. Monterey, Calif: Naval Postgraduate School, 1992.

2

Brooks, Colton. Ielts Writing Task 1 - Data, Charts, Graphs and Letters. Lulu Press, Inc., 2015.

3

Worthwhile IELTS Writing TASK-1: GRAPHS,TABLES,DIAGRAMS,CHARTS &FIGURES. Rana Books India, 2021.

4

Brooks, Colton. Ielts Writing Task 1 - Data, Charts, Graphs and Letters - Fill the Gap. Lulu Press, Inc., 2015.

5

HOSSAIN, Delwer. IELTS GRAPH : 200 Samples from Past Exam : IELTS ACADEMIC WRITING TASK 1: 200 Practice Test with Answer, Bar and Line Graphs, Pie Charts, Maps and Tables. Independently Published, 2021.

6

HOSSAIN, Delwer. IELTS GRAPH : 200 Samples from Past Exam : IELTS ACADEMIC WRITING TASK 1: 200 Practice Test with Answer, Bar and Line Graphs, Pie Charts, Maps and Tables. Independently Published, 2021.

7

HOSSAIN, Delwer. IELTS GRAPH : 200 Samples from Past Exam : IELTS ACADEMIC WRITING TASK 1: 200 Practice Test with Answer, Bar and Line Graphs, Pie Charts, Maps and Tables. Independently Published, 2021.

8

Consultants, Ielts Writing, und Marc Roche. IELTS Writing Masterclass 8. 5. Master IELTS Writing Academic + General Task 1 and 2, Including Graphs, Letters, Essay Writing and Grammar for IELTS Academic and General Training: IELTS Writing Originals ©. Independently Published, 2020.

9

Consultants, Ielts Writing, und Marc Roche. IELTS Writing Masterclass 8. 5. Master IELTS Writing Academic + General Task 1 and 2, Including Graphs, Letters, Essay Writing and Grammar for IELTS Academic and General Training: IELTS Writing Originals ©. Independently Published, 2021.

10

Little, Danity. How Women Executives Succeed. Greenwood Publishing Group, Inc., 1994. http://dx.doi.org/10.5040/9798400667008.

Abstract:
The significance of this study on women executives is twofold: one, the book is about women in the public sector, and two, it is written by a woman in the executive service of the government itself. The treatise is a well-documented study of seventy-eight women executives who advanced into the upper reaches of the government executive service. The work analyzes the significant experiences, individuals, developmental stages, and barriers that these women encountered. It provides constructive information for women employees, women managers, and managers of women and minorities. The introductory chapters review learning theories and models, literature, and data collected. The book then proceeds to its main theme, the experiences and lessons of SES women. Various supervisory experiences in task forces, projects, and turning around an organization are analyzed. Role models, bosses, and mentors and their impact is detailed. Successful handling of an executive job, balancing life and work, and dealing with invisible barriers are also addressed. The book concludes with 100 Steps to the Top. The original survey questionnaire, key charts, and graphs are included. This book will be beneficial to human resource professionals and for inclusion in courses in human resource management, women’s studies, and a worthwhile addition to college and university collections.

Book chapters on the topic "Batches of task graphs"

1

Diakité, Sékou, Loris Marchal, Jean-Marc Nicod und Laurent Philippe. „Steady-State for Batches of Identical Task Trees“. In Lecture Notes in Computer Science, 203–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03869-3_22.

2

Eberhart, Aaron, Cogan Shimizu, Christopher Stevens, Pascal Hitzler, Christopher W. Myers und Benji Maruyama. „A Domain Ontology for Task Instructions“. In Knowledge Graphs and Semantic Web, 1–13. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65384-2_1.

3

Brézillon, Patrick. „Task-Realization Models in Contextual Graphs“. In Modeling and Using Context, 55–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11508373_5.

4

Finta, Lucian, und Zhen Liu. „Makespan minimization of task graphs with random task running times“. In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 125–38. Providence, Rhode Island: American Mathematical Society, 1995. http://dx.doi.org/10.1090/dimacs/021/10.

5

Miranda-Jiménez, Sabino, Alexander Gelbukh und Grigori Sidorov. „Summarizing Conceptual Graphs for Automatic Summarization Task“. In Conceptual Structures for STEM Research and Education, 245–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-35786-2_18.

6

Boeres, Cristina, Aline Nascimento⋆ und Vinod E. F. Rebello. „Scheduling Arbitrary Task Graphs on LogP Machines“. In Euro-Par’99 Parallel Processing, 340–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48311-x_44.

7

Klijn, Eva L., Felix Mannhardt und Dirk Fahland. „Aggregating Event Knowledge Graphs for Task Analysis“. In Lecture Notes in Business Information Processing, 493–505. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-27815-0_36.

Abstract:
Aggregation of event data is a key operation in process mining for revealing behavioral features of processes for analysis. It has primarily been studied over sequences of events in event logs. The data model of event knowledge graphs enables new analysis questions requiring new forms of aggregation. We focus on analyzing task executions in event knowledge graphs. We show that existing aggregation operations are inadequate and propose new aggregation operations, formulated as query operators over labeled property graphs. We show on the BPIC’17 dataset that the new aggregation operations allow gaining new insights into differences in task executions, actor behavior, and work division.
8

Baskiyar, Sanjeev, und Christopher Dickinson. „Scheduling Directed A-Cyclic Task Graphs on Heterogeneous Processors Using Task Duplication“. In High Performance Computing - HiPC 2003, 259–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-24596-4_28.

9

Gelenbe, Erol. „Critical Path Length of Large Acyclic Task Graphs“. In Parallel Computing on Distributed Memory Multiprocessors, 195–203. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-642-58066-6_11.

10

Löwe, Welf, Wolf Zimmermann, Sven Dickert und Jörn Eisenbiegler. „Source Code and Task Graphs in Program Optimization“. In High-Performance Computing and Networking, 273–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-48228-8_28.


Conference papers on the topic "Batches of task graphs"

1

Zhu, Wenhao, Tianyu Wen, Guojie Song, Xiaojun Ma und Liang Wang. „Hierarchical Transformer for Scalable Graph Learning“. In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/523.

Abstract:
Graph Transformer is gaining increasing attention in the field of machine learning and has demonstrated state-of-the-art performance on benchmarks for graph representation learning. However, as current implementations of Graph Transformer primarily focus on learning representations of small-scale graphs, the quadratic complexity of the global self-attention mechanism presents a challenge for full-batch training when applied to larger graphs. Additionally, conventional sampling-based methods fail to capture necessary high-level contextual information, resulting in a significant loss of performance. In this paper, we introduce the Hierarchical Scalable Graph Transformer (HSGT) as a solution to these challenges. HSGT successfully scales the Transformer architecture to node representation learning tasks on large-scale graphs, while maintaining high performance. By utilizing graph hierarchies constructed through coarsening techniques, HSGT efficiently updates and stores multi-scale information in node embeddings at different levels. Together with sampling-based training methods, HSGT effectively captures and aggregates multi-level information on the hierarchical graph using only Transformer blocks. Empirical evaluations demonstrate that HSGT achieves state-of-the-art performance on large-scale benchmarks with graphs containing millions of nodes with high efficiency.
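To illustrate the coarsening step that hierarchical methods of this kind build on, the sketch below performs one toy coarsening level: nodes are grouped into clusters (here by a trivial modulo rule) and super-nodes are connected whenever their members share an edge; HSGT's actual hierarchy construction and attention layers are not reproduced.

```python
# One toy graph-coarsening level (illustration only; clustering rule is arbitrary).
from collections import defaultdict

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
num_clusters = 3
cluster = {u: u % num_clusters for u in range(6)}   # toy cluster assignment

members = defaultdict(list)
for u, c in cluster.items():
    members[c].append(u)

super_edges = {tuple(sorted((cluster[u], cluster[v])))
               for u, v in edges if cluster[u] != cluster[v]}

print(dict(members))    # which original nodes each super-node summarizes
print(super_edges)      # coarsened adjacency used at the next level
```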
2

Auerbach, Joshua, David F. Bacon, Rachid Guerraoui, Jesper Honig Spring und Jan Vitek. „Flexible task graphs“. In the 2008 ACM SIGPLAN-SIGBED conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1375657.1375659.

3

Yang, Zhen, Tinglin Huang, Ming Ding, Yuxiao Dong, Rex Ying, Yukuo Cen, Yangliao Geng und Jie Tang. „BatchSampler: Sampling Mini-Batches for Contrastive Learning in Vision, Language, and Graphs“. In KDD '23: The 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3580305.3599263.

4

Li, Jinqing, Xiaojun Chen, Dakui Wang und Yuwei Li. „Enhancing Label Representations with Relational Inductive Bias Constraint for Fine-Grained Entity Typing“. In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/529.

Abstract:
Fine-Grained Entity Typing (FGET) is a task that aims at classifying an entity mention into a wide range of entity label types. Recent researches improve the task performance by imposing the label-relational inductive bias based on the hierarchy of labels or label co-occurrence graph. However, they usually overlook explicit interactions between instances and labels which may limit the capability of label representations. Therefore, we propose a novel method based on a two-phase graph network for the FGET task to enhance the label representations, via imposing the relational inductive biases of instance-to-label and label-to-label. In the phase 1, instance features will be introduced into label representations to make the label representations more representative. In the phase 2, interactions of labels will capture dependency relationships among them thus make label representations more smooth. During prediction, we introduce a pseudo-label generator for the construction of the two-phase graph. The input instances differ from batch to batch so that the label representations are dynamic. Experiments on three public datasets verify the effectiveness and stability of our proposed method and achieve state-of-the-art results on their testing sets.
5

Agrawal, Kunal, Charles E. Leiserson und Jim Sukha. „Executing task graphs using work-stealing“. In 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS). IEEE, 2010. http://dx.doi.org/10.1109/ipdps.2010.5470403.

6

Long, Douglas L., und Lori A. Clarke. „Task interaction graphs for concurrency analysis“. In the 11th international conference. New York, New York, USA: ACM Press, 1989. http://dx.doi.org/10.1145/74587.74592.

7

Carpov, Sergiu, Jacques Carlier, Dritan Nace und Renaud Sirdey. „Probabilistic Parameters of Conditional Task Graphs“. In 2011 14th International Conference on Network-Based Information Systems (NBiS). IEEE, 2011. http://dx.doi.org/10.1109/nbis.2011.63.

8

Szymanek, Radoslaw, und Krzysztof Krzysztof. „Partial task assignment of task graphs under heterogeneous resource constraints“. In the 40th conference. New York, New York, USA: ACM Press, 2003. http://dx.doi.org/10.1145/775832.775895.

9

Dokulil, Jiri, und Jana Katreniakova. „Visualization of Open Community Runtime Task Graphs“. In 2017 21st International Conference on Information Visualisation (IV). IEEE, 2017. http://dx.doi.org/10.1109/iv.2017.31.

10

Sbîrlea, Dragos, Zoran Budimlić und Vivek Sarkar. „Bounded memory scheduling of dynamic task graphs“. In the 23rd international conference. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2628071.2628090.


Reports of organizations on the topic "Batches of task graphs"

1

Mbabzi, Kikundwa Emma. Standardisation of Staff Training to Increase Efficiency. Purdue University, November 2021. http://dx.doi.org/10.5703/1288284317427.

Abstract:
In any industry or organization, personnel training is emphasized with reference to National Regulatory Authorities (NRAs) guidelines and other globally accepted guidelines. In spite of many refresher training programs, the pharmaceutical industry still faces significant variations in individual/ team efficiency and productivity. Individuals/teams given the same task, SOPs, environment and materials continue to produce significantly different results reflecting the possibility of operating on different sets of theoretical and practical information, which may stem from differing trainer, training program or training method. This study focused on using a standardized manual for training two teams A and B involved in vaccine production, as a tool to increase employee efficiency, productivity and quality, at a Livestock vaccine manufacturing company, with an objective to shorten the supply chain of vaccines (starting with Newcastle disease vaccine I-2 strain) to improve product quality, availability and affordability up to rural household level and back yard farmers. Baseline data was collected from four pre-training production batches and compared with data collected from three post-training production batches. The results showed that a tailored standardized training was effective in achieving the same level of efficiency, regardless of how late or soon the member joined the facility, and who conducted the training. The process of training staff, using a company tailored standardized manual, was shown to be successful within this company’s set up and could potentially be applied to other industries that are struggling with implementation of uniform information to their staff.
2

Küsters, Ralf, und Ralf Molitor. Computing Most Specific Concepts in Description Logics with Existential Restrictions. Aachen University of Technology, 2000. http://dx.doi.org/10.25368/2022.108.

Abstract:
Computing the most specific concept (msc) is an inference task that can be used to support the 'bottom-up' construction of knowledge bases for KR systems based on description logics. For description logics that allow for number restrictions or existential restrictions, the msc need not exist, though. Previous work on this problem has concentrated on description logics that allow for universal value restrictions and number restrictions, but not for existential restrictions. The main new contribution of this paper is the treatment of description logics with existential restrictions. More precisely, we show that, for the description logic ALE (which allows for conjunction, universal value restrictions, existential restrictions, negation of atomic concepts) the msc of an ABox-individual only exists in case of acyclic ABoxes. For cyclic ABoxes, we show how to compute an approximation of the msc. Our approach for computing the (approximation of the) msc is based on representing concept descriptions by certain trees and ABoxes by certain graphs, and then characterizing instance relationships by homomorphisms from trees into graphs. The msc/approximation operation then mainly corresponds to unraveling the graphs into trees and translating them back into concept descriptions.
3

Kriegel, Francesco. Terminological knowledge acquisition in probabilistic description logic. Technische Universität Dresden, 2018. http://dx.doi.org/10.25368/2022.239.

Abstract:
For a probabilistic extension of the description logic EL⊥, we consider the task of automatic acquisition of terminological knowledge from a given probabilistic interpretation. Basically, such a probabilistic interpretation is a family of directed graphs the vertices and edges of which are labeled, and where a discrete probability measure on this graph family is present. The goal is to derive so-called concept inclusions which are expressible in the considered probabilistic description logic and which hold true in the given probabilistic interpretation. A procedure for an appropriate axiomatization of such graph families is proposed and its soundness and completeness is justified.